Dataset columns: markdown, code, output, license, path, repo_name.
Direct Associations Model
def learning_function(stimuli_shown, Λ, λ, training_or_test, prev_V, prev_Vbar, stimulus_type, α):
    Λbar = T.zeros_like(Λ)
    Λbar = T.inc_subtensor(Λbar[0,:], (prev_V[2,:] > 0) * (1 - Λ[0, :]))  # Dcs
    Λbar = T.inc_subtensor(Λbar[1,:], (prev_V[1,:] > 0) * (1 - Λ[1, :]))  # Ecs
    Λbar = T.inc_subtensor(Λbar[2,:], (prev_V[2,:] > 0) * (1 - Λ[2, :]))  # F
    Λbar = T.inc_subtensor(Λbar[3,:], (prev_V[2,:] > 0) * (1 - Λ[3, :]))  # S
    Λbar = T.inc_subtensor(Λbar[4,:], (prev_V[4,:] > 0) * (1 - Λ[4, :]))  # G
    Λbar = T.inc_subtensor(Λbar[5,:], (prev_V[4,:] > 0) * (1 - Λ[5, :]))  # H
    Λbar = T.inc_subtensor(Λbar[6,:], (prev_V[6,:] > 0) * (1 - Λ[6, :]))  # Mcs
    Λbar = T.inc_subtensor(Λbar[7,:], (prev_V[7,:] > 0) * (1 - Λ[7, :]))  # N
    Λbar = T.inc_subtensor(Λbar[8,:], (prev_V[7,:] > 0) * (1 - Λ[8, :]))  # Ucs

    # λbar scaling
    λbar = T.zeros_like(Λbar)
    λbar = T.inc_subtensor(λbar[0,:], prev_V[2,:])  # Dcs
    λbar = T.inc_subtensor(λbar[1,:], prev_V[1,:])  # Ecs
    λbar = T.inc_subtensor(λbar[2,:], prev_V[2,:])  # F
    λbar = T.inc_subtensor(λbar[3,:], prev_V[2,:])  # S
    λbar = T.inc_subtensor(λbar[4,:], prev_V[4,:])  # G
    λbar = T.inc_subtensor(λbar[5,:], prev_V[4,:])  # H
    λbar = T.inc_subtensor(λbar[6,:], prev_V[6,:])  # Mcs
    λbar = T.inc_subtensor(λbar[7,:], prev_V[7,:])  # N
    λbar = T.inc_subtensor(λbar[8,:], prev_V[7,:])  # Ucs

    # Prediction errors
    pe_V = λ - prev_V
    pe_Vbar = λbar - prev_Vbar

    ΔV = Λ * α * pe_V
    ΔVbar = Λbar * α * pe_Vbar

    # Only update stimuli that were shown
    ΔV = ΔV * stimuli_shown
    ΔVbar = ΔVbar * stimuli_shown

    # Update V, Vbar
    V = T.zeros_like(prev_V)
    Vbar = T.zeros_like(prev_Vbar)

    # Only update V and Vbar for CSs
    V = T.inc_subtensor(V[T.eq(stimulus_type, 1)],
                        prev_V[T.eq(stimulus_type, 1)] + ΔV[T.eq(stimulus_type, 1)] * training_or_test)
    Vbar = T.inc_subtensor(Vbar[T.eq(stimulus_type, 1)],
                           prev_Vbar[T.eq(stimulus_type, 1)] + ΔVbar[T.eq(stimulus_type, 1)] * training_or_test)

    return V, Vbar
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Generate Simulated Data with Model
n_stim = 9
n_subjects = len(data['ID'].unique())

# Initial values
R = np.zeros((n_stim, n_subjects))
overall_R = np.zeros((1, n_subjects))
v_excitatory = np.zeros((n_stim, n_subjects))
v_inhibitory = np.zeros((n_stim, n_subjects))

# Randomized parameter values - use this if you want to compare simulated vs recovered
# parameters, and comment out the "α = 1" code in the subsequent lines
gen_dist = pm.Beta.dist(2, 2, shape=n_subjects)
α_subject_sim = gen_dist.random()

# α = 1
# gen_dist = pm.Beta.dist(2, 2, shape=n_subjects)
# α_subject_sim = np.ones(n_subjects)

# Test vs training trial
training_or_test = data.pivot(index='trialseq', values='Test', columns='ID').values[:, np.newaxis, :].astype(float)

# US values
small_lambda = data.pivot(index='trialseq', values='US', columns='ID').values[:, np.newaxis, :].repeat(n_stim, axis=1).astype(float)

stim_data = []
for sub in data['ID'].unique():
    stim_data.append(data.loc[data['ID'] == sub, ['Dcs', 'Ecs', 'F', 'S', 'G', 'H', 'Mcs', 'N', 'Ucs']].values)
stimuli_shown = np.dstack(stim_data)

big_lambda = small_lambda

# Add imaginary -1th trial
big_lambda = np.vstack([np.zeros((1, n_stim, n_subjects)), big_lambda[:-1, ...]]).astype(float)  # Add one trial of zeros to the start, remove the last trial
small_lambda = big_lambda
stimuli_shown = np.vstack([np.zeros((1, n_stim, n_subjects)), stimuli_shown])  # Add one trial of zeros to the start, DO NOT remove the last trial - this is needed for prediction

stimulus_type = np.ones(n_stim)

# Convert task outcomes to tensors
big_lambda = T.as_tensor_variable(big_lambda.astype(float))
small_lambda = T.as_tensor_variable(small_lambda.astype(float))
stimuli_shown = T.as_tensor_variable(stimuli_shown)
training_or_test = T.as_tensor_variable(training_or_test)

stimuli_shown_sim = stimuli_shown.copy()
big_lambda_sim = big_lambda.copy()
small_lambda_sim = small_lambda.copy()
training_or_test_sim = training_or_test.copy()
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Run Fake Data Simulation
# Run the loop
output, updates = scan(fn=learning_function,
                       sequences=[{'input': stimuli_shown_sim[:-1, ...]},
                                  {'input': big_lambda_sim},
                                  {'input': small_lambda_sim},
                                  {'input': training_or_test}],
                       outputs_info=[v_excitatory, v_inhibitory],
                       non_sequences=[stimulus_type, α_subject_sim])

# Get model output
V_out, Vbar_out = [i.eval() for i in output]

estimated_overall_R = (V_out * stimuli_shown_sim[1:, ...]).sum(axis=1) - (Vbar_out * stimuli_shown_sim[1:, ...]).sum(axis=1)
overall_R_sim = estimated_overall_R.eval()
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Check parameter recovery
n_subjects = len(data['ID'].unique())

# Initial values
R = np.zeros((n_stim, n_subjects))

# US values
small_lambda = data.pivot(index='trialseq', values='US', columns='ID').values[:, np.newaxis, :].repeat(n_stim, axis=1).astype(float)

stim_data = []
for sub in data['ID'].unique():
    stim_data.append(data.loc[data['ID'] == sub, ['Dcs', 'Ecs', 'F', 'S', 'G', 'H', 'Mcs', 'N', 'Ucs']].values)
stimuli_shown = np.dstack(stim_data)

big_lambda = small_lambda

# Add imaginary -1th trial
big_lambda = np.vstack([np.zeros((1, n_stim, n_subjects)), big_lambda[:-1, ...]]).astype(float)      # Add one trial of zeros to the start, remove the last trial
small_lambda = np.vstack([np.zeros((1, n_stim, n_subjects)), small_lambda[:-1, ...]]).astype(float)  # Add one trial of zeros to the start, remove the last trial
stimuli_shown = np.vstack([np.zeros((1, n_stim, n_subjects)), stimuli_shown])                        # Add one trial of zeros to the start, DO NOT remove the last trial - this is needed for prediction

# Convert task outcomes to tensors
big_lambda = T.as_tensor_variable(big_lambda.astype(float))
small_lambda = T.as_tensor_variable(small_lambda.astype(float))
stimuli_shown = T.as_tensor_variable(stimuli_shown)

stimulus_type = np.ones(n_stim)

with pm.Model() as model:

    # Learning rate lies between 0 and 1
    α_mean = pm.Normal('α_mean', 0.5, 10)
    α_sd = pm.HalfCauchy('α_sd', 10)
    BoundedNormal = pm.Bound(pm.Normal, lower=0, upper=1)
    α_subject = BoundedNormal('α', mu=α_mean, sd=α_sd, shape=(n_subjects,))

    # Run the loop
    output, updates = scan(fn=learning_function,
                           sequences=[{'input': stimuli_shown[:-1, ...]},
                                      {'input': big_lambda},
                                      {'input': small_lambda},
                                      {'input': training_or_test}],
                           outputs_info=[v_excitatory, v_inhibitory],
                           non_sequences=[stimulus_type, α_subject])

    # Get model output
    V, Vbar = output

    # Single R value
    estimated_overall_R = ((V * stimuli_shown[1:, ...]).sum(axis=1) - (Vbar * stimuli_shown[1:, ...]).sum(axis=1))

    # This allows us to output the estimated R
    estimated_overall_R = pm.Deterministic('estimated_overall_R', estimated_overall_R)
    V = pm.Deterministic('estimated_V', V)
    Vbar = pm.Deterministic('estimated_Vbar', Vbar)

    # Normal likelihood of the simulated responding
    sigma = pm.HalfCauchy('sigma', 0.5)
    likelihood = pm.Normal('likelihood', mu=estimated_overall_R, sigma=sigma,
                           observed=pd.DataFrame(overall_R_sim.squeeze()))
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Fit the Model: Variational Inference
from pymc3.variational.callbacks import CheckParametersConvergence

with model:
    approx = pm.fit(method='advi', n=40000, callbacks=[CheckParametersConvergence()])
    trace = approx.sample(1000)

alpha_output = pm.summary(trace, kind='stats',
                          varnames=[i for i in model.named_vars
                                    if 'α' in i and not i in model.deterministics
                                    and not 'log' in i and not 'interval' in i])

recovered_data_var = {'Simulated_α': α_subject_sim, 'Recovered_α': trace['α'].mean(axis=0)}
recovered_data_var = pd.DataFrame(recovered_data_var)
recovered_data_var.to_csv(os.path.join('../output/', r'2nd POS - Direct Associations Simulated vs Recovered.csv'))

f, ax = plt.subplots(1, 1, sharex=True, sharey=True, figsize=(3, 2.5))
f.suptitle('Simulated vs Recovered α Parameters', y=1.02, fontsize=16)
f.text(.5, -.02, 'Simulated α', va='center', ha='center', fontsize=16)
f.text(-.02, .5, 'Recovered α', va='center', ha='center', fontsize=16, rotation=90)

sns.regplot(α_subject_sim, trace['α'].mean(axis=0), label='α_subject', color='black')
ax.set_title('Direct Associations')
plt.setp(ax, xticks=[0, .2, .4, .6, .8, 1], yticks=[0, .2, .4, .6, .8, 1])
plt.tight_layout()
plt.savefig(os.path.join('../output/', r'2nd POS - Direct Associations Simulated vs Recovered.svg'), bbox_inches='tight')
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Fit the Model to Real Data
n_subjects = len(data['ID'].unique())

# Initial values
R = np.zeros((n_stim, n_subjects))  # Value estimate
overall_R = np.zeros((1, n_subjects))
v_excitatory = np.zeros((n_stim, n_subjects))
v_inhibitory = np.zeros((n_stim, n_subjects))

# US values
small_lambda = data.pivot(index='trialseq', values='US', columns='ID').values[:, np.newaxis, :].repeat(n_stim, axis=1)

stim_data = []
for sub in data['ID'].unique():
    stim_data.append(data.loc[data['ID'] == sub, ['Dcs', 'Ecs', 'F', 'S', 'G', 'H', 'Mcs', 'N', 'Ucs']].values)
stimuli_shown = np.dstack(stim_data)

big_lambda = small_lambda

# Add imaginary -1th trial
big_lambda = np.vstack([np.zeros((1, n_stim, n_subjects)), big_lambda[:-1, ...]])      # Add one trial of zeros to the start, remove the last trial
small_lambda = np.vstack([np.zeros((1, n_stim, n_subjects)), small_lambda[:-1, ...]])  # Add one trial of zeros to the start, remove the last trial
stimuli_shown = np.vstack([np.zeros((1, n_stim, n_subjects)), stimuli_shown])          # Add one trial of zeros to the start, DO NOT remove the last trial - this is needed for prediction

stimulus_type = np.ones(n_stim)

# Convert task outcomes to tensors
big_lambda = T.as_tensor_variable(big_lambda)
small_lambda = T.as_tensor_variable(small_lambda)
stimuli_shown = T.as_tensor_variable(stimuli_shown)

with pm.Model() as model:

    # Learning rate lies between 0 and 1, so we use a bounded normal distribution
    α_mean = pm.Normal('α_mean', 0.5, 10)
    α_sd = pm.HalfCauchy('α_sd', 10)
    BoundedNormal = pm.Bound(pm.Normal, lower=0, upper=1)
    α_subject = BoundedNormal('α', mu=α_mean, sd=α_sd, shape=(n_subjects,))

    # Run the loop
    output, updates = scan(fn=learning_function,
                           sequences=[{'input': stimuli_shown[:-1, ...]},
                                      {'input': big_lambda},
                                      {'input': small_lambda},
                                      {'input': training_or_test}],
                           outputs_info=[v_excitatory, v_inhibitory],
                           non_sequences=[stimulus_type, α_subject])

    # Get model output
    V, Vbar = output

    # Single R value
    estimated_overall_R = ((V * stimuli_shown[1:, ...]).sum(axis=1) - (Vbar * stimuli_shown[1:, ...]).sum(axis=1))

    # This allows us to output the estimated R
    estimated_overall_R = pm.Deterministic('estimated_overall_R', estimated_overall_R)
    V = pm.Deterministic('estimated_V', V)
    Vbar = pm.Deterministic('estimated_Vbar', Vbar)

    # Normal likelihood of the observed responding
    sigma = pm.HalfCauchy('sigma', 0.5)
    likelihood = pm.Normal('likelihood', mu=estimated_overall_R, sigma=sigma,
                           observed=pd.DataFrame(observed_R.squeeze()))
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Variational Inference
from pymc3.variational.callbacks import CheckParametersConvergence

with model:
    approx = pm.fit(method='advi', n=40000, callbacks=[CheckParametersConvergence()])
    trace = approx.sample(1000)

alpha_output = pm.summary(trace, kind='stats',
                          varnames=[i for i in model.named_vars
                                    if 'α' in i and not i in model.deterministics
                                    and not 'log' in i and not 'interval' in i])
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Model Output
overall_R_mean = trace['estimated_overall_R'].mean(axis=0)
overall_R_sd = trace['estimated_overall_R'].std(axis=0)

sub_ids = data['ID'].unique()
subs = [np.where(data['ID'].unique() == sub)[0][0] for sub in sub_ids]

waic_output = pm.waic(trace)
waic_output

alpha_output.to_csv(os.path.join('../output/', r'2nd POS - Direct Associations Alpha Output.csv'))
waic_output.to_csv(os.path.join('../output/', r'2nd POS - Direct Associations WAIC Output.csv'))

f, ax = plt.subplots(23, 3, figsize=(36, 48), dpi=100)

overall_R_mean = trace['estimated_overall_R'].mean(axis=0)
overall_R_sd = trace['estimated_overall_R'].std(axis=0)

sub_ids = data['ID'].unique()
subs = [np.where(data['ID'].unique() == sub)[0][0] for sub in sub_ids]

for n, sub in enumerate(subs):
    ax[n % 23, int(n / 23)].fill_between(range(overall_R_mean.shape[0]),
                                         overall_R_mean[:, sub] - overall_R_sd[:, sub],
                                         overall_R_mean[:, sub] + overall_R_sd[:, sub], alpha=0.3)
    ax[n % 23, int(n / 23)].plot(overall_R_mean[:, sub])
    ax[n % 23, int(n / 23)].plot(observed_R.squeeze()[:, sub], color='orange', linestyle='-')               # participant's real data
    ax[n % 23, int(n / 23)].plot(overall_R_sim.squeeze()[:, sub], color='silver', linestyle=':', alpha=.7)  # Alpha = 1; this is the correct answer if a person learned perfectly
    if n == 0:
        ax[n % 23, int(n / 23)].set_ylabel('Mean (+/-SD) overall R')
    ax[n % 23, int(n / 23)].set_ylabel('Responding (R)')
    ax[n % 23, int(n / 23)].set_xlabel('Trials')
    ax[n % 23, int(n / 23)].set_title('Sub {0}'.format(sub_ids[n]))

plt.tight_layout()
plt.savefig(os.path.join('../output/', r'2nd POS - Direct Associations Individual Real and Estimated Responding.svg'), bbox_inches='tight')

%load_ext watermark
%watermark -v -p pytest,jupyterlab,numpy,pandas,theano,pymc3
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Experiment 02: Deformations Experiments ETH-05

In this notebook, we are using the CLUST Dataset. The sequence used for this notebook is ETH-05.zip.
import sys
import random
import os
sys.path.append('../src')

import warnings
warnings.filterwarnings("ignore")

from PIL import Image
from utils.compute_metrics import get_metrics, get_majority_vote, log_test_metrics
from utils.split import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.svm import SVC
from sklearn.model_selection import GroupKFold
from tqdm import tqdm
from pprint import pprint
import torch
from itertools import product
import pickle
import pandas as pd
import numpy as np
import mlflow
import matplotlib.pyplot as plt
# from kymatio.numpy import Scattering2D
import torch
from tqdm import tqdm
from kymatio.torch import Scattering2D
_____no_output_____
RSA-MD
notebooks/breathing_notebooks/1.1_deformation_experiment_scattering_ETH-05.ipynb
sgaut023/Chronic-Liver-Classification
1. Visualize Sequence of US

We are visualizing the first images from the sequence ETH-01-1 that contains 3652 US images.
directory = os.listdir('../data/02_interim/Data5')
directory.sort()

# settings
h, w = 15, 10        # for raster image
nrows, ncols = 3, 4  # array of sub-plots
figsize = [15, 8]    # figure size, inches

# prep (x,y) for extra plotting on selected sub-plots
xs = np.linspace(0, 2*np.pi, 60)  # from 0 to 2pi
ys = np.abs(np.sin(xs))           # absolute of sine

# create figure (fig), and array of axes (ax)
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=figsize)
imgs = directory[0:100]
count = 0

# plot simple raster image on each sub-plot
for i, axi in enumerate(ax.flat):
    # i runs from 0 to (nrows*ncols-1)
    # axi is equivalent with ax[rowid][colid]
    img = plt.imread("../data/02_interim/Data5/" + imgs[i])
    axi.imshow(img, cmap='gray')
    axi.axis('off')
    # get indices of row/column
    # write row/col indices as axes' title for identification
    # axi.set_title(df_labels['Finding Labels'][row[count]], size=20)
    count = count + 1

plt.tight_layout(True)
plt.savefig('samples_xray')
plt.show()
_____no_output_____
RSA-MD
notebooks/breathing_notebooks/1.1_deformation_experiment_scattering_ETH-05.ipynb
sgaut023/Chronic-Liver-Classification
2. Create Dataset
%%time
ll_imgstemp = [plt.imread("../data/02_interim/Data5/" + dir) for dir in directory[:5]]

%%time
ll_imgs = [np.array(Image.open("../data/02_interim/Data5/" + dir).resize(size=(98, 114)), dtype='float32') for dir in directory]

%%time
ll_imgs2 = [img.reshape(1, img.shape[0], img.shape[1]) for img in ll_imgs]

# dataset = pd.DataFrame([torch.tensor(ll_imgs).view(1,M,N).type(torch.float32)], columns='img')
dataset = pd.DataFrame({'img': ll_imgs2}).reset_index().rename(columns={'index': 'order'})
# dataset = pd.DataFrame({'img':ll_imgs}).reset_index().rename(columns={'index':'order'})
dataset
_____no_output_____
RSA-MD
notebooks/breathing_notebooks/1.1_deformation_experiment_scattering_ETH-05.ipynb
sgaut023/Chronic-Liver-Classification
3. Extract Scattering Features
M, N = dataset['img'].iloc[0].shape[1], dataset['img'].iloc[0].shape[2]
print(M, N)

# Set the parameters of the scattering transform.
J = 3

# Generate a sample signal.
scattering = Scattering2D(J, (M, N))

data = np.concatenate(dataset['img'], axis=0)
data = torch.from_numpy(data)

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
if use_cuda:
    scattering = scattering.cuda()
    data = data.to(device)

init = 0
final = 0
count = 0
first_loop = True
for i in tqdm(range(0, len(data), 13)):
    init = i
    final = i + 11
    if first_loop:
        scattering_features = scattering(data[init:final])
        first_loop = False
        torch.cuda.empty_cache()
    else:
        scattering_features = torch.cat((scattering_features, scattering(data[init:final])))
        torch.cuda.empty_cache()
    # break

# save scattering features
# with open('../data/03_features/scattering_features_deformation.pickle', 'wb') as handle:
#     pickle.dump(scattering_features, handle, protocol=pickle.HIGHEST_PROTOCOL)

# save scattering features
with open('../data/03_features/scattering_features_deformation5.pickle', 'wb') as handle:
    pickle.dump(scattering_features, handle, protocol=pickle.HIGHEST_PROTOCOL)

# save dataset
with open('../data/03_features/dataset_deformation5.pickle', 'wb') as handle:
    pickle.dump(dataset, handle, protocol=pickle.HIGHEST_PROTOCOL)
_____no_output_____
RSA-MD
notebooks/breathing_notebooks/1.1_deformation_experiment_scattering_ETH-05.ipynb
sgaut023/Chronic-Liver-Classification
4. Extract PCA Components
with open('../data/03_features/scattering_features_deformation5.pickle', 'rb') as handle:
    scattering_features = pickle.load(handle)
with open('../data/03_features/dataset_deformation5.pickle', 'rb') as handle:
    dataset = pickle.load(handle)

sc_features = scattering_features.view(scattering_features.shape[0],
                                       scattering_features.shape[1] * scattering_features.shape[2] * scattering_features.shape[3])
X = sc_features.cpu().numpy()

# standardize
scaler = StandardScaler()
X = scaler.fit_transform(X)

pca = PCA(n_components=50)
X = pca.fit_transform(X)

plt.plot(np.insert(pca.explained_variance_ratio_.cumsum(), 0, 0), marker='o')
plt.xlabel('Number of components')
plt.ylabel('Cumulative explained variance')
plt.show()
print(pca.explained_variance_ratio_.cumsum())

df = pd.DataFrame(X)
df['order'] = dataset['order']
# df.corr()

import seaborn as sns; sns.set_theme()
plt.figure(figsize=(10, 10))
vec1 = df.corr()['order'].values
vec2 = vec1.reshape(vec1.shape[0], 1)
sns.heatmap(vec2)
plt.show()

def visualize_corr_pca_order(pca_c, df):
    plt.figure(figsize=(16, 8))
    x = df['order']
    y = df[pca_c]
    plt.scatter(x, y)
    m, b = np.polyfit(x, y, 1)
    plt.plot(x, m*x + b, color='red')
    plt.ylabel('PCA Component ' + str(pca_c+1))
    plt.xlabel('Frame Order')
    plt.show()

visualize_corr_pca_order(3, df)
print('Correlation between order and Pca component 2:', df.corr()['order'][1])

def visualize_sub_plot(pca_c, df, x_num=3, y_num=3):
    fig, axs = plt.subplots(x_num, y_num, figsize=(15, 10))
    size = len(df)
    plot_num = x_num * y_num
    frame = int(size/plot_num)
    start = 0
    for i in range(x_num):
        for j in range(y_num):
            final = start + frame
            x = df['order'].iloc[start:final]
            y = df[pca_c].iloc[start:final]
            m, b = np.polyfit(x, y, 1)
            axs[i, j].set_ylabel('PCA Component ' + str(pca_c+1))
            axs[i, j].set_xlabel('Frame Order')
            axs[i, j].plot(x, m*x + b, color='red')
            axs[i, j].scatter(x, y)
            start = start + frame
    plt.show()

visualize_sub_plot(3, df, x_num=3, y_num=3)
_____no_output_____
RSA-MD
notebooks/breathing_notebooks/1.1_deformation_experiment_scattering_ETH-05.ipynb
sgaut023/Chronic-Liver-Classification
5. Isometric Mapping Correlation with Order
with open('../data/03_features/scattering_features_deformation5.pickle', 'rb') as handle:
    scattering_features = pickle.load(handle)
with open('../data/03_features/dataset_deformation5.pickle', 'rb') as handle:
    dataset = pickle.load(handle)

sc_features = scattering_features.view(scattering_features.shape[0],
                                       scattering_features.shape[1] * scattering_features.shape[2] * scattering_features.shape[3])
X = sc_features.cpu().numpy()

# standardize
scaler = StandardScaler()
X = scaler.fit_transform(X)

from sklearn.manifold import Isomap

embedding = Isomap(n_components=2)
X_transformed = embedding.fit_transform(X[0:500])
df = pd.DataFrame(X_transformed)
df['order'] = dataset['order']
df.corr()

from sklearn.manifold import Isomap

def visualize_sub_plot_iso(pca_c, x_num=3, y_num=3):
    fig, axs = plt.subplots(x_num, y_num, figsize=(15, 13))
    size = len(sc_features)
    plot_num = x_num * y_num
    frame = int(size/plot_num)
    start = 0
    for i in tqdm(range(x_num)):
        for j in tqdm(range(y_num)):
            final = start + frame
            embedding = Isomap(n_components=2)
            X_transformed = embedding.fit_transform(X[start:final])
            df = pd.DataFrame(X_transformed)
            df['order'] = dataset['order'].iloc[start:final].values
            x = df['order']
            y = df[pca_c]
            start = start + frame
            # m, b = np.polyfit(x, y, 1)
            axs[i, j].set_ylabel('Iso Map Dimension ' + str(pca_c+1))
            axs[i, j].set_xlabel('Frame Order')
            # axs[i, j].plot(x, m*x + b, color='red')
            axs[i, j].scatter(x, y)
    plt.show()

visualize_sub_plot_iso(0, x_num=3, y_num=3)
# print('Correlation between order and Pca component 2:', df.corr()['order'][1])
0%| | 0/3 [00:00<?, ?it/s] 0%| | 0/3 [00:00<?, ?it/s] 33%|███▎ | 1/3 [00:11<00:23, 11.70s/it] 67%|██████▋ | 2/3 [00:23<00:11, 11.71s/it] 100%|██████████| 3/3 [00:35<00:00, 11.71s/it] 33%|███▎ | 1/3 [00:35<01:10, 35.12s/it] 0%| | 0/3 [00:00<?, ?it/s] 33%|███▎ | 1/3 [00:11<00:23, 11.56s/it] 67%|██████▋ | 2/3 [00:23<00:11, 11.58s/it] 100%|██████████| 3/3 [00:35<00:00, 11.82s/it] 67%|██████▋ | 2/3 [01:10<00:35, 35.22s/it] 0%| | 0/3 [00:00<?, ?it/s] 33%|███▎ | 1/3 [00:11<00:23, 11.59s/it] 67%|██████▋ | 2/3 [00:23<00:11, 11.62s/it] 100%|██████████| 3/3 [00:35<00:00, 11.67s/it] 100%|██████████| 3/3 [01:45<00:00, 35.20s/it]
RSA-MD
notebooks/breathing_notebooks/1.1_deformation_experiment_scattering_ETH-05.ipynb
sgaut023/Chronic-Liver-Classification
Regression
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from collections import OrderedDict
import time
from sklearn.metrics import mean_squared_error, roc_auc_score, mean_absolute_error, log_loss
import sys

from gammli import GAMMLI
from gammli.dataReader import data_initialize
from gammli.utils import local_visualize
from gammli.utils import global_visualize_density
from gammli.utils import feature_importance_visualize
from gammli.utils import plot_trajectory
from gammli.utils import plot_regularization
import tensorflow as tf

random_state = 0
data = pd.read_csv('data/simulation/sim_0.9_new.csv')

task_type = "Regression"

meta_info = OrderedDict()
meta_info['x1'] = {'type': 'continues', 'source': 'user'}
meta_info['x2'] = {'type': 'continues', 'source': 'user'}
meta_info['x3'] = {'type': 'continues', 'source': 'user'}
meta_info['x4'] = {'type': 'continues', 'source': 'user'}
meta_info['x5'] = {'type': 'continues', 'source': 'user'}
meta_info['z1'] = {'type': 'continues', 'source': 'item'}
meta_info['z2'] = {'type': 'continues', 'source': 'item'}
meta_info['z3'] = {'type': 'continues', 'source': 'item'}
meta_info['z4'] = {'type': 'continues', 'source': 'item'}
meta_info['z5'] = {'type': 'continues', 'source': 'item'}
meta_info['user_id'] = {"type": "id", 'source': 'user'}
meta_info['item_id'] = {"type": "id", 'source': 'item'}
meta_info['target'] = {"type": "target", 'source': ''}

random_state = 0
train, test = train_test_split(data, test_size=0.2, random_state=0)
tr_x, tr_Xi, tr_y, tr_idx, te_x, te_Xi, te_y, val_x, val_Xi, val_y, val_idx, meta_info, model_info, sy, sy_t = data_initialize(train, test, meta_info, task_type, 'warm', random_state, True)

model = GAMMLI(wc='warm', model_info=model_info, meta_info=meta_info,
               subnet_arch=[20, 10], interact_arch=[20, 10], activation_func=tf.tanh,
               batch_size=min(500, int(0.2*tr_x.shape[0])), lr_bp=0.001, auto_tune=False,
               interaction_epochs=1000, main_effect_epochs=1000, tuning_epochs=200,
               loss_threshold_main=0.01, loss_threshold_inter=0.1,
               verbose=True, early_stop_thres=20, interact_num=10,
               n_power_iterations=5, n_oversamples=10, u_group_num=10, i_group_num=10,
               reg_clarity=10, lambda_=5, mf_training_iters=200, change_mode=False,
               convergence_threshold=0.0001, max_rank=3, interaction_restrict='intra',
               si_approach='als')
model.fit(tr_x, val_x, tr_y, val_y, tr_Xi, val_Xi, tr_idx, val_idx)

simu_dir = 'result'
data_dict_logs = model.final_gam_model.summary_logs(save_dict=False)
data_dict_logs.update({"err_train_mf": model.final_mf_model.mf_mae,
                       "err_val_mf": model.final_mf_model.mf_valmae})

plot_trajectory(data_dict_logs, folder=simu_dir, name="s1_traj_plot", log_scale=True, save_png=False, save_eps=True)
plot_regularization(data_dict_logs, folder=simu_dir, name="s1_regu_plot", log_scale=True, save_png=False, save_eps=False)
global_visualize_density(data_dict, save_png=True, folder=simu_dir, name='s1_global')
_____no_output_____
MIT
examples/simulation_demo.ipynb
SelfExplainML/GAMMLI
Image Captioning with RNNs

In this exercise you will implement a vanilla recurrent neural network and use it to train a model that can generate novel captions for images.

Install h5py

The COCO dataset we will be using is stored in HDF5 format. To load HDF5 files, we will need to install the `h5py` Python package. From the command line, run: `pip install h5py`. If you receive a permissions error, you may need to run the command as root: ```sudo pip install h5py```. You can also run commands directly from the Jupyter notebook by prefixing the command with the "!" character:
!pip install h5py

# As usual, a bit of setup
import time, os, json
import numpy as np
import matplotlib.pyplot as plt

from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.rnn_layers import *
from cs231n.captioning_solver import CaptioningSolver
from cs231n.classifiers.rnn import CaptioningRNN
from cs231n.coco_utils import load_coco_data, sample_coco_minibatch, decode_captions
from cs231n.image_utils import image_from_url

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Microsoft COCO

For this exercise we will use the 2014 release of the [Microsoft COCO dataset](http://mscoco.org/), which has become the standard testbed for image captioning. The dataset consists of 80,000 training images and 40,000 validation images, each annotated with 5 captions written by workers on Amazon Mechanical Turk.

**You should have already downloaded the data by changing to the `cs231n/datasets` directory and running the script `get_assignment3_data.sh`. If you haven't yet done so, run that script now. Warning: the COCO data download is ~1GB.**

We have preprocessed the data and extracted features for you already. For all images we have extracted features from the fc7 layer of the VGG-16 network pretrained on ImageNet; these features are stored in the files `train2014_vgg16_fc7.h5` and `val2014_vgg16_fc7.h5` respectively. To cut down on processing time and memory requirements, we have reduced the dimensionality of the features from 4096 to 512; these features can be found in the files `train2014_vgg16_fc7_pca.h5` and `val2014_vgg16_fc7_pca.h5`.

The raw images take up a lot of space (nearly 20GB) so we have not included them in the download. However all images are taken from Flickr, and URLs of the training and validation images are stored in the files `train2014_urls.txt` and `val2014_urls.txt` respectively. This allows you to download images on the fly for visualization. Since images are downloaded on-the-fly, **you must be connected to the internet to view images**.

Dealing with strings is inefficient, so we will work with an encoded version of the captions. Each word is assigned an integer ID, allowing us to represent a caption by a sequence of integers. The mapping between integer IDs and words is in the file `coco2014_vocab.json`, and you can use the function `decode_captions` from the file `cs231n/coco_utils.py` to convert numpy arrays of integer IDs back into strings.

There are a couple special tokens that we add to the vocabulary. We prepend a special `<START>` token and append an `<END>` token to the beginning and end of each caption respectively. Rare words are replaced with a special `<UNK>` token (for "unknown"). In addition, since we want to train with minibatches containing captions of different lengths, we pad short captions with a special `<NULL>` token after the `<END>` token and don't compute loss or gradient for `<NULL>` tokens. Since they are a bit of a pain, we have taken care of all implementation details around special tokens for you.

You can load all of the MS-COCO data (captions, features, URLs, and vocabulary) using the `load_coco_data` function from the file `cs231n/coco_utils.py`. Run the following cell to do so:
# Load COCO data from disk; this returns a dictionary
# We'll work with dimensionality-reduced features for this notebook, but feel
# free to experiment with the original features by changing the flag below.
data = load_coco_data(pca_features=True)

# Print out all the keys and values from the data dictionary
for k, v in data.items():
    if type(v) == np.ndarray:
        print(k, type(v), v.shape, v.dtype)
    else:
        print(k, type(v), len(v))
base dir /home/purewhite/workspace/CS231n-2020-Assignment/assignment3/cs231n/datasets/coco_captioning train_captions <class 'numpy.ndarray'> (400135, 17) int32 train_image_idxs <class 'numpy.ndarray'> (400135,) int32 val_captions <class 'numpy.ndarray'> (195954, 17) int32 val_image_idxs <class 'numpy.ndarray'> (195954,) int32 train_features <class 'numpy.ndarray'> (82783, 512) float32 val_features <class 'numpy.ndarray'> (40504, 512) float32 idx_to_word <class 'list'> 1004 word_to_idx <class 'dict'> 1004 train_urls <class 'numpy.ndarray'> (82783,) <U63 val_urls <class 'numpy.ndarray'> (40504,) <U63
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Look at the data

It is always a good idea to look at examples from the dataset before working with it. You can use the `sample_coco_minibatch` function from the file `cs231n/coco_utils.py` to sample minibatches of data from the data structure returned from `load_coco_data`. Run the following to sample a small minibatch of training data and show the images and their captions. Running it multiple times and looking at the results helps you to get a sense of the dataset.

Note that we decode the captions using the `decode_captions` function and that we download the images on-the-fly using their Flickr URL, so **you must be connected to the internet to view images**.
# Sample a minibatch and show the images and captions
batch_size = 3

captions, features, urls = sample_coco_minibatch(data, batch_size=batch_size)
for i, (caption, url) in enumerate(zip(captions, urls)):
    plt.imshow(image_from_url(url))
    plt.axis('off')
    caption_str = decode_captions(caption, data['idx_to_word'])
    plt.title(caption_str)
    plt.show()
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Recurrent Neural Networks

As discussed in lecture, we will use recurrent neural network (RNN) language models for image captioning. The file `cs231n/rnn_layers.py` contains implementations of different layer types that are needed for recurrent neural networks, and the file `cs231n/classifiers/rnn.py` uses these layers to implement an image captioning model. We will first implement different types of RNN layers in `cs231n/rnn_layers.py`.

Vanilla RNN: step forward

Open the file `cs231n/rnn_layers.py`. This file implements the forward and backward passes for different types of layers that are commonly used in recurrent neural networks. First implement the function `rnn_step_forward` which implements the forward pass for a single timestep of a vanilla recurrent neural network. After doing so, run the following to check your implementation. You should see errors on the order of e-8 or less.
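For orientation, here is a minimal sketch of what a single vanilla RNN step can look like. It reuses the notebook's `np` import; the `_sketch` name and the exact cache layout are illustrative assumptions, not the course's reference implementation.

def rnn_step_forward_sketch(x, prev_h, Wx, Wh, b):
    # Affine transform of the input and the previous hidden state, squashed by tanh
    a = x.dot(Wx) + prev_h.dot(Wh) + b
    next_h = np.tanh(a)
    # Keep the values needed for the backward pass (assumed layout)
    cache = (x, prev_h, Wx, Wh, next_h)
    return next_h, cache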
N, D, H = 3, 10, 4

x = np.linspace(-0.4, 0.7, num=N*D).reshape(N, D)
prev_h = np.linspace(-0.2, 0.5, num=N*H).reshape(N, H)
Wx = np.linspace(-0.1, 0.9, num=D*H).reshape(D, H)
Wh = np.linspace(-0.3, 0.7, num=H*H).reshape(H, H)
b = np.linspace(-0.2, 0.4, num=H)

next_h, _ = rnn_step_forward(x, prev_h, Wx, Wh, b)
expected_next_h = np.asarray([
  [-0.58172089, -0.50182032, -0.41232771, -0.31410098],
  [ 0.66854692,  0.79562378,  0.87755553,  0.92795967],
  [ 0.97934501,  0.99144213,  0.99646691,  0.99854353]])

print('next_h error: ', rel_error(expected_next_h, next_h))
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Vanilla RNN: step backward

In the file `cs231n/rnn_layers.py` implement the `rnn_step_backward` function. After doing so, run the following to numerically gradient check your implementation. You should see errors on the order of `e-8` or less.
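As a hedged reference, the corresponding backward step can be sketched as below, assuming the cache layout from the forward sketch above (an assumption, not the course code). The key step is backpropagating through tanh via 1 - next_h**2.

def rnn_step_backward_sketch(dnext_h, cache):
    x, prev_h, Wx, Wh, next_h = cache
    da = dnext_h * (1 - next_h ** 2)   # gradient through tanh
    dx = da.dot(Wx.T)
    dprev_h = da.dot(Wh.T)
    dWx = x.T.dot(da)
    dWh = prev_h.T.dot(da)
    db = da.sum(axis=0)
    return dx, dprev_h, dWx, dWh, db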
from cs231n.rnn_layers import rnn_step_forward, rnn_step_backward
np.random.seed(231)

N, D, H = 4, 5, 6
x = np.random.randn(N, D)
h = np.random.randn(N, H)
Wx = np.random.randn(D, H)
Wh = np.random.randn(H, H)
b = np.random.randn(H)

out, cache = rnn_step_forward(x, h, Wx, Wh, b)

dnext_h = np.random.randn(*out.shape)

fx = lambda x: rnn_step_forward(x, h, Wx, Wh, b)[0]
fh = lambda prev_h: rnn_step_forward(x, h, Wx, Wh, b)[0]
fWx = lambda Wx: rnn_step_forward(x, h, Wx, Wh, b)[0]
fWh = lambda Wh: rnn_step_forward(x, h, Wx, Wh, b)[0]
fb = lambda b: rnn_step_forward(x, h, Wx, Wh, b)[0]

dx_num = eval_numerical_gradient_array(fx, x, dnext_h)
dprev_h_num = eval_numerical_gradient_array(fh, h, dnext_h)
dWx_num = eval_numerical_gradient_array(fWx, Wx, dnext_h)
dWh_num = eval_numerical_gradient_array(fWh, Wh, dnext_h)
db_num = eval_numerical_gradient_array(fb, b, dnext_h)

dx, dprev_h, dWx, dWh, db = rnn_step_backward(dnext_h, cache)

print('dx error: ', rel_error(dx_num, dx))
print('dprev_h error: ', rel_error(dprev_h_num, dprev_h))
print('dWx error: ', rel_error(dWx_num, dWx))
print('dWh error: ', rel_error(dWh_num, dWh))
print('db error: ', rel_error(db_num, db))
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Vanilla RNN: forward

Now that you have implemented the forward and backward passes for a single timestep of a vanilla RNN, you will combine these pieces to implement an RNN that processes an entire sequence of data. In the file `cs231n/rnn_layers.py`, implement the function `rnn_forward`. This should be implemented using the `rnn_step_forward` function that you defined above. After doing so, run the following to check your implementation. You should see errors on the order of `e-7` or less.
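One simple way to structure this is a loop over timesteps that reuses `rnn_step_forward`; the sketch below is illustrative (names and cache handling are assumptions), with x of shape (N, T, D) and hidden states of shape (N, T, H) as in the check cell that follows.

def rnn_forward_sketch(x, h0, Wx, Wh, b):
    N, T, D = x.shape
    H = h0.shape[1]
    h = np.zeros((N, T, H))
    cache = []
    prev_h = h0
    for t in range(T):
        # One vanilla RNN step per timestep, feeding the previous hidden state forward
        prev_h, step_cache = rnn_step_forward(x[:, t, :], prev_h, Wx, Wh, b)
        h[:, t, :] = prev_h
        cache.append(step_cache)
    return h, cache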
N, T, D, H = 2, 3, 4, 5

x = np.linspace(-0.1, 0.3, num=N*T*D).reshape(N, T, D)
h0 = np.linspace(-0.3, 0.1, num=N*H).reshape(N, H)
Wx = np.linspace(-0.2, 0.4, num=D*H).reshape(D, H)
Wh = np.linspace(-0.4, 0.1, num=H*H).reshape(H, H)
b = np.linspace(-0.7, 0.1, num=H)

h, _ = rnn_forward(x, h0, Wx, Wh, b)
expected_h = np.asarray([
  [
    [-0.42070749, -0.27279261, -0.11074945,  0.05740409,  0.22236251],
    [-0.39525808, -0.22554661, -0.0409454,   0.14649412,  0.32397316],
    [-0.42305111, -0.24223728, -0.04287027,  0.15997045,  0.35014525],
  ],
  [
    [-0.55857474, -0.39065825, -0.19198182,  0.02378408,  0.23735671],
    [-0.27150199, -0.07088804,  0.13562939,  0.33099728,  0.50158768],
    [-0.51014825, -0.30524429, -0.06755202,  0.17806392,  0.40333043]]])

print('h error: ', rel_error(expected_h, h))
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Vanilla RNN: backward

In the file `cs231n/rnn_layers.py`, implement the backward pass for a vanilla RNN in the function `rnn_backward`. This should run back-propagation over the entire sequence, making calls to the `rnn_step_backward` function that you defined earlier. You should see errors on the order of e-6 or less.
np.random.seed(231)

N, D, T, H = 2, 3, 10, 5

x = np.random.randn(N, T, D)
h0 = np.random.randn(N, H)
Wx = np.random.randn(D, H)
Wh = np.random.randn(H, H)
b = np.random.randn(H)

out, cache = rnn_forward(x, h0, Wx, Wh, b)

dout = np.random.randn(*out.shape)

dx, dh0, dWx, dWh, db = rnn_backward(dout, cache)

fx = lambda x: rnn_forward(x, h0, Wx, Wh, b)[0]
fh0 = lambda h0: rnn_forward(x, h0, Wx, Wh, b)[0]
fWx = lambda Wx: rnn_forward(x, h0, Wx, Wh, b)[0]
fWh = lambda Wh: rnn_forward(x, h0, Wx, Wh, b)[0]
fb = lambda b: rnn_forward(x, h0, Wx, Wh, b)[0]

dx_num = eval_numerical_gradient_array(fx, x, dout)
dh0_num = eval_numerical_gradient_array(fh0, h0, dout)
dWx_num = eval_numerical_gradient_array(fWx, Wx, dout)
dWh_num = eval_numerical_gradient_array(fWh, Wh, dout)
db_num = eval_numerical_gradient_array(fb, b, dout)

print('dx error: ', rel_error(dx_num, dx))
print('dh0 error: ', rel_error(dh0_num, dh0))
print('dWx error: ', rel_error(dWx_num, dWx))
print('dWh error: ', rel_error(dWh_num, dWh))
print('db error: ', rel_error(db_num, db))
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Word embedding: forward

In deep learning systems, we commonly represent words using vectors. Each word of the vocabulary will be associated with a vector, and these vectors will be learned jointly with the rest of the system. In the file `cs231n/rnn_layers.py`, implement the function `word_embedding_forward` to convert words (represented by integers) into vectors. Run the following to check your implementation. You should see an error on the order of `e-8` or less.
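The forward pass is essentially an array lookup; a minimal sketch with illustrative names follows.

def word_embedding_forward_sketch(x, W):
    # (N, T) integer word indices select rows of W, giving (N, T, D) vectors
    out = W[x]
    cache = (x, W.shape)
    return out, cache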
N, T, V, D = 2, 4, 5, 3

x = np.asarray([[0, 3, 1, 2], [2, 1, 0, 3]])
W = np.linspace(0, 1, num=V*D).reshape(V, D)

out, _ = word_embedding_forward(x, W)
expected_out = np.asarray([
 [[ 0.,          0.07142857,  0.14285714],
  [ 0.64285714,  0.71428571,  0.78571429],
  [ 0.21428571,  0.28571429,  0.35714286],
  [ 0.42857143,  0.5,         0.57142857]],
 [[ 0.42857143,  0.5,         0.57142857],
  [ 0.21428571,  0.28571429,  0.35714286],
  [ 0.,          0.07142857,  0.14285714],
  [ 0.64285714,  0.71428571,  0.78571429]]])

print('out error: ', rel_error(expected_out, out))
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Word embedding: backward

Implement the backward pass for the word embedding function in the function `word_embedding_backward`. After doing so, run the following to numerically gradient check your implementation. You should see an error on the order of `e-11` or less.
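A hedged sketch of the idea, assuming the cache layout from the forward sketch above: each upstream gradient row is scatter-added into the row of dW selected by its word index, and np.add.at handles repeated indices correctly.

def word_embedding_backward_sketch(dout, cache):
    x, W_shape = cache
    dW = np.zeros(W_shape)
    np.add.at(dW, x, dout)   # accumulate gradients for words that appear multiple times
    return dW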
np.random.seed(231)

N, T, V, D = 50, 3, 5, 6
x = np.random.randint(V, size=(N, T))
W = np.random.randn(V, D)

out, cache = word_embedding_forward(x, W)
dout = np.random.randn(*out.shape)
dW = word_embedding_backward(dout, cache)

f = lambda W: word_embedding_forward(x, W)[0]
dW_num = eval_numerical_gradient_array(f, W, dout)

print('dW error: ', rel_error(dW, dW_num))
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Temporal Affine layer

At every timestep we use an affine function to transform the RNN hidden vector at that timestep into scores for each word in the vocabulary. Because this is very similar to the affine layer that you implemented in assignment 2, we have provided this function for you in the `temporal_affine_forward` and `temporal_affine_backward` functions in the file `cs231n/rnn_layers.py`. Run the following to perform numeric gradient checking on the implementation. You should see errors on the order of e-9 or less.
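Conceptually, the provided layer flattens the time dimension, applies a single affine transform, and reshapes back; the sketch below illustrates the idea and is not necessarily the provided code verbatim.

def temporal_affine_forward_sketch(x, w, b):
    N, T, D = x.shape
    M = b.shape[0]
    # Collapse (N, T, D) to (N*T, D), apply the affine map, and restore (N, T, M)
    out = x.reshape(N * T, D).dot(w).reshape(N, T, M) + b
    cache = (x, w, out)
    return out, cache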
np.random.seed(231)

# Gradient check for temporal affine layer
N, T, D, M = 2, 3, 4, 5
x = np.random.randn(N, T, D)
w = np.random.randn(D, M)
b = np.random.randn(M)

out, cache = temporal_affine_forward(x, w, b)

dout = np.random.randn(*out.shape)

fx = lambda x: temporal_affine_forward(x, w, b)[0]
fw = lambda w: temporal_affine_forward(x, w, b)[0]
fb = lambda b: temporal_affine_forward(x, w, b)[0]

dx_num = eval_numerical_gradient_array(fx, x, dout)
dw_num = eval_numerical_gradient_array(fw, w, dout)
db_num = eval_numerical_gradient_array(fb, b, dout)

dx, dw, db = temporal_affine_backward(dout, cache)

print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Temporal Softmax loss

In an RNN language model, at every timestep we produce a score for each word in the vocabulary. We know the ground-truth word at each timestep, so we use a softmax loss function to compute loss and gradient at each timestep. We sum the losses over time and average them over the minibatch.

However there is one wrinkle: since we operate over minibatches and different captions may have different lengths, we append `<NULL>` tokens to the end of each caption so they all have the same length. We don't want these `<NULL>` tokens to count toward the loss or gradient, so in addition to scores and ground-truth labels our loss function also accepts a `mask` array that tells it which elements of the scores count towards the loss.

Since this is very similar to the softmax loss function you implemented in assignment 1, we have implemented this loss function for you; look at the `temporal_softmax_loss` function in the file `cs231n/rnn_layers.py`.

Run the following cell to sanity check the loss and perform numeric gradient checking on the function. You should see an error for dx on the order of e-7 or less.
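For intuition, here is a hedged NumPy sketch of a masked, temporally-summed softmax loss; the provided `temporal_softmax_loss` is the reference implementation, and this sketch only illustrates the idea.

def temporal_softmax_loss_sketch(x, y, mask):
    N, T, V = x.shape
    x_flat = x.reshape(N * T, V)
    y_flat = y.reshape(N * T)
    mask_flat = mask.reshape(N * T)

    # Numerically stable softmax over the vocabulary
    probs = np.exp(x_flat - x_flat.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)

    # Sum losses over time, skip masked (padding) positions, average over the minibatch
    loss = -np.sum(mask_flat * np.log(probs[np.arange(N * T), y_flat])) / N

    dx_flat = probs.copy()
    dx_flat[np.arange(N * T), y_flat] -= 1
    dx_flat /= N
    dx_flat *= mask_flat[:, None]
    return loss, dx_flat.reshape(N, T, V)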
# Sanity check for temporal softmax loss
from cs231n.rnn_layers import temporal_softmax_loss

N, T, V = 100, 1, 10

def check_loss(N, T, V, p):
    x = 0.001 * np.random.randn(N, T, V)
    y = np.random.randint(V, size=(N, T))
    mask = np.random.rand(N, T) <= p
    print(temporal_softmax_loss(x, y, mask)[0])

check_loss(100, 1, 10, 1.0)    # Should be about 2.3
check_loss(100, 10, 10, 1.0)   # Should be about 23
check_loss(5000, 10, 10, 0.1)  # Should be within 2.2-2.4

# Gradient check for temporal softmax loss
N, T, V = 7, 8, 9

x = np.random.randn(N, T, V)
y = np.random.randint(V, size=(N, T))
mask = (np.random.rand(N, T) > 0.5)

loss, dx = temporal_softmax_loss(x, y, mask, verbose=False)

dx_num = eval_numerical_gradient(lambda x: temporal_softmax_loss(x, y, mask)[0], x, verbose=False)

print('dx error: ', rel_error(dx, dx_num))
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
RNN for image captioning

Now that you have implemented the necessary layers, you can combine them to build an image captioning model. Open the file `cs231n/classifiers/rnn.py` and look at the `CaptioningRNN` class. Implement the forward and backward pass of the model in the `loss` function. For now you only need to implement the case where `cell_type='rnn'` for vanilla RNNs; you will implement the LSTM case later. After doing so, run the following to check your forward pass using a small test case; you should see errors on the order of `e-10` or less.
N, D, W, H = 10, 20, 30, 40
word_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}
V = len(word_to_idx)
T = 13

model = CaptioningRNN(word_to_idx,
                      input_dim=D,
                      wordvec_dim=W,
                      hidden_dim=H,
                      cell_type='rnn',
                      dtype=np.float64)

# Set all model parameters to fixed values
for k, v in model.params.items():
    model.params[k] = np.linspace(-1.4, 1.3, num=v.size).reshape(*v.shape)

features = np.linspace(-1.5, 0.3, num=(N * D)).reshape(N, D)
captions = (np.arange(N * T) % V).reshape(N, T)

loss, grads = model.loss(features, captions)
expected_loss = 9.83235591003

print('loss: ', loss)
print('expected loss: ', expected_loss)
print('difference: ', abs(loss - expected_loss))
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Run the following cell to perform numeric gradient checking on the `CaptioningRNN` class; you should see errors around the order of `e-6` or less.
np.random.seed(231)

batch_size = 2
timesteps = 3
input_dim = 4
wordvec_dim = 5
hidden_dim = 6
word_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}
vocab_size = len(word_to_idx)

captions = np.random.randint(vocab_size, size=(batch_size, timesteps))
features = np.random.randn(batch_size, input_dim)

model = CaptioningRNN(word_to_idx,
                      input_dim=input_dim,
                      wordvec_dim=wordvec_dim,
                      hidden_dim=hidden_dim,
                      cell_type='rnn',
                      dtype=np.float64,
                      )

loss, grads = model.loss(features, captions)

for param_name in sorted(grads):
    f = lambda _: model.loss(features, captions)[0]
    param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
    e = rel_error(param_grad_num, grads[param_name])
    print('%s relative error: %e' % (param_name, e))
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Overfit small data

Similar to the `Solver` class that we used to train image classification models on the previous assignment, on this assignment we use a `CaptioningSolver` class to train image captioning models. Open the file `cs231n/captioning_solver.py` and read through the `CaptioningSolver` class; it should look very familiar.

Once you have familiarized yourself with the API, run the following to make sure your model overfits a small sample of 100 training examples. You should see a final loss of less than 0.1.
np.random.seed(231)

small_data = load_coco_data(max_train=50)

small_rnn_model = CaptioningRNN(
          cell_type='rnn',
          word_to_idx=data['word_to_idx'],
          input_dim=data['train_features'].shape[1],
          hidden_dim=512,
          wordvec_dim=256,
        )

small_rnn_solver = CaptioningSolver(small_rnn_model, small_data,
           update_rule='adam',
           num_epochs=50,
           batch_size=25,
           optim_config={
             'learning_rate': 5e-3,
           },
           lr_decay=0.95,
           verbose=True, print_every=10,
         )

small_rnn_solver.train()

# Plot the training losses
plt.plot(small_rnn_solver.loss_history)
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.title('Training loss history')
plt.show()
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Print final training loss. You should see a final loss of less than 0.1.
print('Final loss: ', small_rnn_solver.loss_history[-1])
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Test-time sampling

Unlike classification models, image captioning models behave very differently at training time and at test time. At training time, we have access to the ground-truth caption, so we feed ground-truth words as input to the RNN at each timestep. At test time, we sample from the distribution over the vocabulary at each timestep, and feed the sample as input to the RNN at the next timestep.

In the file `cs231n/classifiers/rnn.py`, implement the `sample` method for test-time sampling. After doing so, run the following to sample from your overfitted model on both training and validation data. The samples on training data should be very good; the samples on validation data probably won't make sense.
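As a rough illustration of such a decoding loop for the vanilla RNN case, here is a greedy sketch; the parameter names (W_proj, W_embed, Wx, Wh, W_vocab, ...) and the `<START>`/`<NULL>` lookups are assumptions about the model's layout, not necessarily the exact attributes of `CaptioningRNN`.

def sample_sketch(features, params, word_to_idx, max_length=30):
    N = features.shape[0]
    # Project image features to the initial hidden state
    h = features.dot(params['W_proj']) + params['b_proj']
    captions = np.full((N, max_length), word_to_idx['<NULL>'], dtype=np.int32)
    word = np.full(N, word_to_idx['<START>'], dtype=np.int32)
    for t in range(max_length):
        x = params['W_embed'][word]                                            # embed the previous word
        h = np.tanh(x.dot(params['Wx']) + h.dot(params['Wh']) + params['b'])   # vanilla RNN step
        scores = h.dot(params['W_vocab']) + params['b_vocab']                  # vocabulary scores
        word = scores.argmax(axis=1)                                           # greedy choice; could also sample
        captions[:, t] = word
    return captions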
for split in ['train', 'val']:
    minibatch = sample_coco_minibatch(small_data, split=split, batch_size=2)
    gt_captions, features, urls = minibatch
    gt_captions = decode_captions(gt_captions, data['idx_to_word'])

    sample_captions = small_rnn_model.sample(features)
    sample_captions = decode_captions(sample_captions, data['idx_to_word'])

    for gt_caption, sample_caption, url in zip(gt_captions, sample_captions, urls):
        plt.imshow(image_from_url(url))
        plt.title('%s\n%s\nGT:%s' % (split, sample_caption, gt_caption))
        plt.axis('off')
        plt.show()
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
The noise scattering at a compressor inlet and outlet

In this example we extract the scattering of noise at a compressor inlet and outlet. In addition to measuring the pressure with flush-mounted microphones, we will use the temperature and flow velocity that were acquired during the measurement. The data comes from a study performed at the Competence Center of Gas Exchange (CCGEx).

![](../../image/compressor.JPG)

1. Initialization

First, we import the packages needed for this example.
import numpy
import matplotlib.pyplot as plt
import acdecom
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
The compressor intake and outlet have circular cross sections with radii of 0.026 m and 0.028 m, respectively. The highest frequency of interest is 3200 Hz.
section = "circular"
radius_intake = 0.026  # m
radius_outlet = 0.028  # m
f_max = 3200  # Hz
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
During the test, test ducts were mounted to the intake and outlet. Those ducts were equipped with three microphones each. The first microphone had a distance to the intake of 0.73 m and 1.17 m to the outlet.
distance_intake = 0.073  # m
distance_outlet = 1.17  # m
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
To analyze the measurement data, we create objects for the intake and the outlet test pipes.
td_intake = acdecom.WaveGuide(dimensions=(radius_intake,), cross_section=section, f_max=f_max,
                              damping="kirchoff", distance=distance_intake, flip_flow=True)
td_outlet = acdecom.WaveGuide(dimensions=(radius_outlet,), cross_section=section, f_max=f_max,
                              damping="kirchoff", distance=distance_outlet)
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
Note: The standard flow direction is in the $P_+$ direction. Therefore, on the intake side, the Mach number must either be set negative or the argument *flip_flow* must be set to *True*.

2. Sensor Positions

We define lists with microphone positions at the intake and outlet and assign them to the *WaveGuides*.
z_intake = [0, 0.043, 0.324]  # m
r_intake = [radius_intake, radius_intake, radius_intake]  # m
phi_intake = [0, 180, 0]  # deg

z_outlet = [0, 0.054, 0.284]  # m
r_outlet = [radius_outlet, radius_outlet, radius_outlet]  # m
phi_outlet = [0, 180, 0]  # deg

td_intake.set_microphone_positions(z_intake, r_intake, phi_intake, cylindrical_coordinates=True)
td_outlet.set_microphone_positions(z_outlet, r_outlet, phi_outlet, cylindrical_coordinates=True)
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
3. Decomposition

Next, we read the measurement data. The measurement must be pre-processed in a format that is understood by the *WaveGuide* object. This is generally a numpy.ndarray, wherein the columns contain the measurement data, such as the measured frequency, the pressure values for that frequency, the bulk Mach number, and the temperature. The rows can be different frequencies or different sound excitations (cases). In this example the measurement was post-processed into the `turbo.txt` file and can be loaded with the `numpy.loadtxt` function.

Note: The pressure used for the decomposition must be pre-processed, for example to account for microphone calibration.
pressure = numpy.loadtxt("data/turbo.txt",dtype=complex, delimiter=",", skiprows=1)
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
We review the file's header to understand how the data is stored in our input file.
with open("data/turbo.txt") as pressure_file:
    print(pressure_file.readline().split(","))
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
The Mach numbers at the intake and outlet are stored in columns 0 and 1, the temperatures in columns 2 and 3, and the frequency in column 4. The intake microphones (1, 2, and 3) are in columns 5, 6, and 7. The outlet microphones (4, 5, and 6) are in columns 8, 9, and 10. The case number is in the last column.
Machnumber_intake = 0
Machnumber_outlet = 1
temperature_intake = 2
temperature_outlet = 3
f = 4
mics_intake = [5, 6, 7]
mics_outlet = [8, 9, 10]
case = -1
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
Next, we decompose the sound-fields into the propagating modes. We decompose the sound-fields on the intake and outlet side of the duct, using the two *WaveGuide* objects defined earlier.
decomp_intake, headers_intake = td_intake.decompose(pressure, f, mics_intake, temperature_col=temperature_intake,
                                                    case_col=case, Mach_col=Machnumber_intake)

decomp_outlet, headers_outlet = td_outlet.decompose(pressure, f, mics_outlet, temperature_col=temperature_outlet,
                                                    case_col=case, Mach_col=Machnumber_outlet)
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
Note: The decomposition may show warnings for ill-conditioned modal matrices. This typically happens for frequencies close to the cut-on of a mode. However, it can also indicate that the microphone array is unable to separate the modes. The condition number of the wave decomposition is stored in the data returned by `WaveGuide.decompose` and should be checked in case a warning is triggered.

4. Further Post-processing

We can print *headers_intake* to see the names of the columns of the arrays that store the decomposed sound fields.
print(headers_intake)
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
We use that information to extract the modal data.
minusmodes = [1]  # from headers_intake
plusmodes = [0]
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
Furthermore, we acquire the unique decomposed frequency points.
frequs = numpy.abs(numpy.unique(decomp_intake[:, headers_intake.index("f")]))
nof = frequs.shape[0]
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
For each of the frequencies, we can compute the scattering matrix by solving a linear system of equations $S = p_+ p_-^{-1}$, where $S$ is the scattering matrix and $p_{\pm}$ are matrices containing the acoustic modes placed in rows and the different test cases placed in columns.

Note: Details for the computation of the scattering matrix and the procedure to measure the different test cases can be found in the referenced study.
S = numpy.zeros((2, 2, nof), dtype=complex)

for fIndx, f in enumerate(frequs):
    frequ_rows = numpy.where(decomp_intake[:, headers_intake.index("f")] == f)
    ppm_intake = decomp_intake[frequ_rows]
    ppm_outlet = decomp_outlet[frequ_rows]
    pp = numpy.concatenate((ppm_intake[:, plusmodes].T, ppm_outlet[:, plusmodes].T))
    pm = numpy.concatenate((ppm_intake[:, minusmodes].T, ppm_outlet[:, minusmodes].T))
    S[:, :, fIndx] = numpy.dot(pp, numpy.linalg.pinv(pm))
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
5. Plot

Finally, we can plot the transmission and reflection coefficients at the intake and outlet.
plt.plot(frequs, numpy.abs(S[0, 0, :]), ls="-", color="#67A3C1", label="Reflection Intake")
plt.plot(frequs, numpy.abs(S[0, 1, :]), ls="--", color="#67A3C1", label="Transmission Intake")
plt.plot(frequs, numpy.abs(S[1, 1, :]), ls="-", color="#D38D7B", label="Reflection Outlet")
plt.plot(frequs, numpy.abs(S[1, 0, :]), ls="--", color="#D38D7B", label="Transmission Outlet")
plt.xlabel("Frequency [Hz]")
plt.ylabel("Scattering Magnitude")
plt.xlim([300, 3200])
plt.ylim([0, 1.1])
plt.legend()
plt.show()
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
PCA with MaxAbsScaler

This code template is for simple Principal Component Analysis (PCA) with feature scaling via MaxAbsScaler in Python, a dimensionality reduction technique. PCA is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance.

Required Packages
import warnings
import itertools

import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from sklearn.decomposition import PCA
from sklearn.preprocessing import LabelEncoder, MaxAbsScaler

warnings.filterwarnings('ignore')
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Initialization

Filepath of CSV file
#filepath
file_path = ''
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
List of features which are required for model training.
#x_values
features = []
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Target feature for prediction.
#y_value
target = ''
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Data Fetching

Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
df = pd.read_csv(file_path)
df.head()
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Feature Selection

Feature selection is the process of reducing the number of input variables when developing a predictive model. It both reduces the computational cost of modelling and, in some cases, improves the performance of the model. We will assign the required input features to X and the target/outcome to Y.
X = df[features]
Y = df[target]
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Data Preprocessing

Since most machine learning models in the sklearn library do not handle string categorical data or null values, we have to explicitly remove or replace null values and encode string categories. The snippet below fills null values (with the mean for numeric columns and the mode otherwise) and encodes string categorical columns via one-hot encoding.
def NullClearner(df):
    if(isinstance(df, pd.Series) and (df.dtype in ["float64", "int64"])):
        df.fillna(df.mean(), inplace=True)
        return df
    elif(isinstance(df, pd.Series)):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)

x = X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X = EncodeX(X)
Y = NullClearner(Y)
X.head()
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Correlation Map

In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
f, ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt='.1f', ax=ax, mask=matrix)
plt.show()
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Data Rescaling

We use sklearn.preprocessing.MaxAbsScaler to scale each feature by its maximum absolute value. This estimator scales each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any sparsity.

Read more at [scikit-learn.org](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html)
X_Scaled = MaxAbsScaler().fit_transform(X)
X = pd.DataFrame(X_Scaled, columns=X.columns)
X.head()
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Choosing the number of components

A vital part of using PCA in practice is the ability to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative explained variance ratio as a function of the number of components. This curve quantifies how much of the total variance is contained within the first N components.
pcaComponents = PCA().fit(X_Scaled)
plt.plot(np.cumsum(pcaComponents.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
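The curve above can also be read off programmatically. A small follow-up sketch (not part of the original template; it reuses np and pcaComponents from the cells above, and the 95% threshold is an arbitrary choice):

cumvar = np.cumsum(pcaComponents.explained_variance_ratio_)
# smallest number of components whose cumulative explained variance reaches 95%
n_components_95 = int(np.argmax(cumvar >= 0.95) + 1)
print(f"{n_components_95} components explain {cumvar[n_components_95 - 1]:.1%} of the variance")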
Scree plot

The scree plot helps you to determine the optimal number of components. The explained-variance ratio of each component in the initial solution is plotted. Generally, you want to extract the components on the steep slope; the components on the shallow slope contribute little to the solution.
PC_values = np.arange(pcaComponents.n_components_) + 1
plt.plot(PC_values, pcaComponents.explained_variance_ratio_, 'ro-', linewidth=2)
plt.title('Scree Plot')
plt.xlabel('Principal Component')
plt.ylabel('Proportion of Variance Explained')
plt.show()
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Model

PCA is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance. In scikit-learn, PCA is implemented as a transformer object that learns components in its fit method, and can be used on new data to project it onto these components.

Tuning parameters reference: [API](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html)
pca = PCA(n_components=8)
pcaX = pd.DataFrame(data=pca.fit_transform(X_Scaled))
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
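As an alternative to hard-coding n_components=8, scikit-learn's PCA also accepts a fraction between 0 and 1 and then keeps however many components are needed to retain that share of the variance. A small optional sketch, reusing PCA, pd and X_Scaled from above:

# keep enough components to retain at least 95% of the variance
pca_95 = PCA(n_components=0.95, svd_solver="full")
pcaX_95 = pd.DataFrame(data=pca_95.fit_transform(X_Scaled))
print("components kept:", pca_95.n_components_)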
Output Dataframe
finalDf = pd.concat([pcaX, Y], axis=1)
finalDf.head()
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Parallel, Multi-Objective BO in BoTorch with qEHVI and qParEGO

In this tutorial, we illustrate how to implement a simple multi-objective (MO) Bayesian Optimization (BO) closed loop in BoTorch. We use the parallel ParEGO ($q$ParEGO) [1] and parallel Expected Hypervolume Improvement ($q$EHVI) [1] acquisition functions to optimize a synthetic Branin-Currin test function. The two objectives are

$$f^{(1)}(x_1\text{'}, x_2\text{'}) = (x_2\text{'} - \frac{5.1}{4 \pi^2} (x_1\text{'})^2 + \frac{5}{\pi} x_1\text{'} - r)^2 + 10 (1-\frac{1}{8 \pi}) \cos(x_1\text{'}) + 10$$

$$f^{(2)}(x_1, x_2) = \bigg[1 - \exp\bigg(-\frac{1}{2x_2}\bigg)\bigg] \frac{2300 x_1^3 + 1900 x_1^2 + 2092 x_1 + 60}{100 x_1^3 + 500 x_1^2 + 4 x_1 + 20}$$

where $x_1, x_2 \in [0,1]$, $x_1\text{'} = 15x_1 - 5$, and $x_2\text{'} = 15x_2$ (parameter values can be found in `botorch/test_functions/multi_objective.py`). Since BoTorch assumes maximization of all objectives, we seek to find the Pareto frontier, the set of optimal trade-offs where improving one objective means deteriorating another.

[1] [S. Daulton, M. Balandat, and E. Bakshy. Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization. Advances in Neural Information Processing Systems 33, 2020.](https://arxiv.org/abs/2006.05078)

Set dtype and device

Note: $q$EHVI aggressively exploits parallel hardware and is much faster when run on a GPU. See [1] for details.
import os
import torch

tkwargs = {
    "dtype": torch.double,
    "device": torch.device("cuda" if torch.cuda.is_available() else "cpu"),
}
SMOKE_TEST = os.environ.get("SMOKE_TEST")
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
Problem setup
from botorch.test_functions.multi_objective import BraninCurrin

problem = BraninCurrin(negate=True).to(**tkwargs)
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
Model initialization

We use a multi-output `SingleTaskGP` to model the two objectives with a homoskedastic Gaussian likelihood and an inferred noise level. The models are initialized with $2(d+1)=6$ points drawn randomly from $[0,1]^2$.
from botorch.models.gp_regression import SingleTaskGP
from botorch.models.transforms.outcome import Standardize
from gpytorch.mlls.exact_marginal_log_likelihood import ExactMarginalLogLikelihood
from botorch.utils.transforms import unnormalize
from botorch.utils.sampling import draw_sobol_samples


def generate_initial_data(n=6):
    # generate training data
    train_x = draw_sobol_samples(
        bounds=problem.bounds, n=1, q=n, seed=torch.randint(1000000, (1,)).item()
    ).squeeze(0)
    train_obj = problem(train_x)
    return train_x, train_obj


def initialize_model(train_x, train_obj):
    # define models for objective and constraint
    model = SingleTaskGP(train_x, train_obj, outcome_transform=Standardize(m=train_obj.shape[-1]))
    mll = ExactMarginalLogLikelihood(model.likelihood, model)
    return mll, model
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
Define a helper function that performs the essential BO step for $q$EHVI

The helper function below initializes the $q$EHVI acquisition function, optimizes it, and returns the batch $\{x_1, x_2, \ldots x_q\}$ along with the observed function values. For this example, we'll use a small batch of $q=4$. Passing the keyword argument `sequential=True` to the function `optimize_acqf` specifies that candidates should be optimized in a sequential greedy fashion (see [1] for details on why this is important). A simple initialization heuristic is used to select the 20 restart initial locations from a set of 1024 random points. Multi-start optimization of the acquisition function is performed using LBFGS-B with exact gradients computed via auto-differentiation.

**Reference Point**

$q$EHVI requires specifying a reference point, which is the lower bound on the objectives used for computing hypervolume. In this tutorial, we assume the reference point is known. In practice the reference point can be set 1) using domain knowledge to be slightly worse than the lower bound of objective values, where the lower bound is the minimum acceptable value of interest for each objective, or 2) using a dynamic reference point selection strategy. A small sketch of option 1) follows the code below.

**Partitioning the Non-dominated Space into disjoint rectangles**

$q$EHVI requires partitioning the non-dominated space into disjoint rectangles (see [1] for details).

*Note:* `NondominatedPartitioning` *will be very slow when 1) there are a lot of points on the Pareto frontier and 2) there are >3 objectives.*
from botorch.optim.optimize import optimize_acqf, optimize_acqf_list
from botorch.acquisition.objective import GenericMCObjective
from botorch.utils.multi_objective.scalarization import get_chebyshev_scalarization
from botorch.utils.multi_objective.box_decompositions.non_dominated import NondominatedPartitioning
from botorch.acquisition.multi_objective.monte_carlo import qExpectedHypervolumeImprovement
from botorch.utils.sampling import sample_simplex

BATCH_SIZE = 4 if not SMOKE_TEST else 2
NUM_RESTARTS = 20 if not SMOKE_TEST else 2
RAW_SAMPLES = 1024 if not SMOKE_TEST else 4

standard_bounds = torch.zeros(2, problem.dim, **tkwargs)
standard_bounds[1] = 1


def optimize_qehvi_and_get_observation(model, train_obj, sampler):
    """Optimizes the qEHVI acquisition function, and returns a new candidate and observation."""
    # partition non-dominated space into disjoint rectangles
    partitioning = NondominatedPartitioning(ref_point=problem.ref_point, Y=train_obj)
    acq_func = qExpectedHypervolumeImprovement(
        model=model,
        ref_point=problem.ref_point.tolist(),  # use known reference point
        partitioning=partitioning,
        sampler=sampler,
    )
    # optimize
    candidates, _ = optimize_acqf(
        acq_function=acq_func,
        bounds=standard_bounds,
        q=BATCH_SIZE,
        num_restarts=NUM_RESTARTS,
        raw_samples=RAW_SAMPLES,  # used for initialization heuristic
        options={"batch_limit": 5, "maxiter": 200, "nonnegative": True},
        sequential=True,
    )
    # observe new values
    new_x = unnormalize(candidates.detach(), bounds=problem.bounds)
    new_obj = problem(new_x)
    return new_x, new_obj
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
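The code above uses the known `problem.ref_point`. As an illustration of option 1) mentioned earlier, a hypothetical helper (not part of the tutorial) could pick a reference point slightly worse than all observed objective values, for example:

def heuristic_ref_point(train_obj, slack=0.1):
    # train_obj: n x m tensor of observed objectives (BoTorch maximizes all of them)
    obj_min = train_obj.min(dim=0).values
    obj_max = train_obj.max(dim=0).values
    # push the per-objective minimum down by a fraction of the observed range
    return obj_min - slack * (obj_max - obj_min)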
Define a helper function that performs the essential BO step for $q$ParEGO

The helper function below similarly initializes $q$ParEGO, optimizes it, and returns the batch $\{x_1, x_2, \ldots x_q\}$ along with the observed function values. $q$ParEGO uses random augmented Chebyshev scalarization with the `qExpectedImprovement` acquisition function. In the parallel setting ($q>1$), each candidate is optimized in sequential greedy fashion using a different random scalarization (see [1] for details). To do this, we create a list of `qExpectedImprovement` acquisition functions, each with different random scalarization weights. The `optimize_acqf_list` method sequentially generates one candidate per acquisition function and conditions the next candidate (and acquisition function) on the previously selected pending candidates.
def optimize_qparego_and_get_observation(model, train_obj, sampler):
    """Samples a set of random weights for each candidate in the batch, performs sequential greedy
    optimization of the qParEGO acquisition function, and returns a new candidate and observation."""
    acq_func_list = []
    for _ in range(BATCH_SIZE):
        weights = sample_simplex(problem.num_objectives, **tkwargs).squeeze()
        objective = GenericMCObjective(get_chebyshev_scalarization(weights=weights, Y=train_obj))
        acq_func = qExpectedImprovement(  # pyre-ignore: [28]
            model=model,
            objective=objective,
            best_f=objective(train_obj).max(),
            sampler=sampler,
        )
        acq_func_list.append(acq_func)
    # optimize
    candidates, _ = optimize_acqf_list(
        acq_function_list=acq_func_list,
        bounds=standard_bounds,
        num_restarts=NUM_RESTARTS,
        raw_samples=RAW_SAMPLES,  # used for initialization heuristic
        options={"batch_limit": 5, "maxiter": 200},
    )
    # observe new values
    new_x = unnormalize(candidates.detach(), bounds=problem.bounds)
    new_obj = problem(new_x)
    return new_x, new_obj
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
Perform Bayesian Optimization loop with $q$EHVI and $q$ParEGO

The Bayesian optimization "loop" for a batch size of $q$ simply iterates the following steps:

1. given a surrogate model, choose a batch of points $\{x_1, x_2, \ldots x_q\}$
2. observe $f(x)$ for each $x$ in the batch
3. update the surrogate model

Just for illustration purposes, we run three trials, each of which does `N_BATCH=25` rounds of optimization. The acquisition function is approximated using `MC_SAMPLES=128` samples.

*Note*: Running this may take a little while.
from botorch import fit_gpytorch_model
from botorch.acquisition.monte_carlo import qExpectedImprovement, qNoisyExpectedImprovement
from botorch.sampling.samplers import SobolQMCNormalSampler
from botorch.exceptions import BadInitialCandidatesWarning
from botorch.utils.multi_objective.pareto import is_non_dominated
from botorch.utils.multi_objective.hypervolume import Hypervolume

import time
import warnings

warnings.filterwarnings('ignore', category=BadInitialCandidatesWarning)
warnings.filterwarnings('ignore', category=RuntimeWarning)

N_TRIALS = 3 if not SMOKE_TEST else 2
N_BATCH = 25 if not SMOKE_TEST else 3
MC_SAMPLES = 128 if not SMOKE_TEST else 16

verbose = False

hvs_qparego_all, hvs_qehvi_all, hvs_random_all = [], [], []

hv = Hypervolume(ref_point=problem.ref_point)

# average over multiple trials
for trial in range(1, N_TRIALS + 1):
    torch.manual_seed(trial)

    print(f"\nTrial {trial:>2} of {N_TRIALS} ", end="")
    hvs_qparego, hvs_qehvi, hvs_random = [], [], []

    # call helper functions to generate initial training data and initialize model
    train_x_qparego, train_obj_qparego = generate_initial_data(n=6)
    mll_qparego, model_qparego = initialize_model(train_x_qparego, train_obj_qparego)

    train_x_qehvi, train_obj_qehvi = train_x_qparego, train_obj_qparego
    train_x_random, train_obj_random = train_x_qparego, train_obj_qparego
    mll_qehvi, model_qehvi = initialize_model(train_x_qehvi, train_obj_qehvi)

    # compute pareto front
    pareto_mask = is_non_dominated(train_obj_qparego)
    pareto_y = train_obj_qparego[pareto_mask]
    # compute hypervolume
    volume = hv.compute(pareto_y)

    hvs_qparego.append(volume)
    hvs_qehvi.append(volume)
    hvs_random.append(volume)

    # run N_BATCH rounds of BayesOpt after the initial random batch
    for iteration in range(1, N_BATCH + 1):

        t0 = time.time()

        # fit the models
        fit_gpytorch_model(mll_qparego)
        fit_gpytorch_model(mll_qehvi)

        # define the qEI and qNEI acquisition modules using a QMC sampler
        qparego_sampler = SobolQMCNormalSampler(num_samples=MC_SAMPLES)
        qehvi_sampler = SobolQMCNormalSampler(num_samples=MC_SAMPLES)

        # optimize acquisition functions and get new observations
        new_x_qparego, new_obj_qparego = optimize_qparego_and_get_observation(
            model_qparego, train_obj_qparego, qparego_sampler
        )
        new_x_qehvi, new_obj_qehvi = optimize_qehvi_and_get_observation(
            model_qehvi, train_obj_qehvi, qehvi_sampler
        )
        new_x_random, new_obj_random = generate_initial_data(n=BATCH_SIZE)

        # update training points
        train_x_qparego = torch.cat([train_x_qparego, new_x_qparego])
        train_obj_qparego = torch.cat([train_obj_qparego, new_obj_qparego])

        train_x_qehvi = torch.cat([train_x_qehvi, new_x_qehvi])
        train_obj_qehvi = torch.cat([train_obj_qehvi, new_obj_qehvi])

        train_x_random = torch.cat([train_x_random, new_x_random])
        train_obj_random = torch.cat([train_obj_random, new_obj_random])

        # update progress
        for hvs_list, train_obj in zip(
            (hvs_random, hvs_qparego, hvs_qehvi),
            (train_obj_random, train_obj_qparego, train_obj_qehvi),
        ):
            # compute pareto front
            pareto_mask = is_non_dominated(train_obj)
            pareto_y = train_obj[pareto_mask]
            # compute hypervolume
            volume = hv.compute(pareto_y)
            hvs_list.append(volume)

        # reinitialize the models so they are ready for fitting on next iteration
        # Note: we find improved performance from not warm starting the model hyperparameters
        # using the hyperparameters from the previous iteration
        mll_qparego, model_qparego = initialize_model(train_x_qparego, train_obj_qparego)
        mll_qehvi, model_qehvi = initialize_model(train_x_qehvi, train_obj_qehvi)

        t1 = time.time()

        if verbose:
            print(
                f"\nBatch {iteration:>2}: Hypervolume (random, qParEGO, qEHVI) = "
                f"({hvs_random[-1]:>4.2f}, {hvs_qparego[-1]:>4.2f}, {hvs_qehvi[-1]:>4.2f}), "
                f"time = {t1-t0:>4.2f}.", end=""
            )
        else:
            print(".", end="")

    hvs_qparego_all.append(hvs_qparego)
    hvs_qehvi_all.append(hvs_qehvi)
    hvs_random_all.append(hvs_random)
Trial 1 of 3 ......................... Trial 2 of 3 ......................... Trial 3 of 3 .........................
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
Plot the results

The plot below shows a common metric of multi-objective optimization performance, the log hypervolume difference: the log difference between the hypervolume of the true Pareto front and the hypervolume of the approximate Pareto front identified by each algorithm. The log hypervolume difference is plotted at each step of the optimization for each of the algorithms. The confidence intervals represent the variance at that step in the optimization across the trial runs. The variance across optimization runs is quite high, so in order to get a better estimate of the average performance one would have to run a much larger number of trials `N_TRIALS` (we avoid this here to limit the runtime of this tutorial). The plot shows that $q$EHVI vastly outperforms the $q$ParEGO and Sobol baselines and has very low variance.
import numpy as np
from matplotlib import pyplot as plt

%matplotlib inline


def ci(y):
    return 1.96 * y.std(axis=0) / np.sqrt(N_TRIALS)


iters = np.arange(N_BATCH + 1) * BATCH_SIZE
log_hv_difference_qparego = np.log10(problem.max_hv - np.asarray(hvs_qparego_all))
log_hv_difference_qehvi = np.log10(problem.max_hv - np.asarray(hvs_qehvi_all))
log_hv_difference_rnd = np.log10(problem.max_hv - np.asarray(hvs_random_all))

fig, ax = plt.subplots(1, 1, figsize=(8, 6))
ax.errorbar(
    iters, log_hv_difference_rnd.mean(axis=0), yerr=ci(log_hv_difference_rnd),
    label="Sobol", linewidth=1.5,
)
ax.errorbar(
    iters, log_hv_difference_qparego.mean(axis=0), yerr=ci(log_hv_difference_qparego),
    label="qParEGO", linewidth=1.5,
)
ax.errorbar(
    iters, log_hv_difference_qehvi.mean(axis=0), yerr=ci(log_hv_difference_qehvi),
    label="qEHVI", linewidth=1.5,
)
ax.set(xlabel='number of observations (beyond initial points)', ylabel='Log Hypervolume Difference')
ax.legend(loc="lower right")
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
Plot the observations colored by iteration

To examine the optimization process from another perspective, we plot the collected observations under each algorithm, where the color corresponds to the BO iteration at which the point was collected. The plot on the right shows that $q$EHVI quickly identifies the Pareto front and most of its evaluations are very close to it. $q$ParEGO also has many observations close to the Pareto front, but it relies on optimizing random scalarizations, which is a less principled way of optimizing the Pareto front compared to $q$EHVI, which explicitly focuses on improving it. Sobol generates random points and has few points close to the Pareto front.
from matplotlib.cm import ScalarMappable

fig, axes = plt.subplots(1, 3, figsize=(17, 5))
algos = ["Sobol", "qParEGO", "qEHVI"]
cm = plt.cm.get_cmap('viridis')

batch_number = torch.cat(
    [torch.zeros(6), torch.arange(1, N_BATCH + 1).repeat(BATCH_SIZE, 1).t().reshape(-1)]
).numpy()
for i, train_obj in enumerate((train_obj_random, train_obj_qparego, train_obj_qehvi)):
    sc = axes[i].scatter(
        train_obj[:, 0].cpu().numpy(), train_obj[:, 1].cpu().numpy(), c=batch_number, alpha=0.8,
    )
    axes[i].set_title(algos[i])
    axes[i].set_xlabel("Objective 1")
    axes[i].set_xlim(-260, 5)
    axes[i].set_ylim(-15, 0)
axes[0].set_ylabel("Objective 2")
norm = plt.Normalize(batch_number.min(), batch_number.max())
sm = ScalarMappable(norm=norm, cmap=cm)
sm.set_array([])
fig.subplots_adjust(right=0.9)
cbar_ax = fig.add_axes([0.93, 0.15, 0.01, 0.7])
cbar = fig.colorbar(sm, cax=cbar_ax)
cbar.ax.set_title("Iteration")
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/data-science-ipython-notebooks).

Amazon Web Services (AWS)

* SSH to EC2
* S3cmd
* s3-parallel-put
* S3DistCp
* Redshift
* Kinesis
* Lambda

SSH to EC2

Connect to an Ubuntu EC2 instance through SSH with the given key:
!ssh -i key.pem ubuntu@ipaddress
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Connect to an Amazon Linux EC2 instance through SSH with the given key:
!ssh -i key.pem ec2-user@ipaddress
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
S3cmd

Before I discovered [S3cmd](http://s3tools.org/s3cmd), I had been using the [S3 console](http://aws.amazon.com/console/) to do basic operations and [boto](https://boto.readthedocs.org/en/latest/) to do more of the heavy lifting. However, sometimes I just want to hack away at a command line to do my work. I've found S3cmd to be a great command line tool for interacting with S3 on AWS. S3cmd is written in Python, is open source, and is free even for commercial use. It offers more advanced features than those found in the [AWS CLI](http://aws.amazon.com/cli/).

Install s3cmd:
!sudo apt-get install s3cmd
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Running the following command will prompt you to enter your AWS access and AWS secret keys. To follow security best practices, make sure you are using an IAM account as opposed to the root account. I also suggest enabling GPG encryption, which will encrypt your data at rest, and enabling HTTPS to encrypt your data in transit. Note that this might impact performance.
!s3cmd --configure
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Frequently used S3cmds:
# List all buckets
!s3cmd ls
# List the contents of the bucket
!s3cmd ls s3://my-bucket-name
# Upload a file into the bucket (private)
!s3cmd put myfile.txt s3://my-bucket-name/myfile.txt
# Upload a file into the bucket (public)
!s3cmd put --acl-public --guess-mime-type myfile.txt s3://my-bucket-name/myfile.txt
# Recursively upload a directory to s3
!s3cmd put --recursive my-local-folder-path/ s3://my-bucket-name/mydir/
# Download a file
!s3cmd get s3://my-bucket-name/myfile.txt myfile.txt
# Recursively download files that start with myfile
!s3cmd --recursive get s3://my-bucket-name/myfile
# Delete a file
!s3cmd del s3://my-bucket-name/myfile.txt
# Delete a bucket
!s3cmd del --recursive s3://my-bucket-name/
# Create a bucket
!s3cmd mb s3://my-bucket-name
# List bucket disk usage (human readable)
!s3cmd du -H s3://my-bucket-name/
# Sync local (source) to s3 bucket (destination)
!s3cmd sync my-local-folder-path/ s3://my-bucket-name/
# Sync s3 bucket (source) to local (destination)
!s3cmd sync s3://my-bucket-name/ my-local-folder-path/
# Do a dry-run (do not perform actual sync, but get information about what would happen)
!s3cmd --dry-run sync s3://my-bucket-name/ my-local-folder-path/
# Apply a standard shell wildcard include to sync s3 bucket (source) to local (destination)
!s3cmd --include '2014-05-01*' sync s3://my-bucket-name/ my-local-folder-path/
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
s3-parallel-put

[s3-parallel-put](https://github.com/twpayne/s3-parallel-put.git) is a great tool for uploading multiple files to S3 in parallel.

Install package dependencies:
!sudo apt-get install boto
!sudo apt-get install git
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Clone the s3-parallel-put repo:
!git clone https://github.com/twpayne/s3-parallel-put.git
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Setup AWS keys for s3-parallel-put:
!export AWS_ACCESS_KEY_ID=XXX
!export AWS_SECRET_ACCESS_KEY=XXX
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Sample usage:
!s3-parallel-put --bucket=bucket --prefix=PREFIX SOURCE
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Dry run of putting files in the current directory on S3 with the given S3 prefix, without first checking whether they exist:
!s3-parallel-put --bucket=bucket --host=s3.amazonaws.com --put=stupid --dry-run --prefix=prefix/ ./
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
S3DistCp

[S3DistCp](http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/UsingEMR_s3distcp.html) is an extension of DistCp that is optimized to work with Amazon S3. S3DistCp is useful for aggregating smaller files together: it takes in a grouping pattern and a target size and combines smaller input files into larger ones. S3DistCp can also be used to transfer large volumes of data from S3 to your Hadoop cluster.

To run S3DistCp with the EMR command line, ensure you are using the proper version of Ruby:
!rvm --default ruby-1.8.7-p374
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
The EMR command line below executes the following:

* Create a master node and slave nodes of type m1.small
* Runs S3DistCp on the source bucket location and concatenates files that match the date regular expression, resulting in files that are roughly 1024 MB or 1 GB
* Places the results in the destination bucket
!./elastic-mapreduce --create --instance-group master --instance-count 1 \
--instance-type m1.small --instance-group core --instance-count 4 \
--instance-type m1.small --jar /home/hadoop/lib/emr-s3distcp-1.0.jar \
--args "--src,s3://my-bucket-source/,--groupBy,.*([0-9]{4}-01).*,\
--dest,s3://my-bucket-dest/,--targetSize,1024"
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
For further optimization, compression can be helpful to save on AWS storage and bandwidth costs, to speed up the S3 to/from EMR transfer, and to reduce disk I/O. Note that compressed files are not easy to split for Hadoop. For example, Hadoop uses a single mapper per GZIP file, as it does not know about file boundaries.

What type of compression should you use?

* Time sensitive job: Snappy or LZO
* Large amounts of data: GZIP
* General purpose: GZIP, as it's supported by most platforms

You can specify the compression codec (gzip, lzo, snappy, or none) to use for copied files with S3DistCp with --outputCodec. If no value is specified, files are copied with no compression change. The code below sets the compression to lzo:
--outputCodec,lzo
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
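For instance, appending the flag to the earlier elastic-mapreduce invocation would look roughly as follows (same placeholder buckets and sizes as above; a sketch, not a verified run):

!./elastic-mapreduce --create --instance-group master --instance-count 1 \
--instance-type m1.small --instance-group core --instance-count 4 \
--instance-type m1.small --jar /home/hadoop/lib/emr-s3distcp-1.0.jar \
--args "--src,s3://my-bucket-source/,--groupBy,.*([0-9]{4}-01).*,\
--dest,s3://my-bucket-dest/,--targetSize,1024,--outputCodec,lzo"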
Redshift

Copy values from the given S3 location containing CSV files to a Redshift cluster:
copy table_name from 's3://source/part' credentials 'aws_access_key_id=XXX;aws_secret_access_key=XXX' csv;
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Copy values from the given location containing TSV files to a Redshift cluster:
copy table_name from 's3://source/part' credentials 'aws_access_key_id=XXX;aws_secret_access_key=XXX' csv delimiter '\t';
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
View Redshift errors:
select * from stl_load_errors;
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Vacuum Redshift in full:
VACUUM FULL;
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Analyze the compression of a table:
analyze compression table_name;
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Cancel the query with the specified id:
cancel 18764;
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
The CANCEL command will not abort a transaction. To abort or roll back a transaction, you must use the ABORT or ROLLBACK command. To cancel a query associated with a transaction, first cancel the query, then abort the transaction. If the query that you canceled is associated with a transaction, use the ABORT or ROLLBACK command to cancel the transaction and discard any changes made to the data:
abort;
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Reference table creation and setup: ![alt text](http://docs.aws.amazon.com/redshift/latest/dg/images/tutorial-optimize-tables-ssb-data-model.png)
CREATE TABLE part (
  p_partkey integer not null sortkey distkey,
  p_name varchar(22) not null,
  p_mfgr varchar(6) not null,
  p_category varchar(7) not null,
  p_brand1 varchar(9) not null,
  p_color varchar(11) not null,
  p_type varchar(25) not null,
  p_size integer not null,
  p_container varchar(10) not null
);

CREATE TABLE supplier (
  s_suppkey integer not null sortkey,
  s_name varchar(25) not null,
  s_address varchar(25) not null,
  s_city varchar(10) not null,
  s_nation varchar(15) not null,
  s_region varchar(12) not null,
  s_phone varchar(15) not null)
diststyle all;

CREATE TABLE customer (
  c_custkey integer not null sortkey,
  c_name varchar(25) not null,
  c_address varchar(25) not null,
  c_city varchar(10) not null,
  c_nation varchar(15) not null,
  c_region varchar(12) not null,
  c_phone varchar(15) not null,
  c_mktsegment varchar(10) not null)
diststyle all;

CREATE TABLE dwdate (
  d_datekey integer not null sortkey,
  d_date varchar(19) not null,
  d_dayofweek varchar(10) not null,
  d_month varchar(10) not null,
  d_year integer not null,
  d_yearmonthnum integer not null,
  d_yearmonth varchar(8) not null,
  d_daynuminweek integer not null,
  d_daynuminmonth integer not null,
  d_daynuminyear integer not null,
  d_monthnuminyear integer not null,
  d_weeknuminyear integer not null,
  d_sellingseason varchar(13) not null,
  d_lastdayinweekfl varchar(1) not null,
  d_lastdayinmonthfl varchar(1) not null,
  d_holidayfl varchar(1) not null,
  d_weekdayfl varchar(1) not null)
diststyle all;

CREATE TABLE lineorder (
  lo_orderkey integer not null,
  lo_linenumber integer not null,
  lo_custkey integer not null,
  lo_partkey integer not null distkey,
  lo_suppkey integer not null,
  lo_orderdate integer not null sortkey,
  lo_orderpriority varchar(15) not null,
  lo_shippriority varchar(1) not null,
  lo_quantity integer not null,
  lo_extendedprice integer not null,
  lo_ordertotalprice integer not null,
  lo_discount integer not null,
  lo_revenue integer not null,
  lo_supplycost integer not null,
  lo_tax integer not null,
  lo_commitdate integer not null,
  lo_shipmode varchar(10) not null
);
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
| Table name | Sort Key | Distribution Style |
|------------|--------------|--------------------|
| LINEORDER | lo_orderdate | lo_partkey |
| PART | p_partkey | p_partkey |
| CUSTOMER | c_custkey | ALL |
| SUPPLIER | s_suppkey | ALL |
| DWDATE | d_datekey | ALL |

[Sort Keys](http://docs.aws.amazon.com/redshift/latest/dg/tutorial-tuning-tables-sort-keys.html)

When you create a table, you can specify one or more columns as the sort key. Amazon Redshift stores your data on disk in sorted order according to the sort key. How your data is sorted has an important effect on disk I/O, columnar compression, and query performance.

Choose sort keys based on these best practices:

* If recent data is queried most frequently, specify the timestamp column as the leading column for the sort key.
* If you do frequent range filtering or equality filtering on one column, specify that column as the sort key.
* If you frequently join a (dimension) table, specify the join column as the sort key.

[Distribution Styles](http://docs.aws.amazon.com/redshift/latest/dg/c_choosing_dist_sort.html)

When you create a table, you designate one of three distribution styles: KEY, ALL, or EVEN.

**KEY distribution**

The rows are distributed according to the values in one column. The leader node will attempt to place matching values on the same node slice. If you distribute a pair of tables on the joining keys, the leader node collocates the rows on the slices according to the values in the joining columns so that matching values from the common columns are physically stored together.

**ALL distribution**

A copy of the entire table is distributed to every node. Where EVEN distribution or KEY distribution place only a portion of a table's rows on each node, ALL distribution ensures that every row is collocated for every join that the table participates in.

**EVEN distribution**

The rows are distributed across the slices in a round-robin fashion, regardless of the values in any particular column. EVEN distribution is appropriate when a table does not participate in joins or when there is not a clear choice between KEY distribution and ALL distribution. EVEN distribution is the default distribution style.

Kinesis

Create a stream:
!aws kinesis create-stream --stream-name Foo --shard-count 1 --profile adminuser
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
List all streams:
!aws kinesis list-streams --profile adminuser
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Get info about the stream:
!aws kinesis describe-stream --stream-name Foo --profile adminuser
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Put a record to the stream:
!aws kinesis put-record --stream-name Foo --data "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IDEyMy4=" --partition-key shardId-000000000000 --region us-east-1 --profile adminuser
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Get records from a given shard:
!SHARD_ITERATOR=$(aws kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name Foo --query 'ShardIterator' --profile adminuser)

aws kinesis get-records --shard-iterator $SHARD_ITERATOR
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Delete a stream:
!aws kinesis delete-stream --stream-name Foo --profile adminuser
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Lambda

List lambda functions:
!aws lambda list-functions \
    --region us-east-1 \
    --max-items 10
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Upload a lambda function:
!aws lambda upload-function \
    --region us-east-1 \
    --function-name foo \
    --function-zip file-path/foo.zip \
    --role IAM-role-ARN \
    --mode event \
    --handler foo.handler \
    --runtime nodejs \
    --debug
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Invoke a lambda function:
!aws lambda invoke-async \
    --function-name foo \
    --region us-east-1 \
    --invoke-args foo.txt \
    --debug
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Return metadata for a specific function:
!aws lambda get-function-configuration \
    --function-name helloworld \
    --region us-east-1 \
    --debug
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Return metadata for a specific function along with a presigned URL that you can use to download the function's .zip file that you uploaded:
!aws lambda get-function \
    --function-name helloworld \
    --region us-east-1 \
    --debug
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Add an event source:
!aws lambda add-event-source \
    --region us-east-1 \
    --function-name ProcessKinesisRecords \
    --role invocation-role-arn \
    --event-source kinesis-stream-arn \
    --batch-size 100 \
    --profile adminuser
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Delete a lambda function:
!aws lambda delete-function \
    --function-name helloworld \
    --region us-east-1 \
    --debug
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks