Extracting day/time features from the order file.
order_comp['day'] = order_comp.created_at.dt.day
order_comp['hour'] = order_comp.created_at.dt.hour
order_comp['weekday'] = order_comp.created_at.dt.dayofweek
order_comp['geography'] = order_comp.pickup_locality

order_comp_test['day'] = order_comp_test.created_at.dt.day
order_comp_test['hour'] = order_comp_test.created_at.dt.hour
order_comp_test['weekday'] = order_comp_test.created_at.dt.dayofweek
order_comp_test['geography'] = order_comp_test.pickup_locality

order_comp['count_order'] = 1
order_comp_test['count_order'] = 1
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
<a id='nlp'>4. Preprocessing the data.</a>
def lower(x):
    return x.lower()

order_comp.geography = order_comp.geography.apply(lower)
train.geography = train.geography.apply(lower)
train.head()

order_comp.replace({'hsr layout': 'hsr_layout'}, inplace=True)

from sklearn.preprocessing import LabelEncoder

# Fit a single encoder on all locality values, then only transform each frame,
# so the same locality always maps to the same integer code in train and test.
le_geo = LabelEncoder()
le_geo.fit(pd.concat([order_comp.geography, train.geography,
                      order_comp_test.geography, test.geography]))
order_comp.geography = le_geo.transform(order_comp.geography)
train.geography = le_geo.transform(train.geography)
order_comp_test.geography = le_geo.transform(order_comp_test.geography)
test.geography = le_geo.transform(test.geography)
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
The authors of the data have very cleverly organised it: only the data from 2nd Jan to 12th Jan is removed, and the remaining data is used to predict orders per hour, orders per week, and orders per day-hour combination.
train.head()
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
Correct the weekday so that train/test match pandas' 0-based `dayofweek`.
order_comp.head()
train.weekday = train.weekday - 1
test.weekday = test.weekday - 1
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
<a id='ot5'>3.5 Aggregate features on order file.</a> Aggregate features: 1) Total orders within a geography in an hour of a weekday<br> 2) Total orders in a weekday<br> 3) Total orders in a day-hour combination.<br> Combining all of them with train.
order_comp.loc[order_comp.day > 1].groupby(by=['geography', 'weekday', 'hour'], as_index=False).count_order.sum()
order_comp.loc[order_comp.day > 1].groupby(by=['weekday'], as_index=False).count_order.sum()
order_comp.loc[order_comp.day > 1].groupby(by=['day', 'hour'], as_index=False).count_order.sum()

order_comp_test.loc[order_comp_test.day > 1].groupby(by=['weekday', 'hour'], as_index=False).count_order.sum()
order_comp_test.loc[order_comp_test.day > 1].groupby(by=['weekday'], as_index=False).count_order.sum()
order_comp_test.loc[order_comp_test.day > 1].groupby(by=['day', 'hour'], as_index=False).count_order.sum()

train = train.merge(order_comp.loc[order_comp.day > 1].groupby(by=['geography', 'weekday', 'hour'], as_index=False).count_order.sum(),
                    on=['geography', 'weekday', 'hour'], how='left')
train = train.merge(order_comp.loc[order_comp.day > 1].groupby(by=['geography', 'weekday'], as_index=False).count_order.sum(),
                    on=['geography', 'weekday'], how='left')
train = train.merge(order_comp.loc[order_comp.day > 1].groupby(by=['weekday', 'hour'], as_index=False).count_order.sum(),
                    on=['weekday', 'hour'], how='left', suffixes=('', '_1'))
train = train.merge(order_comp.loc[order_comp.day > 1].groupby(by=['weekday'], as_index=False).count_order.sum(),
                    on=['weekday'], how='left', suffixes=('', '_2'))
#train = train.merge(order_comp.loc[order_comp.day > 1].groupby(by=['day', 'hour'], as_index=False).count_order.sum(),
#                    on=['day', 'hour'], how='left')
train.head()

test = test.merge(order_comp_test.loc[order_comp_test.day > 1].groupby(by=['geography', 'weekday', 'hour'], as_index=False).count_order.sum(),
                  on=['geography', 'weekday', 'hour'], how='left')
test = test.merge(order_comp_test.loc[order_comp_test.day > 1].groupby(by=['geography', 'weekday'], as_index=False).count_order.sum(),
                  on=['geography', 'weekday'], how='left')
test = test.merge(order_comp_test.loc[order_comp_test.day > 1].groupby(by=['weekday', 'hour'], as_index=False).count_order.sum(),
                  on=['weekday', 'hour'], how='left', suffixes=('', '_1'))
test = test.merge(order_comp_test.loc[order_comp_test.day > 1].groupby(by=['weekday'], as_index=False).count_order.sum(),
                  on=['weekday'], how='left', suffixes=('', '_2'))

# Parse driver login/logout timestamps.
import datetime

def dat1(X):  # unused strptime-based alternative to dateutil's parse
    return datetime.datetime.strptime(X, "%Y-%b-%d %H:%M:%S")

from dateutil.parser import parse

def date1(x):
    return parse(x)

driver_log.login_time = driver_log.login_time.apply(date1)
driver_log.logout_time = driver_log.logout_time.apply(date1)
driver_log_test.login_time = driver_log_test.login_time.apply(date1)
driver_log_test.logout_time = driver_log_test.logout_time.apply(date1)
driver_log_test.head()
(driver_log_test.logout_time - driver_log_test.login_time).dt.seconds.head()
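The merge-with-aggregate pattern above repeats for several key sets; a small helper keeps it readable (a sketch, reusing the frame and column names from the cell above):

def add_count_feature(df, orders, keys, suffix=''):
    # sum `count_order` over `keys` (skipping day 1) and left-join onto `df`
    agg = orders.loc[orders.day > 1].groupby(keys, as_index=False).count_order.sum()
    return df.merge(agg, on=keys, how='left', suffixes=('', suffix))

# e.g. train = add_count_feature(train, order_comp, ['weekday', 'hour'], suffix='_1')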
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
<a id='nlp'>4. Making the model.</a> <a id='nlp'>4.1 Preparing the validation set.</a> Making the validation set: a normal random train/test split won't work here, so divide according to date and treat the held-out date as a test set.
valid = train.loc[train.dt == '31-Jan-18'].reset_index(drop=True)
train_val = train.loc[train.dt != '31-Jan-18'].reset_index(drop=True)

len(test.columns)
len(train.columns)
test.columns
train.columns

col = ['Latitude', 'Longitude', 'res_id', 'minute', 'geography',
       'stockout_hour', 'stockout_week_hour', 'stockout_hour_minute',
       'stockout_hour_week_minute', 'stockout_x', 'stockout_hour_res',
       'stockout_hour_res_minute', 'stockout_counthour_res_minute',
       'count_order_x']
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
Using a RandomForest model.
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, BaggingClassifier, AdaBoostClassifier
import xgboost as xgb

X = train[col]
y = train.stockout
train_X = train_val[col]
train_y = train_val.stockout
test_X = valid[col]
test_y = valid.stockout
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
<a id='nlp2'>4.2 Applying the RandomForestClassifier</a>
clf = RandomForestClassifier(n_estimators=30, max_depth=7)  # highest accuracy so far

# Alternatives tried:
#clf = AdaBoostClassifier(base_estimator=clf)
import lightgbm as lgb
from sklearn.tree import ExtraTreeClassifier
#clf = lgb.LGBMClassifier(max_depth=8, n_estimators=1000, random_state=5)
#clf = ExtraTreeClassifier(max_depth=7)
#clf = GradientBoostingClassifier(n_estimators=100, max_depth=7)

clf.fit(train_X[col], train_y)

# Baseline: predict all zeros.
a = np.zeros(test_y.shape)

from sklearn.metrics import accuracy_score, confusion_matrix
print('prediction ->', accuracy_score(test_y, clf.predict(test_X[col])))
print('zeroes ->', accuracy_score(test_y, a))
confusion_matrix(test_y, clf.predict(test_X[col]))

valid.loc[valid.stockout == 1]
valid.iloc[clf.predict(test_X[col]) == 1]
test
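Since stockouts are rare (the all-zeros baseline above already scores well on accuracy), per-class precision and recall are more informative; a short sketch using the variables defined above:

from sklearn.metrics import classification_report

# precision/recall/F1 per class, instead of a single accuracy number
print(classification_report(test_y, clf.predict(test_X[col])))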
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
Feature Importance chart
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

plt.figure(figsize=(20, 5))
sns.barplot(col, clf.feature_importances_)
pd.DataFrame(clf.feature_importances_, index=col).sort_values(by=0)

# Refit on the full training data and write the submission.
sub = pd.read_csv('test/submission_online_testcase.csv')
clf.fit(X, y)
sub['stockout'] = clf.predict(test[X.columns])[:len(sub)]
sub.to_csv('submi/hmsub_allfeat_count.csv', index=None)
sub.stockout.sum()
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"> Creating a numpy array from an image file:</p> <br> Let's choose a WIFIRE satellite image file, load it as an ndarray, and display its type.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from skimage import data
from scipy import misc  # misc.imread is used below (assumes an older SciPy)

photo_data = misc.imread('./wifire/sd-3layers.jpg')
type(photo_data)
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
Let's see what is in this image.
plt.figure(figsize=(15, 15))
plt.imshow(photo_data)
photo_data.shape
#print(photo_data)
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
The shape of the ndarray shows that it is a three-layered matrix. The first two numbers are the height and width, and the third number (i.e. 3) is the number of layers: Red, Green and Blue. <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"> RGB Color Mapping in the Photo:</p> <br> <ul> <li><p style="font-family: Arial; font-size:1.75em;color:red; font-style:bold"> RED pixel indicates Altitude</p> <li><p style="font-family: Arial; font-size:1.75em;color:blue; font-style:bold"> BLUE pixel indicates Aspect </p> <li><p style="font-family: Arial; font-size:1.75em;color:green; font-style:bold"> GREEN pixel indicates Slope </p> </ul> <br> Higher values denote higher altitude, aspect and slope.
photo_data.size
photo_data.min(), photo_data.max()
photo_data.mean()
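Since each layer is a separate measurement (altitude, aspect, slope), per-channel statistics can be more informative than the global ones above; a quick sketch:

# per-channel summary (0 = Red/altitude, 1 = Green/slope, 2 = Blue/aspect)
for i, name in enumerate(['Red', 'Green', 'Blue']):
    channel = photo_data[:, :, i]
    print(name, channel.min(), channel.max(), channel.mean())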
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br> Pixel on the 150th Row and 250th Column</p>
photo_data[150, 250]     # all three channel values at that pixel
photo_data[150, 250, 1]  # just the green channel
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br> Set a Pixel to All Zeros</p> <br/> We can set all three layers of a pixel at once by assigning zero to that (row, column) pair. However, setting one pixel to zero is not noticeable.
#photo_data = misc.imread('./wifire/sd-3layers.jpg')
photo_data[150, 250] = 0
plt.figure(figsize=(10, 10))
plt.imshow(photo_data)
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br> Changing colors in a Range</p> <br/> We can also use a range to change the pixel values. As an example, let's set the green layer for rows 200 to 800 to full intensity.
photo_data = misc.imread('./wifire/sd-3layers.jpg')
photo_data[200:800, :, 1] = 255  # green channel to full intensity
plt.figure(figsize=(10, 10))
plt.imshow(photo_data)

photo_data = misc.imread('./wifire/sd-3layers.jpg')
photo_data[200:800, :] = 255  # all channels to white
plt.figure(figsize=(10, 10))
plt.imshow(photo_data)

photo_data = misc.imread('./wifire/sd-3layers.jpg')
photo_data[200:800, :] = 0  # all channels to black
plt.figure(figsize=(10, 10))
plt.imshow(photo_data)
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br> Pick all Pixels with Low Values</p>
photo_data = misc.imread('./wifire/sd-3layers.jpg')
print("Shape of photo_data:", photo_data.shape)
low_value_filter = photo_data < 200
print("Shape of low_value_filter:", low_value_filter.shape)
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"> Filtering Out Low Values</p> Whenever the low_value_filter is True, set value to 0. <br/>
#import random
plt.figure(figsize=(10, 10))
plt.imshow(photo_data)
photo_data[low_value_filter] = 0
plt.figure(figsize=(10, 10))
plt.imshow(photo_data)
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"> More Row and Column Operations</p> <br> You can design complex patterns by making cols a function of rows or vice versa. Here we try a linear relationship between rows and columns.
rows_range = np.arange(len(photo_data))
cols_range = rows_range
print(type(rows_range))
photo_data[rows_range, cols_range] = 255
plt.figure(figsize=(15, 15))
plt.imshow(photo_data)
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br> Masking Images</p> <br>Now let us try something even cooler: a mask in the shape of a circular disc. <img src="./1494532821.png" align="left" style="width:550px;height:360px;"/>
total_rows, total_cols, total_layers = photo_data.shape
#print("photo_data = ", photo_data.shape)

X, Y = np.ogrid[:total_rows, :total_cols]
#print("X = ", X.shape, " and Y = ", Y.shape)

center_row, center_col = total_rows / 2, total_cols / 2
#print("center_row = ", center_row, "AND center_col = ", center_col)
#print(X - center_row)
#print(Y - center_col)

dist_from_center = (X - center_row)**2 + (Y - center_col)**2
#print(dist_from_center)

radius = (total_rows / 2)**2  # squared radius, to match the squared distance
#print("Radius = ", radius)

circular_mask = (dist_from_center > radius)
#print(circular_mask)
print(circular_mask[1500:1700, 2000:2200])

photo_data = misc.imread('./wifire/sd-3layers.jpg')
photo_data[circular_mask] = 0
plt.figure(figsize=(15, 15))
plt.imshow(photo_data)
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"> Further Masking</p> <br/>You can refine the mask further, for example to keep just the upper half of the disc.
X, Y = np.ogrid[:total_rows, :total_cols]
half_upper = X < center_row  # this line generates a mask for all rows above the center
half_upper_mask = np.logical_and(half_upper, circular_mask)

photo_data = misc.imread('./wifire/sd-3layers.jpg')
photo_data[half_upper_mask] = 255
#photo_data[half_upper_mask] = random.randint(200,255)
plt.figure(figsize=(15, 15))
plt.imshow(photo_data)
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
<p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br> Further Processing of our Satellite Imagery </p> <p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"> Processing of RED Pixels</p> Remember that red pixels tell us about altitude. Let us try to highlight all the high-altitude areas by detecting high-intensity RED pixels and muting down other areas.
photo_data = misc.imread('./wifire/sd-3layers.jpg')
red_mask = photo_data[:, :, 0] < 150
photo_data[red_mask] = 0
plt.figure(figsize=(15, 15))
plt.imshow(photo_data)
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br> Detecting Highly-GREEN Pixels</p>
photo_data = misc.imread('./wifire/sd-3layers.jpg')
green_mask = photo_data[:, :, 1] < 150
photo_data[green_mask] = 0
plt.figure(figsize=(15, 15))
plt.imshow(photo_data)
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br> Detecting Highly-BLUE Pixels</p>
photo_data = misc.imread('./wifire/sd-3layers.jpg')
blue_mask = photo_data[:, :, 2] < 150
photo_data[blue_mask] = 0
plt.figure(figsize=(15, 15))
plt.imshow(photo_data)
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
<p style="font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold"><br> Composite mask that takes thresholds on all three layers: RED, GREEN, BLUE</p>
photo_data = misc.imread('./wifire/sd-3layers.jpg')
red_mask = photo_data[:, :, 0] < 150
green_mask = photo_data[:, :, 1] > 100
blue_mask = photo_data[:, :, 2] < 100

# np.logical_and only takes two input arrays (a third positional argument is
# the `out` parameter), so combine the three masks pairwise.
final_mask = np.logical_and(np.logical_and(red_mask, green_mask), blue_mask)
photo_data[final_mask] = 0
plt.figure(figsize=(15, 15))
plt.imshow(photo_data)
python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb
rvm-segfault/edx
apache-2.0
Forests of randomized trees The sklearn.ensemble module includes two averaging algorithms based on randomized decision trees: the RandomForest algorithm and the Extra-Trees method. Both use a perturb-and-combine style: a diverse set of classifiers is created by introducing randomness into the classifier construction, and the prediction of the ensemble is the averaged prediction of the individual classifiers.
from sklearn.ensemble import RandomForestClassifier

X = [[0, 0], [1, 1]]
Y = [0, 1]
clf = RandomForestClassifier(n_estimators=10)
clf = clf.fit(X, Y)
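The Extra-Trees method mentioned above is used the same way; a parallel sketch on the same toy data:

from sklearn.ensemble import ExtraTreesClassifier

# Extra-Trees adds randomness to the split thresholds as well as the features
etc = ExtraTreesClassifier(n_estimators=10)
etc = etc.fit(X, Y)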
ML Algorithm - Random Forests.ipynb
machlearn/ipython-notebooks
mit
DP references: Dynamic Programming - From Novice to Advanced, by topcoder member Dumitru (link). Monge arrays: have a look at https://en.wikipedia.org/wiki/Monge_array
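For reference, the Monge property can also be checked directly from its defining inequality, A[i][j] + A[k][l] <= A[i][l] + A[k][j] for all i < k and j < l; a brute-force sketch (the notebook's `is_not_monge` below infers the property from the minima structure instead):

def is_monge(matrix):
    # direct O(n^2 m^2) check of the defining inequality
    rows, cols = len(matrix), len(matrix[0])
    return all(matrix[i][j] + matrix[k][l] <= matrix[i][l] + matrix[k][j]
               for i in range(rows) for k in range(i + 1, rows)
               for j in range(cols) for l in range(j + 1, cols))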
import operator
from itertools import zip_longest, count
from collections import OrderedDict

def parity_numbered_rows(matrix, parity, include_index=False):
    start = 0 if parity == 'even' else 1
    return [(i, r) if include_index else r
            for i in range(start, len(matrix), 2)
            for r in [matrix[i]]]

def argmin(iterable, only_index=True):
    index, minimum = index_min = min(enumerate(iterable), key=operator.itemgetter(1))
    return index if only_index else index_min

def interleaving(one, another):
    for o, a in zip_longest(one, another):
        yield o
        if a:
            yield a

def is_sorted(iterable, pred=lambda l, g: l <= g):
    _, *rest = iterable
    return all(pred(l, g) for l, g in zip(iterable, rest))

def minima_indexes(matrix):
    if len(matrix) == 1:
        return [argmin(matrix.pop())]
    recursion = minima_indexes(parity_numbered_rows(matrix, parity='even'))
    even_minima = OrderedDict((i, m) for i, m in zip(count(start=0, step=2), recursion))
    odd_minima = [argmin(odd_r[start:end]) + start
                  for o, odd_r in parity_numbered_rows(matrix, parity='odd', include_index=True)
                  for start in [even_minima[o-1]]
                  for end in [even_minima[o+1]+1 if o+1 in even_minima else None]]
    return list(interleaving(even_minima.values(), odd_minima))

def minima(matrix):
    return [matrix[i][m] for i, m in enumerate(minima_indexes(matrix))]

def is_not_monge(matrix):
    return any(any(matrix[r][m] > matrix[r][i] for i in range(m))
               for r, m in enumerate(minima_indexes(matrix)))
tutorials/dynamic-programming.ipynb
massimo-nocentini/competitive-programming
mit
The following is a Monge array:
matrix = [
    [10, 17, 13, 28, 23],
    [17, 22, 16, 29, 23],
    [24, 28, 22, 34, 24],
    [11, 13,  6, 17,  7],
    [45, 44, 32, 37, 23],
    [36, 33, 19, 21,  6],
    [75, 66, 51, 53, 34],
]
minima(matrix)
minima_indexes(matrix)
is_not_monge(matrix)
tutorials/dynamic-programming.ipynb
massimo-nocentini/competitive-programming
mit
The following is not a Monge array:
matrix = [
    [37, 23, 22, 32],
    [21,  6,  7, 10],
    [53, 34, 30, 31],
    [32, 13,  9,  6],
    [43, 21, 15,  8],
]
minima(matrix)  # produces a wrong answer!!!
minima_indexes(matrix)
is_not_monge(matrix)
tutorials/dynamic-programming.ipynb
massimo-nocentini/competitive-programming
mit
longest_increasing_subsequence
from functools import lru_cache

@memo_holder
def longest_increasing_subsequence(seq):
    L = []
    for i, current in enumerate(seq):
        """opt, arg = max([(l, j) for (l, j) in L[:i] if l[-1] < current],
                          key=lambda p: len(p[0]), default=([], tuple()))
        L.append(opt + [current], (arg, i))"""
        L.append(max(filter(lambda prefix: prefix[-1] < current, L[:i]),
                     key=len, default=[]) + [current])
    return max(L, key=len), L

def lis_rec(seq):
    @lru_cache(maxsize=None)
    def rec(i):
        current = seq[i]
        return max([rec(j) for j in range(i) if seq[j] < current],
                   key=len, default=[]) + [current]
    return max([rec(i) for i, _ in enumerate(seq)], key=len)
tutorials/dynamic-programming.ipynb
massimo-nocentini/competitive-programming
mit
a simple test case taken from page 157:
seq = [5, 2, 8, 6, 3, 6, 9, 7]  # see page 157
subseq, memo_table = longest_increasing_subsequence(seq, memo_table=True)
subseq
tutorials/dynamic-programming.ipynb
massimo-nocentini/competitive-programming
mit
The memoization table shows that [2, 3, 6, 7] is another solution:
memo_table
lis_rec(seq)
tutorials/dynamic-programming.ipynb
massimo-nocentini/competitive-programming
mit
The following is an average case where the sequence is generated randomly:
from random import randint

length = int(5e3)
seq = [randint(0, length) for _ in range(length)]
%timeit longest_increasing_subsequence(seq)
%timeit lis_rec(seq)
tutorials/dynamic-programming.ipynb
massimo-nocentini/competitive-programming
mit
The worst-case scenario is when the sequence is already sorted in increasing order:
seq = range(length)
%timeit longest_increasing_subsequence(seq)
%timeit lis_rec(seq)
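Both implementations above are at least quadratic. When only the length of the LIS is needed, patience sorting brings this down to O(n log n); a sketch for comparison:

from bisect import bisect_left

def lis_length(seq):
    tails = []  # tails[k] = smallest tail of an increasing subsequence of length k+1
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

lis_length([5, 2, 8, 6, 3, 6, 9, 7])  # 4, matching the page-157 example above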
tutorials/dynamic-programming.ipynb
massimo-nocentini/competitive-programming
mit
edit_distance
@memo_holder
def edit_distance(xs, ys,
                  gap_in_xs=lambda y: 1,      # cost of putting a gap in `xs` when reading `y`
                  gap_in_ys=lambda x: 1,      # cost of putting a gap in `ys` when reading `x`
                  mismatch=lambda x, y: 1,    # cost of a mismatch (x, y) in the sense of `==`
                  gap='▢', mark=lambda s: s.swapcase(), reduce=sum):

    T = {}
    T.update({(i, 0): (xs[:i], gap * i, i) for i in range(len(xs)+1)})
    T.update({(0, j): (gap * j, ys[:j], j) for j in range(len(ys)+1)})

    def combine(w, z):
        a, b, c = zip(w, z)
        return ''.join(a), ''.join(b), reduce(c)

    for i, x in enumerate(xs, start=1):
        for j, y in enumerate(ys, start=1):
            T[i, j] = min(combine(T[i-1, j], (x, gap, gap_in_ys(x))),
                          combine(T[i, j-1], (gap, y, gap_in_xs(y))),
                          combine(T[i-1, j-1], (x, y, 0) if x == y
                                  else (mark(x), mark(y), mismatch(x, y))),
                          key=lambda t: t[2])

    return T[len(xs), len(ys)], T

(xs, ys, cost), memo_table = edit_distance('exponential', 'polynomial', memo_table=True)
print('edit with cost {}:\n\n{}\n{}'.format(cost, xs, ys))
memo_table

(xs, ys, cost), memo_table = edit_distance('exponential', 'polynomial', memo_table=True,
                                           mismatch=lambda x, y: 10)
print('edit with cost {}:\n\n{}\n{}'.format(cost, xs, ys))
memo_table
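For comparison, the classic unit-cost edit distance without alignment reconstruction is just a few lines; a sketch:

def levenshtein(xs, ys):
    # rolling one-row DP over the standard edit-distance recurrence
    prev = list(range(len(ys) + 1))
    for i, x in enumerate(xs, start=1):
        curr = [i]
        for j, y in enumerate(ys, start=1):
            curr.append(min(prev[j] + 1,              # gap in ys
                            curr[j - 1] + 1,          # gap in xs
                            prev[j - 1] + (x != y)))  # match/mismatch
        prev = curr
    return prev[-1]

levenshtein('exponential', 'polynomial')  # 6, agreeing with the unit-cost run above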
tutorials/dynamic-programming.ipynb
massimo-nocentini/competitive-programming
mit
matrix_product_ordering
@memo_holder
def matrix_product_ordering(o, w, op):
    n = len(o)
    T = {(i, i): (lambda i=i: o[i], 0) for i in range(n)}  # bind i per iteration

    def combine(i, r, j):
        t_ir, c_ir = T[i, r]
        t_rj, c_rj = T[r+1, j]
        return (lambda: op(t_ir(), t_rj()),
                c_ir + c_rj + w(i, r+1, j+1))  # w[i]*w[r+1]*w[j+1]

    for d in range(1, n):
        for i in range(n-d):
            j = i + d
            T[i, j] = min([combine(i, r, j) for r in range(i, j)], key=lambda t: t[-1])

    opt, cost = T[0, n-1]
    return (opt(), cost), T

def parens_str_proxy(w, **kwds):
    return matrix_product_ordering(o=['▢']*(len(w)-1),
                                   w=lambda l, c, r: w[l]*w[c]*w[r],
                                   op=lambda a, b: '({} {})'.format(a, b),
                                   **kwds)

(opt, cost), memo_table = parens_str_proxy(w={0: 100, 1: 20, 2: 1000, 3: 2, 4: 50}, memo_table=True)
opt, cost
{k: (thunk(), cost) for k, (thunk, cost) in memo_table.items()}

(opt, cost), memo_table = parens_str_proxy(w={i: (i+1) for i in range(10)}, memo_table=True)
opt, cost
{k: (thunk(), cost) for k, (thunk, cost) in memo_table.items()}
tutorials/dynamic-programming.ipynb
massimo-nocentini/competitive-programming
mit
http://oeis.org/A180118
from sympy import fibonacci, Matrix, init_printing
init_printing()

(opt, cost), memo_table = parens_str_proxy(w={i: fibonacci(i+1) for i in range(10)}, memo_table=True)
opt, cost
{k: (thunk(), cost) for k, (thunk, cost) in memo_table.items()}
tutorials/dynamic-programming.ipynb
massimo-nocentini/competitive-programming
mit
http://oeis.org/A180664
def to_matrix_cost(dim, memo_table):
    n, m = dim
    return Matrix(n, m, lambda n, k: memo_table.get((n, k), (lambda: None, 0))[-1])

to_matrix_cost(dim=(9, 9), memo_table=memo_table)
tutorials/dynamic-programming.ipynb
massimo-nocentini/competitive-programming
mit
longest_common_subsequence
from itertools import chain

@memo_holder
def longest_common_subsequence(A, B, gap_A, gap_B,
                               equal=lambda a, b: 1,
                               shrink_A=lambda a: 0,
                               shrink_B=lambda b: 0,
                               reduce=sum):

    T = {}
    T.update({(i, 0): ([gap_A]*i, 0) for i in range(len(A)+1)})
    T.update({(0, j): ([gap_B]*j, 0) for j in range(len(B)+1)})

    def combine(w, z):
        alpha, beta = zip(w, z)
        return list(chain.from_iterable(alpha)), reduce(beta)

    for i, a in enumerate(A, start=1):
        for j, b in enumerate(B, start=1):
            T[i, j] = (combine(T[i-1, j-1], ([a], equal(a, b))) if a == b
                       else max(combine(T[i, j-1], ([gap_B], -shrink_B(b))),
                                combine(T[i-1, j], ([gap_A], -shrink_A(a))),
                                key=lambda t: t[-1]))

    opt, cost = T[len(A), len(B)]
    return (opt, cost), T

def pprint_memo_table(T, joiner, do=str):
    return {k: (joiner.join(map(do, v[0])), v[1]) for k, v in T.items()}

(opt, cost), memo_table = longest_common_subsequence(A='ADCAAB', B='BAABDCDCAACACBA',
                                                     gap_A='▢', gap_B='○',
                                                     #shrink_B=lambda b: 1,
                                                     memo_table=True)
print('BAABDCDCAACACBA')
print(''.join(opt))
pprint_memo_table(memo_table, joiner='')

(opt, cost), memo_table = longest_common_subsequence(
    A=[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55],
    B=[1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786, 208012, 742900, 2674440],
    gap_A='▢', gap_B='○',
    #shrink_B=lambda b: 1,
    memo_table=True)
print(','.join(map(str, opt)))
pprint_memo_table(memo_table, joiner=',')
tutorials/dynamic-programming.ipynb
massimo-nocentini/competitive-programming
mit
NOTE on notation

* _x, _y, _z, ...: NumPy 0-d or 1-d arrays
* _X, _Y, _Z, ...: NumPy 2-d or higher dimensional arrays
* x, y, z, ...: 0-d or 1-d tensors
* X, Y, Z, ...: 2-d or higher dimensional tensors

Control Flow Operations

Q1. Let x and y be random 0-D tensors. Return x + y if x < y and x - y otherwise.

Q2. Let x and y be 0-D int32 tensors randomly selected from 0 to 5. Return x + y * 2 if x < y, x - y elif x > y, 0 otherwise.

Q3. Let X be a tensor [[-1, -2, -3], [0, 1, 2]] and Y be a tensor of zeros with the same shape as X. Return a boolean tensor that yields True if X equals Y elementwise.

Logical Operators

Q4. Given x and y below, return the truth value x AND/OR/XOR y element-wise.
x = tf.constant([True, False, False], tf.bool)
y = tf.constant([True, True, False], tf.bool)
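One possible answer for Q4, sketched with the standard TF logical ops (assumes `tf` is already imported, in the TF 1.x style used by the rest of the notebook):

and_ = tf.logical_and(x, y)  # [True, False, False]
or_ = tf.logical_or(x, y)    # [True, True, False]
xor_ = tf.logical_xor(x, y)  # [False, True, False]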
programming/Python/tensorflow/exercises/Control_Flow.ipynb
diegocavalca/Studies
cc0-1.0
Q5. Given x, return the truth value of NOT x element-wise.
x = tf.constant([True, False, False], tf.bool)
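One possible answer for Q5, as a sketch:

not_x = tf.logical_not(x)  # [False, True, True]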
programming/Python/tensorflow/exercises/Control_Flow.ipynb
diegocavalca/Studies
cc0-1.0
Normal file operations and data preparation for a later example.
# list recursively everything under the root dir
! {hadoop_root + 'bin/hdfs dfs -ls -R /'}
instructor-notes/1-hadoop-streaming-py-wordcount.ipynb
dsiufl/2015-Fall-Hadoop
mit
Download some files for later use.
# We will use three ebooks from Project Gutenberg for a later example.

# Pride and Prejudice by Jane Austen: http://www.gutenberg.org/ebooks/1342.txt.utf-8
! wget http://www.gutenberg.org/ebooks/1342.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/pride-and-prejudice.txt

# Alice's Adventures in Wonderland by Lewis Carroll: http://www.gutenberg.org/ebooks/11.txt.utf-8
! wget http://www.gutenberg.org/ebooks/11.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/alice.txt

# The Adventures of Sherlock Holmes by Arthur Conan Doyle: http://www.gutenberg.org/ebooks/1661.txt.utf-8
! wget http://www.gutenberg.org/ebooks/1661.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/sherlock-holmes.txt

# delete existing folders
! {hadoop_root + 'bin/hdfs dfs -rm -R /user/ubuntu/*'}

# create input folder
! {hadoop_root + 'bin/hdfs dfs -mkdir /user/ubuntu/input'}

# copy the three books to the input folder in HDFS
! {hadoop_root + 'bin/hdfs dfs -copyFromLocal /home/ubuntu/shortcourse/data/wordcount/* /user/ubuntu/input/'}

# show if the files are there
! {hadoop_root + 'bin/hdfs dfs -ls -R'}
instructor-notes/1-hadoop-streaming-py-wordcount.ipynb
dsiufl/2015-Fall-Hadoop
mit
2. WordCount Example. Let's count single-word frequencies in the three uploaded books. First, start YARN, the resource manager for Hadoop.
# start YARN
! {hadoop_root + 'sbin/start-yarn.sh'}

# wordcount 1 scripts:
# Map:    /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py
# Reduce: /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py

# Test the map script locally
! echo "go gators gators beat everyone go glory gators" | \
  /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py

# Test the full map -> sort -> reduce pipeline locally
! echo "go gators gators beat everyone go glory gators" | \
  /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py | \
  sort -k1,1 | \
  /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py

# run them with Hadoop against the uploaded three books
cmd = hadoop_root + 'bin/hadoop jar ' + hadoop_root + 'hadoop-streaming-2.7.1.jar ' + \
      '-input input ' + \
      '-output output ' + \
      '-mapper /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py ' + \
      '-reducer /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py ' + \
      '-file /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py ' + \
      '-file /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py'
! {cmd}

# list the output
! {hadoop_root + 'bin/hdfs dfs -ls -R output'}

# Let's see what's in the output file.
# Delete previous results if they exist, then copy the output to local.
! rm -rf /home/ubuntu/shortcourse/tmp/*
! {hadoop_root + 'bin/hdfs dfs -copyToLocal output/part-00000 /home/ubuntu/shortcourse/tmp/wc1-part-00000'}
! tail -n 20 /home/ubuntu/shortcourse/tmp/wc1-part-00000
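The mapper.py and reducer.py referenced above are not included in the notebook; a typical Hadoop-streaming wordcount pair looks like this (a sketch, not necessarily identical to the course scripts):

#!/usr/bin/env python
# mapper.py (sketch): emit "word<TAB>1" for every word on stdin
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print('%s\t%s' % (word, 1))

#!/usr/bin/env python
# reducer.py (sketch): stdin arrives sorted by key, so counts can be summed per run
import sys

current, count = None, 0
for line in sys.stdin:
    word, n = line.rsplit('\t', 1)
    if word != current:
        if current is not None:
            print('%s\t%d' % (current, count))
        current, count = word, 0
    count += int(n)
if current is not None:
    print('%s\t%d' % (current, count))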
instructor-notes/1-hadoop-streaming-py-wordcount.ipynb
dsiufl/2015-Fall-Hadoop
mit
3. Exercise: WordCount2. Count the single-word frequencies, where the words of interest are given in a pattern file. For example, given a pattern.txt file containing: "a b c d". And the input file is: "d e a c f g h i a b c d". Then the output should be: "a 2 b 1 c 2 d 2" Please copy the mapper.py and reducer.py from the first wordcount example to the folder "/home/ubuntu/shortcourse/notes/scripts/wordcount2/". The pattern file is given in the wordcount2 folder with the name "wc2-pattern.txt" Hints: 1. pass the pattern file using the "-file" option and use -cmdenv to pass the file name as an environment variable 2. in the mapper, read the pattern file into a set 3. only print out the words that exist in the set (a sketch of such a mapper follows).
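Following the hints, a possible wordcount2 mapper (a sketch; the reducer from wordcount1 can stay unchanged). It reads the pattern file named by the PATTERN_FILE environment variable, which -file ships into the task's working directory:

#!/usr/bin/env python
# mapper.py for wordcount2 (sketch): only emit words present in the pattern file
import os
import sys

with open(os.environ['PATTERN_FILE']) as f:
    patterns = set(f.read().split())

for line in sys.stdin:
    for word in line.strip().split():
        if word in patterns:
            print('%s\t%s' % (word, 1))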
# exercise: count the words from the given pattern file across the three books
cmd = hadoop_root + 'bin/hadoop jar ' + hadoop_root + 'hadoop-streaming-2.7.1.jar ' + \
      '-cmdenv PATTERN_FILE=wc2-pattern.txt ' + \
      '-input input ' + \
      '-output output2 ' + \
      '-mapper /home/ubuntu/shortcourse/notes/scripts/wordcount2/mapper.py ' + \
      '-reducer /home/ubuntu/shortcourse/notes/scripts/wordcount2/reducer.py ' + \
      '-file /home/ubuntu/shortcourse/notes/scripts/wordcount2/mapper.py ' + \
      '-file /home/ubuntu/shortcourse/notes/scripts/wordcount2/reducer.py ' + \
      '-file /home/ubuntu/shortcourse/notes/scripts/wordcount2/wc2-pattern.txt'
! {cmd}
instructor-notes/1-hadoop-streaming-py-wordcount.ipynb
dsiufl/2015-Fall-Hadoop
mit
Verify Results. Copy the output file to local, run the following command, and compare with the downloaded output: sort -nrk 2,2 part-00000 | head -n 20 (wc1-part-00000 is the output of the previous wordcount, wordcount1).
! rm -rf /home/ubuntu/shortcourse/tmp/wc2-part-00000
! {hadoop_root + 'bin/hdfs dfs -copyToLocal output2/part-00000 /home/ubuntu/shortcourse/tmp/wc2-part-00000'}
! cat /home/ubuntu/shortcourse/tmp/wc2-part-00000 | sort -nrk2,2
! sort -nr -k2,2 /home/ubuntu/shortcourse/tmp/wc1-part-00000 | head -n 20

# stop yarn
!{hadoop_root + 'sbin/stop-yarn.sh'}
# don't stop hdfs for now, later use
# !{hadoop_stop_hdfs_cmd}
instructor-notes/1-hadoop-streaming-py-wordcount.ipynb
dsiufl/2015-Fall-Hadoop
mit
Setting some notebook-wide options. Let's start by setting some normalization options (discussed below) and always enabling colorbars for the elements we will be displaying:
iris.FUTURE.strict_grib_load = True
%opts Image {+framewise} [colorbar=True] Contours [colorbar=True] {+framewise} Curve [xrotation=60]
doc/Introductory_Tutorial.ipynb
ContinuumIO/cube-explorer
bsd-3-clause
Note that it is easy to set global defaults for a project, allowing any suitable setting to be made a default on a per-element basis. Now let's specify the maximum number of frames we will be displaying:
%output max_frames=1000
doc/Introductory_Tutorial.ipynb
ContinuumIO/cube-explorer
bsd-3-clause
<div class="alert alert-info" role="alert">When working on a live server append ``widgets='live'`` to the line above for greatly improved performance and memory usage </div> Loading our first cube Here is the summary of the first cube containing some surface temperature data:
iris_cube = iris.load_cube(iris.sample_data_path('GloSea4', 'ensemble_001.pp'))
iris_cube.coord('latitude').guess_bounds()
iris_cube.coord('longitude').guess_bounds()
print iris_cube.summary()
doc/Introductory_Tutorial.ipynb
ContinuumIO/cube-explorer
bsd-3-clause
Now we can wrap this Iris cube in a HoloCube:
surface_temperature = hc.HoloCube(iris_cube)
surface_temperature
doc/Introductory_Tutorial.ipynb
ContinuumIO/cube-explorer
bsd-3-clause
A Simple example. Here is a simple example of viewing the surface_temperature cube over time with a single line of code. In HoloViews, this data structure is a HoloMap of Image elements:
surface_temperature.to.image(['longitude', 'latitude'])
doc/Introductory_Tutorial.ipynb
ContinuumIO/cube-explorer
bsd-3-clause
You can drag the slider to view the surface temperature at different times. Here is how you can view the values of time in the cube via the HoloViews API:
surface_temperature.dimension_values('time')
doc/Introductory_Tutorial.ipynb
ContinuumIO/cube-explorer
bsd-3-clause
The times shown in the slider are long, making the text rather small. We can use the fact that all times are recorded in the year 2011 on the 16th of each month to shorten these dates. Defining how all dates should be formatted, as follows, will help with readability:
hv.Dimension.type_formatters[datetime.datetime] = "%m/%y %Hh"
doc/Introductory_Tutorial.ipynb
ContinuumIO/cube-explorer
bsd-3-clause
Now let us load a cube showing the pre-industrial air temperature:
air_temperature = hc.HoloCube(iris.load_cube(iris.sample_data_path('pre-industrial.pp')),
                              group='Pre-industrial air temperature')
air_temperature.data.coord('longitude').guess_bounds()
air_temperature.data.coord('latitude').guess_bounds()
air_temperature  # Use air_temperature.data.summary() to see the Iris summary (.data is the Iris cube)
doc/Introductory_Tutorial.ipynb
ContinuumIO/cube-explorer
bsd-3-clause
Note that we have the air_temperature over longitude and latitude, but not over time. As a result, this cube is a single frame when visualized as a temperature map.
(surface_temperature.to.image(['longitude', 'latitude']) +
 air_temperature.to.image(['longitude', 'latitude'])(plot=dict(projection=crs.PlateCarree())))
doc/Introductory_Tutorial.ipynb
ContinuumIO/cube-explorer
bsd-3-clause
Next is a fairly involved example that plots data side-by-side in a Layout without using the + operator. This shows how complex plots can be generated with little code and also demonstrates how different HoloViews elements can be combined together. In the following visualization, the curve is a sample of the surface_temperature at longitude and latitude (0,10):
%%opts Layout [fig_inches=(12,7)] Curve [aspect=2 xticks=4 xrotation=20] Points (color=2) Overlay [aspect='equal']
%%opts Image [projection=crs.PlateCarree()]

# Sample the surface_temperature at (0, 10)
temp_curve = surface_temperature.to.curve('time', dynamic=True)[0, 10]

# Show surface_temperature and air_temperature with the point (0, 10) marked
temp_maps = [cb.to.image(['longitude', 'latitude']) * hc.Points([(0, 10)])
             for cb in [surface_temperature, air_temperature]]

# Show everything in a two-column layout
hv.Layout(temp_maps + [temp_curve]).cols(2).display('all')
doc/Introductory_Tutorial.ipynb
ContinuumIO/cube-explorer
bsd-3-clause
Overlaying data and normalization. Let's view the surface temperatures together with the global coastline:
cf.COASTLINE.scale = '1000m'

%%opts Image [projection=crs.Geostationary()] (cmap='Greens')
surface_temperature.to.image(['longitude', 'latitude']) * hc.GeoFeature(cf.COASTLINE)
doc/Introductory_Tutorial.ipynb
ContinuumIO/cube-explorer
bsd-3-clause
Notice that every frame uses the full dynamic range of the Greens color map. This is because normalization is set to +framewise at the top of the notebook, which means every frame is normalized independently. To control normalization, we need to decide on the normalization limits. Let's see the maximum temperature in the cube:
max_surface_temp = surface_temperature.data.data.max()
max_surface_temp

%%opts Image [projection=crs.Geostationary()] (cmap='Greens')
# Declare a surface-temperature dimension with an explicit normalization range
surface_temp_dim = hv.Dimension('surface_temperature', range=(300, max_surface_temp))
# Use it to declare the value dimension of a HoloCube
(hc.HoloCube(surface_temperature, vdims=[surface_temp_dim]).to.image(['longitude', 'latitude']) *
 hc.GeoFeature(cf.COASTLINE))
doc/Introductory_Tutorial.ipynb
ContinuumIO/cube-explorer
bsd-3-clause
By specifying the normalization range we can reveal different aspects of the data. In the example above we can see a warming effect over time as the dark green areas close to the bottom of the normalization range (the 300K lower bound declared above) vanish. Values outside this range are clipped to the ends of the color map. Lastly, here is a demo of a conversion from surface_temperature to Contours:
%%opts Contours [levels=10]
(surface_temperature.to.contours(['longitude', 'latitude']) * hc.GeoFeature(cf.COASTLINE))
doc/Introductory_Tutorial.ipynb
ContinuumIO/cube-explorer
bsd-3-clause
That behavior makes a lot of sense: the highest Q happens when an action's 'favorite state' (i.e. when the transform is equal to state) is in s.

It's annoying to have all those separate $Q$ neurons:

* Perfect opportunity to use the EnsembleArray again (see last lecture)
* Doesn't change the model at all
* It just groups things together for you
%pylab inline
import nengo

model = nengo.Network('Selection')
with model:
    stim = nengo.Node(lambda t: [np.sin(t), np.cos(t)])
    s = nengo.Ensemble(200, dimensions=2)
    Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
    nengo.Connection(s, Qs.input, transform=[[1, 0], [-1, 0], [0, 1], [0, -1]])
    nengo.Connection(stim, s)

    model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
    qs_p = nengo.Probe(Qs.output)
    s_p = nengo.Probe(s)

sim = nengo.Simulator(model)
sim.run(3.)

t = sim.trange()
plot(t, sim.data[s_p], label="state")
legend()
figure(figsize=(8, 8))
plot(t, sim.data[qs_p], label='Qs')
legend(loc='best');
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
Yay, Network Arrays make for shorter code!

Back to the model: how do we implement the $max$ function?

* Well, it's just a function, so let's implement it
* Need to combine all the $Q$ values into one 4-dimensional ensemble
* Why?
import nengo

def maximum(x):
    result = [0, 0, 0, 0]
    result[np.argmax(x)] = 1
    return result

model = nengo.Network('Selection')
with model:
    stim = nengo.Node(lambda t: [np.sin(t), np.cos(t)])
    s = nengo.Ensemble(200, dimensions=2)
    Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
    Qall = nengo.Ensemble(400, dimensions=4)
    Action = nengo.Ensemble(200, dimensions=4)
    nengo.Connection(s, Qs.input, transform=[[1, 0], [-1, 0], [0, 1], [0, -1]])
    nengo.Connection(Qs.output, Qall)
    nengo.Connection(Qall, Action, function=maximum)
    nengo.Connection(stim, s)

    model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
    qs_p = nengo.Probe(Qs.output)
    action_p = nengo.Probe(Action)
    s_p = nengo.Probe(s)

sim = nengo.Simulator(model)
sim.run(3.)

t = sim.trange()
plot(t, sim.data[s_p], label="state")
legend()
figure()
plot(t, sim.data[qs_p], label='Qs')
legend(loc='best')
figure()
plot(t, sim.data[action_p], label='Action')
legend(loc='best');
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
Not so great (it looks pretty much the same as the linear case):

* Very nonlinear function, so neurons are not able to approximate it well
* Other options?

The Standard Neural Network Approach (modified)

If you give this problem to a standard neural-networks person, what would they do? They'll say this is exactly what neural networks are great at, and implement it with mutual inhibition and self-excitation: neural competition.

* 4 "neurons"
* have excitation from each neuron back to themselves
* have inhibition from each neuron to all the others

Now just put in the input, wait for a while, and it will stabilize to one option.

Can we do that? Sure! Just replace each "neuron" with a group of neurons, and compute the desired function on those connections. Note that this is a very general method of converting any non-realistic neural model into a biologically realistic spiking neuron model (though often you can do a one-for-one neuron conversion as well).
import nengo

model = nengo.Network('Selection')
with model:
    stim = nengo.Node(lambda t: [.5, .4] if t < 1. else [0, 0])
    s = nengo.Ensemble(200, dimensions=2)
    Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
    nengo.Connection(s, Qs.input, transform=[[1, 0], [-1, 0], [0, 1], [0, -1]])

    e = 0.1   # self-excitation
    i = -1    # mutual inhibition
    recur = [[e, i, i, i],
             [i, e, i, i],
             [i, i, e, i],
             [i, i, i, e]]
    nengo.Connection(Qs.output, Qs.input, transform=recur)
    nengo.Connection(stim, s)

    model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
    qs_p = nengo.Probe(Qs.output)
    s_p = nengo.Probe(s)

sim = nengo.Simulator(model)
sim.run(1.)

t = sim.trange()
plot(t, sim.data[s_p], label="state")
legend()
figure()
plot(t, sim.data[qs_p], label='Qs')
legend(loc='best');
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
Oops, that's not quite right. Why is it selecting more than one action?
import nengo

model = nengo.Network('Selection')
with model:
    stim = nengo.Node(lambda t: [.5, .4] if t < 1. else [0, 0])
    s = nengo.Ensemble(200, dimensions=2)
    Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
    Action = nengo.networks.EnsembleArray(50, n_ensembles=4)
    nengo.Connection(s, Qs.input, transform=[[1, 0], [-1, 0], [0, 1], [0, -1]])
    nengo.Connection(Qs.output, Action.input)

    e = 0.1
    i = -1
    recur = [[e, i, i, i],
             [i, e, i, i],
             [i, i, e, i],
             [i, i, i, e]]

    # Let's force the feedback connection to only consider positive values
    def positive(x):
        if x[0] < 0:
            return [0]
        else:
            return x
    pos = Action.add_output('positive', positive)
    nengo.Connection(pos, Action.input, transform=recur)
    nengo.Connection(stim, s)

    model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
    qs_p = nengo.Probe(Qs.output)
    action_p = nengo.Probe(Action.output)
    s_p = nengo.Probe(s)

sim = nengo.Simulator(model)
sim.run(1.)

t = sim.trange()
plot(t, sim.data[s_p], label="state")
legend(loc='best')
figure()
plot(t, sim.data[qs_p], label='Qs')
legend(loc='best')
figure()
plot(t, sim.data[action_p], label='Action')
legend(loc='best');
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
Now we only influence other Actions when we have a positive value. (Note: is there a more neurally efficient way to do this?)

Much better:

* Selects one action reliably
* But still often gives values smaller than 1.0 for the output

Can we fix that? What if we adjust e?
%pylab inline
import nengo

def stimulus(t):
    if t < .3:
        return [.5, .4]
    elif .3 < t < .5:
        return [.4, .5]
    else:
        return [0, 0]

model = nengo.Network('Selection')
with model:
    stim = nengo.Node(stimulus)
    s = nengo.Ensemble(200, dimensions=2)
    Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
    Action = nengo.networks.EnsembleArray(50, n_ensembles=4)
    nengo.Connection(s, Qs.input, transform=[[1, 0], [-1, 0], [0, 1], [0, -1]])
    nengo.Connection(Qs.output, Action.input)

    e = .5
    i = -1
    recur = [[e, i, i, i],
             [i, e, i, i],
             [i, i, e, i],
             [i, i, i, e]]

    # Let's force the feedback connection to only consider positive values
    def positive(x):
        if x[0] < 0:
            return [0]
        else:
            return x
    pos = Action.add_output('positive', positive)
    nengo.Connection(pos, Action.input, transform=recur)
    nengo.Connection(stim, s)

    model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
    qs_p = nengo.Probe(Qs.output)
    action_p = nengo.Probe(Action.output)
    s_p = nengo.Probe(s)

from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/action_selection.py.cfg")

sim = nengo.Simulator(model)
sim.run(1.)

t = sim.trange()
plot(t, sim.data[s_p], label="state")
legend(loc='best')
figure()
plot(t, sim.data[qs_p], label='Qs')
legend(loc='best')
figure()
plot(t, sim.data[action_p], label='Action')
legend(loc='best');
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
That seems to introduce a new problem:

* The self-excitation is so strong that it can't respond to changes in the input
* Indeed, any method like this is going to have some form of memory effects
* Notice that what has been implemented is (sort of) an integrator

Could we do anything to help without increasing e too much?
%pylab inline
import nengo

def stimulus(t):
    if t < .3:
        return [.5, .4]
    elif .3 < t < .5:
        return [.3, .5]
    else:
        return [0, 0]

model = nengo.Network('Selection')
with model:
    stim = nengo.Node(stimulus)
    s = nengo.Ensemble(200, dimensions=2)
    Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
    Action = nengo.networks.EnsembleArray(50, n_ensembles=4)
    nengo.Connection(s, Qs.input, transform=[[1, 0], [-1, 0], [0, 1], [0, -1]])
    nengo.Connection(Qs.output, Action.input)

    e = 0.2
    i = -1
    recur = [[e, i, i, i],
             [i, e, i, i],
             [i, i, e, i],
             [i, i, i, e]]

    def positive(x):
        if x[0] < 0:
            return [0]
        else:
            return x
    pos = Action.add_output('positive', positive)
    nengo.Connection(pos, Action.input, transform=recur)

    def select(x):
        if x[0] >= 0:
            return [1]
        else:
            return [0]
    sel = Action.add_output('select', select)
    aValues = nengo.networks.EnsembleArray(50, n_ensembles=4)
    nengo.Connection(sel, aValues.input)
    nengo.Connection(stim, s)

    model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
    qs_p = nengo.Probe(Qs.output)
    action_p = nengo.Probe(Action.output)
    aValues_p = nengo.Probe(aValues.output)
    s_p = nengo.Probe(s)

from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/action_selection2.py.cfg")

sim = nengo.Simulator(model)
sim.run(1.)

t = sim.trange()
plot(t, sim.data[s_p], label="state")
legend(loc='best')
figure()
plot(t, sim.data[qs_p], label='Qs')
legend(loc='best')
figure()
plot(t, sim.data[action_p], label='Action')
legend(loc='best')
figure()
plot(t, sim.data[aValues_p], label='Action Values')
legend(loc='best');
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
Better behaviour. But there are still situations where there's too much memory (see the visualizer). We can reduce this by reducing e.
%pylab inline
import nengo

def stimulus(t):
    if t < .3:
        return [.5, .4]
    elif .3 < t < .5:
        return [.3, .5]
    else:
        return [0, 0]

model = nengo.Network('Selection')
with model:
    stim = nengo.Node(stimulus)
    s = nengo.Ensemble(200, dimensions=2)
    Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
    Action = nengo.networks.EnsembleArray(50, n_ensembles=4)
    nengo.Connection(s, Qs.input, transform=[[1, 0], [-1, 0], [0, 1], [0, -1]])
    nengo.Connection(Qs.output, Action.input)

    e = 0.1
    i = -1
    recur = [[e, i, i, i],
             [i, e, i, i],
             [i, i, e, i],
             [i, i, i, e]]

    def positive(x):
        if x[0] < 0:
            return [0]
        else:
            return x
    pos = Action.add_output('positive', positive)
    nengo.Connection(pos, Action.input, transform=recur)

    def select(x):
        if x[0] >= 0:
            return [1]
        else:
            return [0]
    sel = Action.add_output('select', select)
    aValues = nengo.networks.EnsembleArray(50, n_ensembles=4)
    nengo.Connection(sel, aValues.input)
    nengo.Connection(stim, s)

    model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
    qs_p = nengo.Probe(Qs.output)
    action_p = nengo.Probe(Action.output)
    aValues_p = nengo.Probe(aValues.output)
    s_p = nengo.Probe(s)

#sim = nengo.Simulator(model)
#sim.run(1.)

from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/bg_simple1.py.cfg")
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
Much less memory, but it's still there:

* And slower to respond to changes
* Note that this speed depends on $e$, $i$, and the time constant of the neurotransmitter used
* Can be hard to find good values
* And this gets harder to balance as the number of actions increases
* Also hard to balance for a wide range of $Q$ values (does it work for $Q$=[0.9, 0.9, 0.95, 0.9] and $Q$=[0.2, 0.2, 0.25, 0.2]?)

But this is still a pretty standard approach:

* Nice and easy to get working for special cases
* Don't really need the NEF (if you're willing to assume non-realistic non-spiking neurons)
* (Although really, if you're not looking for biological realism, why not just compute the max function?)
* Example: O'Reilly, R.C. (2006). Biologically Based Computational Models of High-Level Cognition. Science, 314, 91-94.
* Leabra
  * They tend to use a "kWTA" (k-Winners Take All) approach in their models
  * Set up inhibition so that only $k$ neurons will be active
  * But since that's complex to do, just do the math instead of doing the inhibition (see the sketch at the end of this section)
  * We think that doing it their way means that the dynamics of the model will be wrong (i.e. all the effects we saw above are being ignored)

Any other options?

Biology

Let's look at the biology: where is this action selection in the brain? General consensus: the basal ganglia

<img src="files/lecture_selection/basal_ganglia.jpg" width="500">

* Pretty much all of cortex connects in to this area (via the striatum)
* Output goes to the thalamus, the central routing system of the brain
* Disorders of this area of the brain cause problems controlling actions:
  * Parkinson's disease
    * Neurons in the substantia nigra die off
    * Extremely difficult to trigger actions to start
    * Usually physical actions; as the disease progresses and more of the SNc is gone, cognitive effects appear too
  * Huntington's disease
    * Neurons in the striatum die off
    * Actions are triggered inappropriately (disinhibition)
    * Small uncontrollable movements
    * Trouble sequencing cognitive actions too
* Also heavily implicated in reinforcement learning
  * Dopamine levels seem to map onto reward prediction error
  * High levels when an unexpected reward arrives, low levels when an expected reward doesn't

<img src="files/lecture_selection/dopamine.png" width="500">

Connectivity diagram:

<img src="files/lecture_selection/basal_ganglia2.gif" width="500">

Old terminology:

* "direct" pathway: cortex -> striatum -> GPi -> thalamus
* "indirect" pathway: cortex -> striatum -> GPe -> STN -> GPi -> thalamus

Then they found:

* "hyperdirect" pathway: cortex -> STN -> GPi -> thalamus
* and lots of other connections

Activity in the GPi (output):

* generally always active
* neurons stop firing when the corresponding action is chosen
* representing [1, 1, 0, 1] instead of [0, 0, 1, 0]

Leabra approach:

* Each action has two groups of neurons in the striatum representing $Q(s, a_i)$ and $1-Q(s, a_i)$ ("go" and "no go")
* Mutual inhibition causes only one of the "go" and one of the "no go" groups to fire
* GPi neurons get connections from "go" neurons, with the value multiplied by -1 (direct pathway)
* GPi also gets connections from "no go" neurons, but multiplied by -1 (striatum -> GPe), then -1 again (GPe -> STN), then +1 (STN -> GPi)
* The result in the GPi is close to the [1, 1, 0, 1] form
* Seems to match onto the biology okay

But why the weird double-inverting thing? Why not skip the GPe and STN entirely?

* And why split into "go" and "no-go"? Just the direct pathway on its own would be fine
* Maybe it's useful for some aspect of the learning...
* What about all those other connections?
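The kWTA shortcut mentioned above ("just do the math instead of doing the inhibition") can be written directly; a minimal non-spiking NumPy sketch (the input vector and k are illustrative):

import numpy as np

def kwta(x, k):
    out = np.zeros_like(x)
    idx = np.argsort(x)[-k:]  # indices of the k largest activities
    out[idx] = x[idx]         # keep the k winners, silence the rest
    return out

kwta(np.array([0.2, 0.9, 0.4, 0.7]), k=2)  # -> [0. , 0.9, 0. , 0.7]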
An alternate model of the Basal Ganglia

Maybe the weird structure of the basal ganglia is an attempt to do action selection without doing mutual inhibition:

* Needs to select from a large number of actions
* Needs to do so quickly, and without the memory effects

Gurney, Prescott, and Redgrave, 2001. Let's start with a very simple version:

<img src="files/lecture_selection/gpr1.png">

Sort of like an "unrolled" version of one step of mutual inhibition. Note that both A and B have surround inhibition and local excitation that is 'flipped' (in slightly different ways) on the way to the output. Unfortunately this doesn't easily map onto the basal ganglia because of the diffuse inhibition needed from cortex to what might be the striatum (the first layer). Instead, we can get similar functionality using something like the following. Notice the importance of the hyperdirect pathway (from cortex to STN).

<img src="files/lecture_selection/gpr2.png">

But that's only going to work for very specific $Q$ values (here, the winning option is the sum of the losing ones). We need to dynamically adjust the amount of positive and negative weighting. Here the GPe adjusts the weighting by monitoring STN & D2 activity. Notice that the GPe gets the same inputs as GPi, but projects back to STN, to 'regulate' the action selection.

<img src="files/lecture_selection/gpr3.png">

This turns out to work surprisingly well:

* But it is extremely hard to analyze its behaviour
* They showed that it qualitatively matches pretty well

So what happens if we convert this into realistic spiking neurons?

* Use the same approach where one "neuron" in their model is a pool of neurons in the NEF
* The "neuron model" they use was rectified linear
* That becomes the function the decoders are computing
* Neurotransmitter time constants are all known
* $Q$ values are between 0 and 1
* Firing rates max out around 50-100Hz
* Encoders are all positive and thresholds are chosen for efficiency
%pylab inline
import nengo
from nengo.dists import Uniform

# gains (m) of the rectified-linear "neurons"
mm = 1
mp = 1
me = 1
mg = 1

# connection strengths from the original model
ws = 1
wt = 1
wm = 1
wg = 1
wp = 0.9
we = 0.3

# neuron lower thresholds for the various populations
e = 0.2
ep = -0.25
ee = -0.2
eg = -0.2
le = 0.2
lg = 0.2

D = 10
tau_ampa = 0.002
tau_gaba = 0.008
N = 50
radius = 1.5

model = nengo.Network('Basal Ganglia', seed=4)
with model:
    stim = nengo.Node([0]*D)

    StrD1 = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(e, 1),
                                         encoders=Uniform(1, 1), radius=radius)
    StrD2 = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(e, 1),
                                         encoders=Uniform(1, 1), radius=radius)
    STN = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(ep, 1),
                                       encoders=Uniform(1, 1), radius=radius)
    GPi = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(eg, 1),
                                       encoders=Uniform(1, 1), radius=radius)
    GPe = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(ee, 1),
                                       encoders=Uniform(1, 1), radius=radius)

    nengo.Connection(stim, StrD1.input, transform=ws*(1+lg), synapse=tau_ampa)
    nengo.Connection(stim, StrD2.input, transform=ws*(1-le), synapse=tau_ampa)
    nengo.Connection(stim, STN.input, transform=wt, synapse=tau_ampa)

    def func_str(x):  # rectified-linear (relu-like) response
        if x[0] < e:
            return 0
        return mm * (x[0] - e)
    strd1_out = StrD1.add_output('func_str', func_str)
    strd2_out = StrD2.add_output('func_str', func_str)
    nengo.Connection(strd1_out, GPi.input, transform=-wm, synapse=tau_gaba)
    nengo.Connection(strd2_out, GPe.input, transform=-wm, synapse=tau_gaba)

    def func_stn(x):
        if x[0] < ep:
            return 0
        return mp * (x[0] - ep)
    stn_out = STN.add_output('func_stn', func_stn)
    tr = [[wp]*D for i in range(D)]
    nengo.Connection(stn_out, GPi.input, transform=tr, synapse=tau_ampa)
    nengo.Connection(stn_out, GPe.input, transform=tr, synapse=tau_ampa)

    def func_gpe(x):
        if x[0] < ee:
            return 0
        return me * (x[0] - ee)
    gpe_out = GPe.add_output('func_gpe', func_gpe)
    nengo.Connection(gpe_out, GPi.input, transform=-we, synapse=tau_gaba)
    nengo.Connection(gpe_out, STN.input, transform=-wg, synapse=tau_gaba)

    Action = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(0.2, 1),
                                          encoders=Uniform(1, 1))
    bias = nengo.Node([1]*D)
    nengo.Connection(bias, Action.input)
    nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=tau_gaba)

    def func_gpi(x):
        if x[0] < eg:
            return 0
        return mg * (x[0] - eg)
    gpi_out = GPi.add_output('func_gpi', func_gpi)
    nengo.Connection(gpi_out, Action.input, transform=-3, synapse=tau_gaba)

from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/bg_good2.py.cfg")
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
Notice that we are also flipping the output from [1, 1, 0, 1] to [0, 0, 1, 0]. Mostly for our convenience, but we can also add some mutual inhibition there.

This works pretty well:

* Scales up to many actions
* Selects quickly
* Gets a behavioural match to empirical data, including timing predictions (!)
* Also shows interesting oscillations not seen in the original GPR model
* But these are seen in the real basal ganglia

<img src="files/lecture_selection/gpr-latency.png">

Dynamic Behaviour of a Spiking Model of Action Selection in the Basal Ganglia

Let's make sure this works with our original system. To make it easy to use the basal ganglia, there is a special network constructor. Since this is a major component of the SPA, it's also in that module.
%pylab inline
import nengo
from nengo.dists import Uniform

model = nengo.Network(label='Selection')
D = 4
with model:
    stim = nengo.Node([0, 0])
    s = nengo.Ensemble(200, dimensions=2)
    Qs = nengo.networks.EnsembleArray(50, n_ensembles=D)
    nengo.Connection(stim, s)
    nengo.Connection(s, Qs.input, transform=[[1, 0], [-1, 0], [0, 1], [0, -1]])

    Action = nengo.networks.EnsembleArray(50, n_ensembles=D,
                                          intercepts=Uniform(0.2, 1),
                                          encoders=Uniform(1, 1))
    bias = nengo.Node([1]*D)
    nengo.Connection(bias, Action.input)
    nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=0.008)

    basal_ganglia = nengo.networks.BasalGanglia(dimensions=D)
    nengo.Connection(Qs.output, basal_ganglia.input, synapse=None)
    nengo.Connection(basal_ganglia.output, Action.input)

from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/bg_good1.py.cfg")
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
This system seems to work well:

* Still not perfect
* Matches biology nicely, because of how we implemented it

Some more details on the basal ganglia implementation: all those parameters come from here

<img src="files/lecture_selection/gpr-diagram.png" width="500">

In the original model, each action has a single "neuron" in each area that responds like this:

$$ y = \begin{cases} 0 &\mbox{if } x < \epsilon \\ m(x - \epsilon) &\mbox{otherwise} \end{cases} $$

These need to get turned into groups of neurons. What is the best way to do this?

<img src="files/lecture_selection/gpr-tuning.png">

* encoders are all +1
* intercepts are chosen to be $> \epsilon$

Action Execution

Now that we can select an action, how do we perform it? That depends on what the action is. Let's start with simple actions:

* Move in a given direction
* Remember a specific vector
* Send a particular value as input into a particular cognitive system

Example:

* State $s$ is 2-dimensional
* Four actions (A, B, C, D)
* Do action A if $s$ is near [1,0], B if near [-1,0], C if near [0,1], D if near [0,-1]
* $Q(s, a_A)=s \cdot [1,0]$
* $Q(s, a_B)=s \cdot [-1,0]$
* $Q(s, a_C)=s \cdot [0,1]$
* $Q(s, a_D)=s \cdot [0,-1]$
* To do Action A, set $m=[1,0]$
* To do Action B, set $m=[-1,0]$
* To do Action C, set $m=[0,1]$
* To do Action D, set $m=[0,-1]$
%pylab inline import nengo from nengo.dists import Uniform model = nengo.Network(label='Selection') D=4 with model: stim = nengo.Node([0,0]) s = nengo.Ensemble(200, dimensions=2) Qs = nengo.networks.EnsembleArray(50, n_ensembles=4) nengo.Connection(stim, s) nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]]) Action = nengo.networks.EnsembleArray(50, n_ensembles=D, intercepts=Uniform(0.2,1), encoders=Uniform(1,1)) bias = nengo.Node([1]*D) nengo.Connection(bias, Action.input) nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=0.008) basal_ganglia = nengo.networks.BasalGanglia(dimensions=D) nengo.Connection(Qs.output, basal_ganglia.input, synapse=None) nengo.Connection(basal_ganglia.output, Action.input) motor = nengo.Ensemble(100, dimensions=2) nengo.Connection(Action.output[0], motor, transform=[[1],[0]]) nengo.Connection(Action.output[1], motor, transform=[[-1],[0]]) nengo.Connection(Action.output[2], motor, transform=[[0],[1]]) nengo.Connection(Action.output[3], motor, transform=[[0],[-1]]) from nengo_gui.ipython import IPythonViz IPythonViz(model, "configs/bg_good3.py.cfg")
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
What about more complex actions?

- Consider a simple creature that goes where it's told, or runs away if it's scared
  - Action 1: set $m$ to the direction it's told to go
  - Action 2: set $m$ to the direction we started from
- Need to pass information from one group of neurons to another
  - But only do this when the action is chosen
- How? Well, let's use a function
  - $m = a \times d$ where $a$ is the action selection (0 for not selected, 1 for selected)

Let's try that with the creature
%pylab inline import nengo from nengo.dists import Uniform model = nengo.Network('Creature') with model: stim = nengo.Node([0,0], label='stim') command = nengo.Ensemble(100, dimensions=2, label='command') motor = nengo.Ensemble(100, dimensions=2, label='motor') position = nengo.Ensemble(1000, dimensions=2, label='position') scared_direction = nengo.Ensemble(100, dimensions=2, label='scared direction') def negative(x): return -x[0], -x[1] nengo.Connection(position, scared_direction, function=negative) nengo.Connection(position, position, synapse=.05) def rescale(x): return x[0]*0.1, x[1]*0.1 nengo.Connection(motor, position, function=rescale) nengo.Connection(stim, command) D=4 Q_input = nengo.Node([0,0,0,0], label='select') Qs = nengo.networks.EnsembleArray(50, n_ensembles=4) nengo.Connection(Q_input, Qs.input) Action = nengo.networks.EnsembleArray(50, n_ensembles=D, intercepts=Uniform(0.2,1), encoders=Uniform(1,1)) bias = nengo.Node([1]*D) nengo.Connection(bias, Action.input) nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=0.008) basal_ganglia = nengo.networks.BasalGanglia(dimensions=D) nengo.Connection(Qs.output, basal_ganglia.input, synapse=None) nengo.Connection(basal_ganglia.output, Action.input) do_command = nengo.Ensemble(300, dimensions=3, label='do command') nengo.Connection(command, do_command[0:2]) nengo.Connection(Action.output[0], do_command[2]) def apply_command(x): return x[2]*x[0], x[2]*x[1] nengo.Connection(do_command, motor, function=apply_command) do_scared = nengo.Ensemble(300, dimensions=3, label='do scared') nengo.Connection(scared_direction, do_scared[0:2]) nengo.Connection(Action.output[1], do_scared[2]) nengo.Connection(do_scared, motor, function=apply_command) from nengo_gui.ipython import IPythonViz IPythonViz(model, "configs/bg_creature.py.cfg") #first dimensions activates do_command, i.e. go in the indicated direciton #second dimension activates do_scared, i.e. return 'home' (0,0) #creature tracks the position it goes to (by integrating) #creature inverts direction to position via scared direction/do_scared and puts that into motor
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
There's also another way to do this

- A special case for forcing a function to go to zero when a particular group of neurons is active
%pylab inline import nengo from nengo.dists import Uniform model = nengo.Network('Creature') with model: stim = nengo.Node([0,0], label='stim') command = nengo.Ensemble(100, dimensions=2, label='command') motor = nengo.Ensemble(100, dimensions=2, label='motor') position = nengo.Ensemble(1000, dimensions=2, label='position') scared_direction = nengo.Ensemble(100, dimensions=2, label='scared direction') def negative(x): return -x[0], -x[1] nengo.Connection(position, scared_direction, function=negative) nengo.Connection(position, position, synapse=.05) def rescale(x): return x[0]*0.1, x[1]*0.1 nengo.Connection(motor, position, function=rescale) nengo.Connection(stim, command) D=4 Q_input = nengo.Node([0,0,0,0], label='select') Qs = nengo.networks.EnsembleArray(50, n_ensembles=4) nengo.Connection(Q_input, Qs.input) Action = nengo.networks.EnsembleArray(50, n_ensembles=D, intercepts=Uniform(0.2,1), encoders=Uniform(1,1)) bias = nengo.Node([1]*D) nengo.Connection(bias, Action.input) nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=0.008) basal_ganglia = nengo.networks.BasalGanglia(dimensions=D) nengo.Connection(Qs.output, basal_ganglia.input, synapse=None) nengo.Connection(basal_ganglia.output, Action.input) do_command = nengo.Ensemble(300, dimensions=2, label='do command') nengo.Connection(command, do_command) nengo.Connection(Action.output[1], do_command.neurons, transform=-np.ones([300,1])) nengo.Connection(do_command, motor) do_scared = nengo.Ensemble(300, dimensions=2, label='do scared') nengo.Connection(scared_direction, do_scared) nengo.Connection(Action.output[0], do_scared.neurons, transform=-np.ones([300,1])) nengo.Connection(do_scared, motor) from nengo_gui.ipython import IPythonViz IPythonViz(model, "configs/bg_creature2.py.cfg")
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
This is a situation where it makes sense to ignore the NEF!

- All we want to do is shut down the neural activity
- So just do a very inhibitory connection

The Cortex-Basal Ganglia-Thalamus loop

We now have everything we need for a model of one of the primary structures in the mammalian brain

- Basal ganglia: action selection
- Thalamus: action execution
- Cortex: everything else

<img src="lecture_selection/ctx-bg-thal.png" width="800">

- We build systems in cortex that give some input-output functionality
- We set up the basal ganglia and thalamus to make use of that functionality appropriately

Example

- Cortex stores some state (integrator)
- Add some state transition rules
  - If in state A, go to state B
  - If in state B, go to state C
  - If in state C, go to state D
  - ...
- For now, let's just have states A, B, C, D, etc. be some randomly chosen vectors
- $Q(s, a_i) = s \cdot a_i$
- The effect of each action is to input the corresponding vector into the integrator

This is the basic loop of the SPA, so we can use that module
%pylab inline import nengo from nengo import spa D = 16 def start(t): if t < 0.05: return 'A' else: return '0' model = spa.SPA(label='Sequence_Module', seed=5) with model: model.cortex = spa.Buffer(dimensions=D, label='cortex') model.input = spa.Input(cortex=start, label='input') actions = spa.Actions( 'dot(cortex, A) --> cortex = B', 'dot(cortex, B) --> cortex = C', 'dot(cortex, C) --> cortex = D', 'dot(cortex, D) --> cortex = E', 'dot(cortex, E) --> cortex = A' ) model.bg = spa.BasalGanglia(actions=actions) model.thal = spa.Thalamus(model.bg) cortex = nengo.Probe(model.cortex.state.output, synapse=0.01) actions = nengo.Probe(model.thal.actions.output, synapse=0.01) utility = nengo.Probe(model.bg.input, synapse=0.01) sim = nengo.Simulator(model) sim.run(0.5) from nengo_gui.ipython import IPythonViz IPythonViz(model, "configs/bg_alphabet.py.cfg") fig = figure(figsize=(12,8)) p1 = fig.add_subplot(3,1,1) p1.plot(sim.trange(), model.similarity(sim.data, cortex)) p1.legend(model.get_output_vocab('cortex').keys, fontsize='x-small') p1.set_ylabel('State') p2 = fig.add_subplot(3,1,2) p2.plot(sim.trange(), sim.data[actions]) p2_legend_txt = [a.effect for a in model.bg.actions.actions] p2.legend(p2_legend_txt, fontsize='x-small') p2.set_ylabel('Action') p3 = fig.add_subplot(3,1,3) p3.plot(sim.trange(), sim.data[utility]) p3_legend_txt = [a.condition for a in model.bg.actions.actions] p3.legend(p3_legend_txt, fontsize='x-small') p3.set_ylabel('Utility') fig.subplots_adjust(hspace=0.2)
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
Behavioural Evidence

- Is there any evidence that this is the way it works in brains?
- Consistent with anatomy/connectivity
- What about behavioural evidence?

A few sources of support

- Timing data
  - How long does it take to do an action?
  - There are lots of existing computational (non-neural) cognitive models that have something like this action selection loop
  - Usually all-symbolic
    - A set of IF-THEN rules
    - e.g. ACT-R
  - Used to model mental arithmetic, driving a car, using a GUI, air-traffic control, staffing a battleship, etc.
  - Best fit across all these situations is to set the loop time to 50ms
- How long does this model take?
  - Notice that all the timing is based on neural properties, not the algorithm
  - Dominated by the longer neurotransmitter time constants in the basal ganglia

<img src="files/lecture_selection/timing-simple.png">
<center>Simple actions</center>

<img src="files/lecture_selection/timing-complex.png">
<center>Complex actions (routing)</center>

- This is in the right ballpark
- But what about this distinction between the two types of actions?
  - Not a distinction made in the literature
  - But once we start looking for it, there is evidence
  - Resolves an outstanding weirdness where some actions seem to take twice as long as others
  - Starting to be lots of citations for 40ms for simple tasks
  - Task artifacts and strategic adaptation in the change signal task
- This is a nice example of the usefulness of making neural models!
  - This distinction wasn't obvious from computational implementations

More complex tasks

- Lots of complex tasks can be modelled this way
- Some basic cognitive components (cortex)
- Action selection system (basal ganglia and thalamus)
- The tricky part is figuring out the actions

Example: the Tower of Hanoi task

- 3 pegs
- N disks of different sizes on the pegs
- move from one configuration to another
- can only move one disk at a time
- no larger disk can be on a smaller disk

<img src="files/lecture_selection/hanoi.png">

Can we build rules to do this? (A plain-Python sketch of the task rules follows the video below.)
from IPython.display import YouTubeVideo YouTubeVideo('sUvHCs5y0o8', width=640, height=390, loop=1, autoplay=0)
SYDE 556 Lecture 9 Action Selection.ipynb
celiasmith/syde556
gpl-2.0
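As referenced above, here is a plain-Python sketch of the Tower of Hanoi rules. It is the classic recursive solution, included only as a reference for what the action-selection rules must accomplish; it is not the neural/SPA model shown in the video, and the function and peg names are illustrative.

def hanoi(n, source, target, spare, moves=None):
    # Move n disks from source to target, using spare as the intermediate peg.
    # Only one disk moves at a time, and a larger disk never lands on a smaller one.
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
    else:
        hanoi(n - 1, source, spare, target, moves)
        moves.append((source, target))
        hanoi(n - 1, spare, target, source, moves)
    return moves

for move in hanoi(3, 'A', 'C', 'B'):
    print("move top disk from %s to %s" % move)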
Read in the surveys
all_survey = pd.read_csv("../data/schools/survey_all.txt", delimiter="\t", encoding='windows-1252') d75_survey = pd.read_csv("../data/schools/survey_d75.txt", delimiter="\t", encoding='windows-1252') survey = pd.concat([all_survey, d75_survey], axis=0) survey["DBN"] = survey["dbn"] survey_fields = [ "DBN", "rr_s", "rr_t", "rr_p", "N_s", "N_t", "N_p", "saf_p_11", "com_p_11", "eng_p_11", "aca_p_11", "saf_t_11", "com_t_11", "eng_t_10", "aca_t_11", "saf_s_11", "com_s_11", "eng_s_11", "aca_s_11", "saf_tot_11", "com_tot_11", "eng_tot_11", "aca_tot_11", ] survey = survey.loc[:,survey_fields] data["survey"] = survey
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
Convert columns to numeric
cols = ['SAT Math Avg. Score', 'SAT Critical Reading Avg. Score', 'SAT Writing Avg. Score'] for c in cols: data["sat_results"][c] = pd.to_numeric(data["sat_results"][c], errors="coerce") data['sat_results']['sat_score'] = data['sat_results'][cols[0]] + data['sat_results'][cols[1]] + data['sat_results'][cols[2]] def find_lat(loc): coords = re.findall("\(.+, .+\)", loc) lat = coords[0].split(",")[0].replace("(", "") return lat def find_lon(loc): coords = re.findall("\(.+, .+\)", loc) lon = coords[0].split(",")[1].replace(")", "").strip() return lon data["hs_directory"]["lat"] = data["hs_directory"]["Location 1"].apply(find_lat) data["hs_directory"]["lon"] = data["hs_directory"]["Location 1"].apply(find_lon) data["hs_directory"]["lat"] = pd.to_numeric(data["hs_directory"]["lat"], errors="coerce") data["hs_directory"]["lon"] = pd.to_numeric(data["hs_directory"]["lon"], errors="coerce")
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
Condense datasets
class_size = data["class_size"] class_size = class_size[class_size["GRADE "] == "09-12"] class_size = class_size[class_size["PROGRAM TYPE"] == "GEN ED"] class_size = class_size.groupby("DBN").agg(np.mean) class_size.reset_index(inplace=True) data["class_size"] = class_size data["demographics"] = data["demographics"][data["demographics"]["schoolyear"] == 20112012] data["graduation"] = data["graduation"][data["graduation"]["Cohort"] == "2006"] data["graduation"] = data["graduation"][data["graduation"]["Demographic"] == "Total Cohort"]
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
Convert AP scores to numeric
cols = ['AP Test Takers ', 'Total Exams Taken', 'Number of Exams with scores 3 4 or 5'] for col in cols: data["ap_2010"][col] = pd.to_numeric(data["ap_2010"][col], errors="coerce")
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
Find correlations
correlations = combined.corr() correlations = correlations["sat_score"] correlations = correlations.dropna() correlations.sort_values(ascending=False, inplace=True) # Interesting correlations tend to have r value > .25 or < -.25 interesting_correlations = correlations[abs(correlations) > 0.25] print(interesting_correlations) # Setup Matplotlib to work in Jupyter notebook %matplotlib inline import matplotlib.pyplot as plt
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
Survey Correlations
# Make a bar plot of the correlations between survey fields and sat_score correlations[survey_fields].plot.bar(figsize=(9,7))
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
From the survey fields, four stand out due to their significant positive correlations:
* N_s - Number of student respondents
* N_p - Number of parent respondents
* aca_s_11 - Academic expectations score based on student responses
* saf_s_11 - Safety and Respect score based on student responses

What are some possible reasons that N_s and N_p could matter?
1. Higher numbers of students and parents responding to the survey may be an indicator that students and parents care more about the school and about academics in general.
1. Maybe larger schools do better on the SAT and higher numbers of respondents is just indicative of a larger overall student population.
1. Maybe there is a hidden underlying correlation, say that rich students/parents or white students/parents are more likely to both respond to surveys and to have the students do well on the SAT.
1. Maybe parents who care more will fill out the surveys and get their kids to fill out the surveys, and these same parents will push their kids to study for the SAT.

Safety and SAT Scores

Both student and teacher perception of safety and respect at school correlate significantly with SAT scores. Let's dig more into this relationship.
# Make a scatterplot of the saf_s_11 column vs the sat-score in combined combined.plot.scatter(x='sat_score', y='saf_s_11', figsize=(9,5))
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
So a high saf_s_11 student safety and respect score doesn't really have any predictive value regarding SAT score. However, a low saf_s_11 has a very strong correlation with low SAT scores.

Map out Safety Scores
# Find the average values for each column for each school_dist in combined
districts = combined.groupby('school_dist').agg(np.mean)

# Reset the index of districts, making school_dist a column again
districts.reset_index(inplace=True)

# Make a map that shows safety scores by district
from mpl_toolkits.basemap import Basemap

plt.figure(figsize=(8,8))

# Setup the Matplotlib Basemap centered on New York City
m = Basemap(projection='merc',
            llcrnrlat=40.496044,
            urcrnrlat=40.915256,
            llcrnrlon=-74.255735,
            urcrnrlon=-73.700272,
            resolution='i')
m.drawmapboundary(fill_color='white')
m.drawcoastlines(color='blue', linewidth=.4)
m.drawrivers(color='blue', linewidth=.4)

# Convert the lat and lon columns of districts to lists
longitudes = districts['lon'].tolist()
latitudes = districts['lat'].tolist()

# Plot the locations
m.scatter(longitudes, latitudes, s=50, zorder=2, latlon=True,
          c=districts['saf_s_11'], cmap='summer')

# Add a colorbar
cbar = m.colorbar(location='bottom', pad="5%")
cbar.set_label('saf_s_11')
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
So it looks like the safest schools are in Manhattan, while the least safe schools are in Brooklyn. This jibes with crime statistics by borough.

Race and SAT Scores

There are a few columns that indicate the percentage of each race at a given school:
* white_per
* asian_per
* black_per
* hispanic_per

By plotting out the correlations between these columns and sat_score, we can see if there are any racial differences in SAT performance.
# Make a plot of the correlations between racial cols and sat_score race_cols = ['white_per', 'asian_per', 'black_per', 'hispanic_per'] race_corr = correlations[race_cols] race_corr.plot(kind='bar')
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
A higher percentage of white and asian students correlates positively with SAT scores, and a higher percentage of black or hispanic students correlates negatively with SAT scores. I wouldn't say any of this is surprising. My guess would be that there is an underlying economic factor which is the cause: white and asian neighborhoods probably have a higher median household income and better-funded schools than black or hispanic neighborhoods.
# Explore schools with low SAT scores and a high hispanic_per combined.plot.scatter(x='hispanic_per', y='sat_score')
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
The above scatterplot shows that a low hispanic percentage isn't particularly predictive of SAT score. However, a high hispanic percentage is highly predictive of a low SAT score.
# Research any schools with a greater than 95% hispanic_per high_hispanic = combined[combined['hispanic_per'] > 95] # Find the names of schools from the data high_hispanic['SCHOOL NAME']
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
The above schools appear to contain a lot of international schools focused on recent immigrants who are learning English as a second language. It makes sense that they would have a harder time on the SAT, which is given solely in English.
# Research any schools with less than 10% hispanic_per and greater than # 1800 average SAT score high_sat_low_hispanic = combined[(combined['hispanic_per'] < 10) & (combined['sat_score'] > 1800)] high_sat_low_hispanic['SCHOOL NAME']
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
Most of the schools above appear to be specialized science and technology schools which receive extra funding and require students to do well on a standardized test before being admitted. So it is reasonable that students at these schools would have a high average SAT score.

Gender and SAT Scores

There are two columns that indicate the percentage of each gender at a school:
* male_per
* female_per
# Investigate gender differences in SAT scores gender_cols = ['male_per', 'female_per'] gender_corr = correlations[gender_cols] gender_corr # Make a plot of the gender correlations gender_corr.plot.bar()
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
In the plot above, we can see that a high percentage of females at a school positively correlates with SAT score, whereas a high percentage of males at a school negatively correlates with SAT score. Neither correlation is extremely strong. More data would be required before I was willing to say that this is a significant effect.
# Investigate schools with high SAT scores and a high female_per combined.plot.scatter(x='female_per', y='sat_score')
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
The above plot appears to show that either very low or very high percentage of females in a school leads to a low average SAT score. However, a percentage in the range 40 to 80 or so can lead to good scores. There doesn't appear to be a strong overall correlation.
# Research any schools with a greater than 60% female_per, and greater # than 1700 average SAT score. high_female_high_sat = combined[(combined['female_per'] > 60) & (combined['sat_score'] > 1700)] high_female_high_sat['SCHOOL NAME']
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
These schools appear to be very selective liberal arts schools that have high academic standards.

AP Scores vs SAT Scores

The Advanced Placement (AP) exams are exams that high schoolers take in order to gain college credit. AP exams can be taken in many different subjects, and passing the AP exam means that colleges may grant you credits.

It makes sense that the number of students in a school who took the AP exam and SAT scores would be highly correlated. Let's dig into this relationship more. Since total_enrollment is highly correlated with sat_score, we don't want to bias our results, so we'll instead look at the percentage of students in each school who took at least one AP exam.
# Compute the percentage of students in each school that took the AP exam combined['ap_per'] = combined['AP Test Takers '] / combined['total_enrollment'] # Investigate the relationship between AP scores and SAT scores combined.plot.scatter(x='ap_per', y='sat_score')
dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb
tleonhardt/CodingPlayground
mit
Below is an example of how to get the precision of your model.

Attention: the exercise asks you to finish one line of code (fitting the model) to complete the example; a possible solution is filled in below.
# Imports are assumed from earlier cells of this notebook; repeated here so the cell runs on its own.
import numpy as np
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

# Let's load iris data again
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Let's split the data into training and testing data.
random_state = np.random.RandomState(0)
n_samples, n_features = X.shape
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]

# Limit to the two first classes, and split into training and test
X_train, X_test, y_train, y_test = train_test_split(X[y < 2], y[y < 2],
                                                    test_size=.5,
                                                    random_state=random_state)

# Create a simple classifier
classifier = svm.LinearSVC(random_state=random_state)

# How could we fit the model? Please find your solution from our example and write down
# your code to fit the svm model from the training data. One possible solution:
classifier.fit(X_train, y_train)

# After we have fit the model, we make predictions.
y_score = classifier.decision_function(X_test)
notebooks/machinelearning/precisionrecall.ipynb
cloudmesh/book
apache-2.0
Get the average precision score by running the cell below.
from sklearn.metrics import average_precision_score average_precision = average_precision_score(y_test, y_score) print('Average precision-recall score: {0:0.2f}'.format( average_precision))
notebooks/machinelearning/precisionrecall.ipynb
cloudmesh/book
apache-2.0
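As an optional follow-up (not part of the original exercise), the same y_test and y_score can be used to plot the full precision-recall curve. This sketch uses only standard scikit-learn and matplotlib calls:

import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

# Compute precision/recall pairs for every decision threshold
precision, recall, _ = precision_recall_curve(y_test, y_score)

plt.step(recall, precision, where='post')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall curve: AP={0:0.2f}'.format(average_precision))
plt.show()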
Train Deep Learning model and validate on test set

LeNET 1989

In this demo you will learn how to train a simple LeNet model using TensorFlow.

Using the LeNet model architecture for training in H2O

We are ready to start the training procedure.
from h2o.estimators.deepwater import H2ODeepWaterEstimator
lenet_model = H2ODeepWaterEstimator(
    epochs=10,
    learning_rate=1e-3,
    mini_batch_size=64,
    network='lenet',
    image_shape=[28,28],
    problem_type='dataset',    ## Not 'image' since we're not passing paths to image files, but raw numbers
    ignore_const_cols=False,   ## We need to keep all 28x28=784 pixel values, even if some are always 0
    channels=1,
    backend="tensorflow"
)

lenet_model.train(x=train_df.names, y=y, training_frame=train_df, validation_frame=test_df)

error = lenet_model.model_performance(valid=True).mean_per_class_error()
print("model error:", error)
examples/deeplearning/notebooks/deeplearning_tensorflow_mnist.ipynb
mathemage/h2o-3
apache-2.0
ConvNet Codes Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier. Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code): ``` self.conv1_1 = self.conv_layer(bgr, "conv1_1") self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2") self.pool1 = self.max_pool(self.conv1_2, 'pool1') self.conv2_1 = self.conv_layer(self.pool1, "conv2_1") self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2") self.pool2 = self.max_pool(self.conv2_2, 'pool2') self.conv3_1 = self.conv_layer(self.pool2, "conv3_1") self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2") self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3") self.pool3 = self.max_pool(self.conv3_3, 'pool3') self.conv4_1 = self.conv_layer(self.pool3, "conv4_1") self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2") self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3") self.pool4 = self.max_pool(self.conv4_3, 'pool4') self.conv5_1 = self.conv_layer(self.pool4, "conv5_1") self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2") self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3") self.pool5 = self.max_pool(self.conv5_3, 'pool5') self.fc6 = self.fc_layer(self.pool5, "fc6") self.relu6 = tf.nn.relu(self.fc6) ``` So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6).
import os import numpy as np import tensorflow as tf from tensorflow_vgg import vgg16 from tensorflow_vgg import utils data_dir = 'flower_photos/' contents = os.listdir(data_dir) classes = [each for each in contents if os.path.isdir(data_dir + each)]
08_transfer-learning/Transfer_Learning.ipynb
adrianstaniec/deep-learning
mit
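Before the batched loop below, it may help to see the idea on a single image. This is a minimal illustrative sketch of pulling out the relu6 code for one file; the file path here is hypothetical:

# Minimal single-image version of the batched extraction below (illustrative sketch)
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
    vgg.build(input_)

with tf.Session() as sess:
    # utils.load_image resizes/crops the image to 224x224 for us
    img = utils.load_image('flower_photos/daisy/example.jpg')  # hypothetical path
    feed_dict = {input_: img.reshape((1, 224, 224, 3))}
    code = sess.run(vgg.relu6, feed_dict=feed_dict)  # shape (1, 4096)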
Below I'm running images through the VGG network in batches.
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 100
codes_list = []
labels = []
batch = []

codes = None

vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
    vgg.build(input_)

with tf.Session() as sess:
    for each in classes:
        print("Starting {} images".format(each))
        class_path = data_dir + each
        files = os.listdir(class_path)
        for ii, file in enumerate(files, 1):
            # Add images to the current batch
            # utils.load_image crops the input images for us, from the center
            img = utils.load_image(os.path.join(class_path, file))
            batch.append(img.reshape((1, 224, 224, 3)))
            labels.append(each)

            # Running the batch through the network to get the codes
            if ii % batch_size == 0 or ii == len(files):
                # Image batch to pass to VGG network
                images = np.concatenate(batch)

                feed_dict = {input_: images}
                codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)

                # Here I'm building an array of the codes
                if codes is None:
                    codes = codes_batch
                else:
                    codes = np.concatenate((codes, codes_batch))

                # Reset to start building the next batch
                batch = []
                print('{} images processed'.format(ii))

# write codes to file
with open('codes', 'w') as f:
    codes.tofile(f)

# write labels to file
import csv
with open('labels', 'w') as f:
    writer = csv.writer(f, delimiter='\n')
    writer.writerow(labels)
08_transfer-learning/Transfer_Learning.ipynb
adrianstaniec/deep-learning
mit
Data prep

As usual, now we need to one-hot encode our labels and create validation/test sets.

First up, creating our labels! From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
from sklearn.preprocessing import LabelBinarizer lb = LabelBinarizer() lb.fit(labels) labels_vecs = lb.transform(labels) labels_vecs
08_transfer-learning/Transfer_Learning.ipynb
adrianstaniec/deep-learning
mit
Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.

You can create the splitter like so:

ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)

Then split the data with:

splitter = ss.split(x, y)

ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices.
from sklearn.model_selection import StratifiedShuffleSplit sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0) for train_index, test_index in sss.split(codes, labels_vecs): train_x, rest_x = codes[train_index], codes[test_index] train_y, rest_y = labels_vecs[train_index], labels_vecs[test_index] sss = StratifiedShuffleSplit(n_splits=1, test_size=0.5, random_state=0) for train_index, test_index in sss.split(rest_x, rest_y): val_x, test_x = rest_x[train_index], rest_x[test_index] val_y, test_y = rest_y[train_index], rest_y[test_index] print("Train shapes (x, y):", train_x.shape, train_y.shape) print("Validation shapes (x, y):", val_x.shape, val_y.shape) print("Test shapes (x, y):", test_x.shape, test_y.shape)
08_transfer-learning/Transfer_Learning.ipynb
adrianstaniec/deep-learning
mit
Classifier layers

Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.

Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each of them is a 4096-d vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
from tensorflow import layers inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]]) labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]]) fc = tf.layers.dense(inputs_, 2000, activation=tf.nn.relu) logits = tf.layers.dense(fc, labels_vecs.shape[1], activation=None) cost = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)) optimizer = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost) # Operations for validation/test accuracy predicted = tf.nn.softmax(logits) correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
08_transfer-learning/Transfer_Learning.ipynb
adrianstaniec/deep-learning
mit
Training

Here, we'll train the network.
def get_batches(x, y, n_batches=10):
    """Simple batching helper. get_batches was defined in an earlier cell of the
    original notebook; a minimal version is reproduced here so this cell runs
    on its own."""
    batch_size = len(x) // n_batches
    for ii in range(0, n_batches * batch_size, batch_size):
        if ii != (n_batches - 1) * batch_size:
            X, Y = x[ii: ii + batch_size], y[ii: ii + batch_size]
        else:
            # Put the remainder into the last batch
            X, Y = x[ii:], y[ii:]
        yield X, Y

epochs = 10
batches = 100

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for e in range(epochs):
        b = 0
        for x, y in get_batches(train_x, train_y, batches):
            feed = {inputs_: x,
                    labels_: y}
            batch_cost, _ = sess.run([cost, optimizer], feed_dict=feed)
            print("Epoch: {}/{} ".format(e+1, epochs),
                  "Batch: {}/{} ".format(b+1, batches),
                  "Training loss: {:.4f}".format(batch_cost))
            b += 1
    saver.save(sess, "checkpoints/flowers.ckpt")
08_transfer-learning/Transfer_Learning.ipynb
adrianstaniec/deep-learning
mit
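The accuracy op defined with the classifier above is never evaluated during training. A short validation check, restoring the saved checkpoint and using only names already defined in this notebook, might look like this sketch:

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    feed = {inputs_: val_x,
            labels_: val_y}
    val_acc = sess.run(accuracy, feed_dict=feed)
    print("Validation accuracy: {:.4f}".format(val_acc))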
Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
# imread was presumably imported in an earlier cell; scipy.ndimage.imread is one option
# (removed in SciPy >= 1.2; imageio.imread is a drop-in alternative).
from scipy.ndimage import imread

test_img_path = 'flower_photos/daisy/144603918_b9de002f60_m.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)

# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
    print('"vgg" object already exists.  Will not create again.')
else:
    # create vgg
    with tf.Session() as sess:
        input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
        vgg = vgg16.Vgg16()
        vgg.build(input_)

batch = []
with tf.Session() as sess:
    img = utils.load_image(test_img_path)
    batch.append(img.reshape((1, 224, 224, 3)))

    images = np.concatenate(batch)
    feed_dict = {input_: images}
    code = sess.run(vgg.relu6, feed_dict=feed_dict)

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))

    feed = {inputs_: code}
    prediction = sess.run(predicted, feed_dict=feed).squeeze()

plt.imshow(test_img)

plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
08_transfer-learning/Transfer_Learning.ipynb
adrianstaniec/deep-learning
mit
Find photos that were misclassified
data_dir = 'flower_photos/' contents = os.listdir(data_dir) classes = [each for each in contents if os.path.isdir(data_dir + each)] with tf.Session() as sess: saver = tf.train.Saver() with tf.Session() as sess2: saver.restore(sess2, tf.train.latest_checkpoint('checkpoints')) for each in classes: print("Starting {} images".format(each)) class_path = data_dir + each files = os.listdir(class_path) for file in files: batch = [] labels = [] img = utils.load_image(os.path.join(class_path, file)) batch.append(img.reshape((1, 224, 224, 3))) labels.append(lb.transform([each])[0]) images = np.concatenate(batch) feed_dict = {input_: images} code = sess.run(vgg.relu6, feed_dict=feed_dict) feed = {inputs_: code, labels_: labels} correct, prediction = sess2.run([correct_pred, predicted], feed_dict=feed) if not correct[0]: #test_img = imread(os.path.join(class_path, file)) #plt.imshow(test_img) #plt.barh(np.arange(5), prediction) #_ = plt.yticks(np.arange(5), lb.classes_) print(os.path.join(class_path, file))
08_transfer-learning/Transfer_Learning.ipynb
adrianstaniec/deep-learning
mit
Exceptions

An exception is an event that occurs during the execution of a program and disrupts the normal flow of the program's instructions.

You've already seen some exceptions:
- syntax errors
- divide by 0

Many programs want to know about exceptions when they occur. For example, if the input to a program is a file path and the user inputs an invalid or non-existent path, the program generates an exception. It may be desired to provide a response to the user in this case.

It may also be that programs will generate exceptions deliberately. This is a way of indicating that there is an error in the inputs provided. In general, this is the preferred style for dealing with invalid inputs or states inside a Python function, rather than having an error return.

Catching Exceptions

Python provides a way to detect when an exception occurs. This is done by the use of a block of code surrounded by a "try" and "except" statement.
def divide1(numerator, denominator):
    try:
        result = numerator/denominator
        print("result = %f" % result)
    except:
        # A bare "except" catches *every* exception, not just ZeroDivisionError,
        # so the message below is misleading for divide1("x", 2), which actually
        # raises a TypeError.
        print("You can't divide by 0!!")

divide1(1.0, 2)
divide1(1.0, 0)
divide1("x", 2)
Spring2018/Debugging-and-Exceptions/Exceptions.ipynb
UWSEDS/LectureNotes
bsd-2-clause
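The file-path scenario mentioned above works the same way. Here is a small sketch (the file name is hypothetical) that catches only the specific exception it expects, unlike the bare except in divide1:

def read_first_line(path):
    try:
        with open(path) as f:
            return f.readline()
    except FileNotFoundError:
        # Catch only the exception we expect, so other errors still surface
        print("No such file: %s" % path)

read_first_line("no_such_file.txt")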