Method 1: update using the info in the gradient

This means we will update the image based on the value of the gradient. Ideally, this will give us an adversarial image with less wiggle, as we only need to add a little wiggle at the points where the gradient is large.
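Written as an update rule (my restatement of what the cell below does; $J$ denotes the loss with respect to the target label and $\eta$ the step size, neither symbol appears in the notebook itself):

$$x_{t+1} = x_t - \eta \, \nabla_x J(x_t, y_{\text{target}})$$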
adversarial_img = origin_images.copy()
for i in range(0, iter_num):
    gradient = img_gradient.eval({x: adversarial_img, y_: target_labels, keep_prob: 1.0})
    adversarial_img = adversarial_img - eta * gradient

    prediction = tf.argmax(y_pred, 1)
    prediction_val = prediction.eval(feed_dict={x: adversarial_img, keep_prob: 1.0}, session=sess)
    print("predictions", prediction_val)

    probabilities = y_pred
    probabilities_val = probabilities.eval(feed_dict={x: adversarial_img, keep_prob: 1.0}, session=sess)
    print('Confidence 2:', probabilities_val[:, 2])
    print('Confidence 6:', probabilities_val[:, 6])
    print('-----------------------------------')
predictions [2 2 2 2 2 2 2 2 2 2] Confidence 2: [ 0.99839801 0.50398463 0.99999976 0.94279677 0.99306434 0.99999869 0.99774051 0.99999976 0.99999988 0.99998116] Confidence 6: [ 6.17733331e-09 3.38034965e-02 3.61205510e-11 5.49222386e-05 1.65044228e-04 2.51908945e-11 4.98797135e-07 3.61205510e-11 8.44649004e-11 1.06398193e-06] ----------------------------------- predictions [2 6 2 2 6 2 2 2 2 2] Confidence 2: [ 0.90054828 0.03599812 0.99992478 0.47941697 0.3857542 0.99992812 0.88223279 0.99992478 0.99999475 0.99883395] Confidence 6: [ 5.24239840e-06 9.09998178e-01 3.14857857e-07 1.03679458e-02 4.14035559e-01 2.03342374e-08 7.65050703e-04 3.14857573e-07 9.70845377e-08 6.13783835e-04] ----------------------------------- predictions [3 6 2 6 6 2 2 2 2 2] Confidence 2: [ 0.20391738 0.02125967 0.99488431 0.12929185 0.01710233 0.99819332 0.36685336 0.99488431 0.99973804 0.86787164] Confidence 6: [ 5.72559598e-04 9.47188795e-01 2.24302203e-04 3.12704206e-01 9.43210959e-01 3.14465137e-06 7.00001568e-02 2.24301548e-04 6.08862283e-05 1.23816974e-01] ----------------------------------- predictions [8 6 2 6 6 2 6 2 2 6] Confidence 2: [ 0.43293276 0.01552619 0.83097196 0.03268598 0.0135146 0.98310214 0.17826064 0.83097178 0.97425836 0.11591232] Confidence 6: [ 1.79927237e-02 9.61492419e-01 3.42250541e-02 7.99241543e-01 9.55691159e-01 1.36969538e-04 6.16287053e-01 3.42250690e-02 1.96619965e-02 8.76042128e-01] ----------------------------------- predictions [3 6 6 6 6 2 6 6 6 6] Confidence 2: [ 0.17021255 0.01231071 0.19562197 0.01843761 0.01121253 0.88237929 0.04999156 0.19562216 0.23194622 0.06901591] Confidence 6: [ 0.28051642 0.9694531 0.53274441 0.88252693 0.96344072 0.00382947 0.86769354 0.53274429 0.73012829 0.9247852 ] ----------------------------------- predictions [6 6 6 6 6 2 6 6 6 6] Confidence 2: [ 0.07458363 0.01019469 0.06034603 0.01337874 0.00959486 0.66686749 0.03255163 0.06034593 0.07704844 0.05089864] Confidence 6: [ 0.72089374 0.974684 0.84580153 0.91406661 0.96881437 0.0405265 0.91041219 0.84580171 0.89538473 0.94383085] ----------------------------------- predictions [6 6 6 6 6 2 6 6 6 6] Confidence 2: [ 0.03893126 0.00872765 0.03884212 0.01059401 0.00841283 0.46066824 0.02436219 0.0388421 0.05182601 0.04104275] Confidence 6: [ 0.84897608 0.97832572 0.89983678 0.9321211 0.9727276 0.18205585 0.93081117 0.8998369 0.92495954 0.95425797] ----------------------------------- predictions [6 6 6 6 6 6 6 6 6 6] Confidence 2: [ 0.02573399 0.00763769 0.02883839 0.0087844 0.00748532 0.29014409 0.01946484 0.02883845 0.03953246 0.03457938] Confidence 6: [ 0.89540702 0.98103446 0.92485535 0.9435631 0.97574246 0.44339713 0.94352108 0.92485535 0.94018751 0.9611299 ] ----------------------------------- predictions [6 6 6 6 6 6 6 6 6 6] Confidence 2: [ 0.01902132 0.00679732 0.02307542 0.00752009 0.00675084 0.18342426 0.01634321 0.0230754 0.03184611 0.02982386] Confidence 6: [ 0.9198994 0.983105 0.93942189 0.95158327 0.97813272 0.62893689 0.95190406 0.93942195 0.9500286 0.96620733] ----------------------------------- predictions [6 6 6 6 6 6 6 6 6 6] Confidence 2: [ 0.01520571 0.00613233 0.0192919 0.00655318 0.0061516 0.13245167 0.01406015 0.01929193 0.02656174 0.02627148] Confidence 6: [ 0.93462354 0.98475128 0.94931847 0.95763385 0.98007178 0.73152864 0.95811945 0.94931847 0.95700026 0.97002554] -----------------------------------
Apache-2.0
notebook/AdversarialMNIST_sketch.ipynb
tiddler/AdversarialMNIST
Method 2: update using the sign of the gradient

Perform a fixed-size step for each pixel, in the direction given by the sign of the gradient.
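In equation form (again my notation, not the notebook's), every pixel moves by the same amount $\eta$ per iteration, which resembles an iterative fast-gradient-sign update:

$$x_{t+1} = x_t - \eta \, \operatorname{sign}\!\left(\nabla_x J(x_t, y_{\text{target}})\right)$$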
eta = 0.02
iter_num = 10
adversarial_img = origin_images.copy()
for i in range(0, iter_num):
    gradient = img_gradient.eval({x: adversarial_img, y_: target_labels, keep_prob: 1.0})
    adversarial_img = adversarial_img - eta * np.sign(gradient)

    prediction = tf.argmax(y_pred, 1)
    prediction_val = prediction.eval(feed_dict={x: adversarial_img, keep_prob: 1.0}, session=sess)
    print("predictions", prediction_val)

    probabilities = y_pred
    probabilities_val = probabilities.eval(feed_dict={x: adversarial_img, keep_prob: 1.0}, session=sess)
    print('Confidence 2:', probabilities_val[:, 2])
    print('Confidence 6:', probabilities_val[:, 6])
    print('-----------------------------------')
predictions [2 2 2 2 2 2 2 2 2 2] Confidence 2: [ 0.99979955 0.86275303 1. 0.9779107 0.99902475 0.99999976 0.99971646 1. 1. 0.99999583] Confidence 6: [ 1.66726910e-10 1.24624989e-03 4.56519967e-13 8.34497041e-06 5.59669525e-06 1.79199841e-12 1.30735716e-08 4.56519967e-13 3.46567068e-12 6.27776799e-08] ----------------------------------- predictions [2 2 2 2 2 2 2 2 2 2] Confidence 2: [ 0.99511552 0.40977556 0.99999964 0.85962117 0.98393112 0.99999559 0.99609464 0.99999964 0.99999964 0.99994993] Confidence 6: [ 2.01981152e-08 8.79419371e-02 1.22339749e-10 4.89167869e-04 1.19251851e-03 1.82640972e-10 1.73009698e-06 1.22339749e-10 5.76917680e-10 6.33407490e-06] ----------------------------------- predictions [2 6 2 2 2 2 2 2 2 2] Confidence 2: [ 0.92691237 0.0824458 0.99998283 0.54052806 0.69164306 0.99994981 0.94957453 0.99998283 0.99999595 0.99876642] Confidence 6: [ 1.97517147e-06 7.88923085e-01 2.59027715e-08 1.52549399e-02 1.51991054e-01 1.05832694e-08 1.59343646e-04 2.59027715e-08 7.01664717e-08 5.28034056e-04] ----------------------------------- predictions [3 6 2 6 6 2 2 2 2 2] Confidence 2: [ 0.38114282 0.00284192 0.99941409 0.21674696 0.04668415 0.99948311 0.68562496 0.99941409 0.99993396 0.96271199] Confidence 6: [ 8.61597146e-05 9.92703676e-01 5.69670192e-06 2.89392889e-01 8.71554732e-01 4.64192766e-07 6.55736076e-03 5.69670192e-06 6.37889843e-06 3.00177168e-02] ----------------------------------- predictions [2 6 2 6 6 2 2 2 2 6] Confidence 2: [ 5.83209932e-01 6.27083209e-05 9.90212023e-01 2.70510484e-02 2.11280608e-03 9.95150447e-01 3.76711369e-01 9.90212023e-01 9.98733342e-01 4.64150667e-01] Confidence 6: [ 2.44543725e-03 9.99762475e-01 3.85647581e-04 8.70872498e-01 9.93551373e-01 1.34517468e-05 1.35343209e-01 3.85647581e-04 4.81195719e-04 5.04597306e-01] ----------------------------------- predictions [3 6 2 6 6 2 6 2 2 6] Confidence 2: [ 1.45977870e-01 2.26086172e-06 8.54788423e-01 2.14479375e-03 8.69234063e-05 9.71471608e-01 1.03391998e-01 8.54788423e-01 9.68404591e-01 4.15184237e-02] Confidence 6: [ 3.94732542e-02 9.99990463e-01 1.52496705e-02 9.87855494e-01 9.99670744e-01 2.56853382e-04 7.45402575e-01 1.52496705e-02 2.36869231e-02 9.47378218e-01] ----------------------------------- predictions [6 6 2 6 6 2 6 2 2 6] Confidence 2: [ 2.31417045e-01 1.05129189e-07 3.71916145e-01 1.65524441e-04 4.47992488e-06 8.64461243e-01 6.83465134e-03 3.71916145e-01 5.43019056e-01 2.49437825e-03] Confidence 6: [ 0.3545565 0.9999994 0.22301799 0.99881208 0.99998033 0.00355855 0.98034912 0.22301799 0.42559034 0.99609852] ----------------------------------- predictions [6 6 6 6 6 2 6 6 6 6] Confidence 2: [ 1.95937138e-02 6.35231245e-09 7.78834969e-02 2.18999739e-05 2.25597717e-07 5.93729377e-01 3.56450648e-04 7.78834969e-02 3.73114012e-02 1.58468843e-04] Confidence 6: [ 0.85764623 1. 0.81097031 0.99987864 0.99999869 0.03135163 0.99828064 0.81097031 0.94914585 0.9996804 ] ----------------------------------- predictions [6 6 6 6 6 2 6 6 6 6] Confidence 2: [ 2.98802019e-03 4.13267927e-08 6.95284083e-03 2.13227167e-06 1.25024888e-08 4.91525024e-01 7.30973698e-05 6.95284083e-03 1.61215290e-03 1.72482469e-05] Confidence 6: [ 0.98444527 1. 0.98080987 0.99998796 1. 0.19622776 0.99981946 0.98080987 0.99635267 0.99996758] ----------------------------------- predictions [6 6 6 6 6 6 6 6 6 6] Confidence 2: [ 2.74159829e-04 2.28510810e-09 5.19630907e-04 2.98820567e-07 8.52226556e-09 2.46330112e-01 4.67527661e-06 5.19630907e-04 1.09362918e-04 1.23258530e-06] Confidence 6: [ 0.99770629 1. 
0.99812537 0.99999869 1. 0.58065033 0.99997211 0.99812537 0.99967241 0.99999702] -----------------------------------
Apache-2.0
notebook/AdversarialMNIST_sketch.ipynb
tiddler/AdversarialMNIST
Take a look at an individual image
threshold = 0.99 eta = 0.001 prediction=tf.argmax(y_pred,1) probabilities=y_pred adversarial_img = origin_images[1: 2].copy() adversarial_label = target_labels[1: 2] start_img = adversarial_img.copy() confidence = 0 iter_num = 0 prob_history = list() while confidence < threshold: gradient = img_gradient.eval({x: adversarial_img, y_: adversarial_label, keep_prob: 1.0}) adversarial_img -= eta * np.sign(gradient) probabilities_val = probabilities.eval(feed_dict={x: adversarial_img, keep_prob: 1.0}, session=sess) confidence = probabilities_val[:, 6] prob_history.append(probabilities_val[0]) iter_num += 1 print(iter_num) sns.set_style('whitegrid') prob_history = np.array(prob_history) fig = plt.figure(figsize=(8, 6)) ax = fig.add_subplot(111) for i, record in enumerate(prob_history.T): plt.plot(record, color=colors_list[i]) ax.legend([str(x) for x in range(0, 10)], loc='center left', bbox_to_anchor=(1.05, 0.5), fontsize=14) ax.set_xlabel('Iteration') ax.set_ylabel('Prediction Confidence') sns.set_style('white') fig = plt.figure(figsize=(9, 4)) ax1 = fig.add_subplot(1,3,1) ax1.axis('off') ax1.imshow(start_img.reshape([28, 28]), interpolation=None, cmap=plt.cm.gray) ax1.title.set_text('Confidence for 2: ' + '{:.4f}'.format(prob_history[0][2]) + '\nConfidence for 6: ' + '{:.4f}'.format(prob_history[0][6])) ax2 = fig.add_subplot(1,3,2) ax2.axis('off') ax2.imshow((adversarial_img - start_img).reshape([28, 28]), interpolation=None, cmap=plt.cm.gray) ax2.title.set_text('Delta') ax3 = fig.add_subplot(1,3,3) ax3.axis('off') ax3.imshow((adversarial_img).reshape([28, 28]), interpolation=None, cmap=plt.cm.gray) ax3.title.set_text('Confidence for 2: ' + '{:.4f}'.format(prob_history[-1][2]) + '\nConfidence for 6: ' + '{:.4f}'.format(prob_history[-1][6])) plt.show() print("Difference Measure:", np.sum((adversarial_img - start_img) ** 2)) eta = 0.01 prediction=tf.argmax(y_pred,1) probabilities=y_pred adversarial_img = origin_images[1: 2].copy() adversarial_label = target_labels[1: 2] start_img = adversarial_img.copy() confidence = 0 iter_num = 0 prob_history = list() while confidence < threshold: gradient = img_gradient.eval({x: adversarial_img, y_: adversarial_label, keep_prob: 1.0}) adversarial_img -= eta * gradient probabilities_val = probabilities.eval(feed_dict={x: adversarial_img, keep_prob: 1.0}, session=sess) confidence = probabilities_val[:, 6] prob_history.append(probabilities_val[0]) iter_num += 1 print(iter_num) sns.set_style('white') fig = plt.figure(figsize=(9, 4)) ax1 = fig.add_subplot(1,3,1) ax1.axis('off') ax1.imshow(start_img.reshape([28, 28]), interpolation=None, cmap=plt.cm.gray) ax1.title.set_text('Confidence for 2: ' + '{:.4f}'.format(prob_history[0][2]) + '\nConfidence for 6: ' + '{:.4f}'.format(prob_history[0][6])) ax2 = fig.add_subplot(1,3,2) ax2.axis('off') ax2.imshow((adversarial_img - start_img).reshape([28, 28]), interpolation=None, cmap=plt.cm.gray) ax2.title.set_text('Delta') ax3 = fig.add_subplot(1,3,3) ax3.axis('off') ax3.imshow((adversarial_img).reshape([28, 28]), interpolation=None, cmap=plt.cm.gray) ax3.title.set_text('Confidence for 2: ' + '{:.4f}'.format(prob_history[-1][2]) + '\nConfidence for 6: ' + '{:.4f}'.format(prob_history[-1][6])) plt.show() print("Difference Measure:", np.sum((adversarial_img - start_img) ** 2)) sns.set_style('whitegrid') prob_history = np.array(prob_history) fig = plt.figure(figsize=(8, 6)) ax = fig.add_subplot(111) for i, record in enumerate(prob_history.T): plt.plot(record, color=colors_list[i]) ax.legend([str(x) for x in range(0, 10)], 
loc='center left', bbox_to_anchor=(1.05, 0.5), fontsize=14) ax.set_xlabel('Iteration') ax.set_ylabel('Prediction Confidence')
_____no_output_____
Apache-2.0
notebook/AdversarialMNIST_sketch.ipynb
tiddler/AdversarialMNIST
Chapter Break
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(inputs, output, test_size=0.33, random_state=42)

from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline

pipe = make_pipeline(LinearRegression())
pipe.fit(X_train, y_train)
pipe.score(X_train, y_train)
pipe.score(X_test, y_test)

from sklearn.linear_model import Ridge
# tactic 1: minimize weights, smaller the better, higher penalty on large weights
# = ridge regression
pipe = make_pipeline(StandardScaler(), PolynomialFeatures(degree=3), Ridge())
pipe.fit(X_train, y_train)
pipe.steps[2][1].coef_
pipe.steps[2][1].coef_.max(), pipe.steps[2][1].coef_.min(), pipe.steps[2][1].coef_.std()
pipe.score(X_train, y_train)
pipe.score(X_test, y_test)

from sklearn.linear_model import Lasso
# tactic 2: minimize number of non-zero weights
# = Lasso
pipe = make_pipeline(StandardScaler(), PolynomialFeatures(degree=3), Lasso())
pipe.fit(X_train, y_train)
pipe.score(X_train, y_train)
pipe.score(X_test, y_test)
pipe.steps[2][1].coef_
pipe.steps[2][1].coef_.max(), pipe.steps[2][1].coef_.min(), pipe.steps[2][1].coef_.std()

from sklearn.linear_model import ElasticNet
# tactic 3: mix lasso and ridge!
# = elasticnet
pipe = make_pipeline(StandardScaler(), PolynomialFeatures(degree=3), ElasticNet())
pipe.fit(X_train, y_train)
pipe.score(X_train, y_train)
pipe.score(X_test, y_test)
pipe.steps[2][1].coef_
pipe.steps[2][1].coef_.max(), pipe.steps[2][1].coef_.min(), pipe.steps[2][1].coef_.std()
_____no_output_____
MIT
Chapter 4.ipynb
PacktPublishing/-Python-Your-First-Step-Toward-Data-Science-V-
Understanding regression and linear regression

- `np.concatenate` joins a sequence of arrays along an existing axis.
- `np.ones` returns a new array of a given shape and type, filled with ones.
- `np.zeros` returns a new array of a given shape and type, filled with zeros.
- `np.dot`: if `a` is an N-D array and `b` is a 1-D array, it is a *sum product over the last axis of a and b*.
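For reference, the cell below trains linear regression by plain batch gradient descent. Writing $X$ for the bias-augmented design matrix, $w$ for the weights, $n$ for the number of samples and $\alpha$ for the learning rate (my notation, restating the code), each epoch applies:

$$\nabla_w = \frac{1}{n} X^{\top}(Xw - y), \qquad w \leftarrow w - \alpha\,\nabla_w$$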
learning_rate = 0.01
fit_intercept = True
weights = 0

def fit(X, y):
    global weights
    if fit_intercept:
        X = np.concatenate((np.ones((X.shape[0], 1)), X), axis=1)
    weights = np.zeros(X.shape[1])

    # gradient descent (there are other optimizations)
    for i in range(1000):  # epochs
        current_prediction = np.dot(X, weights)  # linear regression
        gradient = np.dot(X.T, (current_prediction - y)) / y.size  # find the gradient
        weights -= learning_rate * gradient  # modify the weights using the gradient

def predict_prob(X):
    global weights
    if fit_intercept:
        X = np.concatenate((np.ones((X.shape[0], 1)), X), axis=1)
    return np.dot(X, weights)
_____no_output_____
MIT
Chapter 4.ipynb
PacktPublishing/-Python-Your-First-Step-Toward-Data-Science-V-
Notation:
- SAL: small area
- PP: police precinct
- AEA: Albers Equal Area Conic
- CPS: crime per SAL
from random import shuffle, randint
import random
import csv
from collections import defaultdict

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from mpl_toolkits.basemap import Basemap
from shapely.geometry import Polygon, Point, MultiPoint, MultiPolygon, LineString, mapping, shape
from descartes import PolygonPatch
import fiona
from fiona import collection
import geopandas as gpd
from geopandas.tools import sjoin  # rtree index built in, used with inner, intersection
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
def sjoin(left_df, right_df, how='inner', op='intersects', lsuffix='left', rsuffix='right', **kwargs):
    """Spatial join of two GeoDataFrames.

    left_df, right_df are GeoDataFrames
    how: type of join
        left  -> use keys from left_df; retain only left_df geometry column
        right -> use keys from right_df; retain only right_df geometry column
        inner -> use intersection of keys from both dfs; retain only left_df geometry column
    op: binary predicate {'intersects', 'contains', 'within'}
        see http://toblerity.org/shapely/manual.html#binary-predicates
    lsuffix: suffix to apply to overlapping column names (left GeoDataFrame)
    rsuffix: suffix to apply to overlapping column names (right GeoDataFrame)
    """
def find_intersections(o): from collections import defaultdict paired_ind = [o.pp_index, o.sal_index] d_over_ind = defaultdict(list) # creating a dictionary that has prescints as keys and associated small areas as values for i in range(len(paired_ind[0].values)): if not paired_ind[0].values[i]==paired_ind[1].values[i]: # it shows itself as intersection d_over_ind[paired_ind[0].values[i]].append(paired_ind[1].values[i]) # get rid of the pol precincts with no small areas associated to them- not the most efficient way d_temp = {} for l in d_over_ind: if len(d_over_ind[l]): d_temp[l] = d_over_ind[l] return d_temp def calculate_join_indices(g1_reind, g2_reind): # A: region of the police data with criminal record # C: small area with population data # we look for all small areas intersecting a given C_i, calculate the fraction of inclusion, scale the # population accordingly: area(A_j, where A_j crosses C_i)/area(A_j)* popul(A_j) # the actual indexing: out = sjoin(g1_reind, g2_reind, how ="inner", op = "intersects") out.drop('index_right', axis=1, inplace=True) # there is a double index fo smal areas, so we drop one #out_sorted = out.sort(columns='polPrecincts_index', ascending=True) # guess sorting is not necessary, cause we are # using doctionaries at later stages #dict_over_ind = find_intersections(out_sorted) # output retains only 1 area (left or right join), and gives no intersection area. # so we create an array with paired indices: police precincts with associated small areas # we use it in a loop in a function below dict_over_ind = find_intersections(out) return dict_over_ind def calculate_inclusion_indices(g1_reind, g2_reind): out = sjoin(g1_reind, g2_reind, op = "contains") ## PP contains SAL out.drop('index_right', axis=1, inplace=True) dict_over_ind = find_intersections(out) return dict_over_ind def calculate_join(dict_over_ind, g1_reind, g2_reind): area_total = 0 data_aggreg = [] # note to self: make sure to import shapely Polygon for index1, crim in g1_reind.iterrows(): try: index1 = crim.pp_index sals_found = dict_over_ind[index1] for sal in range(len(sals_found)): pom = g2_reind[g2_reind.sal_index == sals_found[sal]]['geometry'] #if pom.intersects(crim['geometry']).values[0]: area_int = pom.intersection(crim['geometry']).area.values[0] if area_int>0: area_total += area_int area_crim = crim['geometry'].area area_popu = pom.values[0].area popu_count = g2_reind[g2_reind.sal_index == sals_found[sal]]['PPL_CNT'].values[0] murd_count = crim['murd_cnt'] pol_province = crim['province'] popu_frac = (area_int / area_popu) * popu_count# fraction of the pop area contained inside the crim #print(popu_frac) extra_info_col_names = ['DC_NAME','MN_NAME','MP_NAME','PR_NAME','SP_NAME'] extra_info_col_codes = ['MN_CODE','MP_CODE','PR_CODE','SAL_CODE','SP_CODE'] extra_names = g2_reind[g2_reind.sal_index == sals_found[sal]][extra_info_col_names]#.filter(regex=("NAME")) extra_codes = g2_reind[g2_reind.sal_index == sals_found[sal]][extra_info_col_codes]#.filter(regex=("NAME")) data_aggreg.append({'geometry': pom.intersection(crim['geometry']).values[0], 'id1': index1,\ 'id2': sals_found[sal] ,'area_pp': area_crim,'area_sal': area_popu,\ 'area_inter': area_int, 'popu_inter' : popu_frac, 'popu_sal': popu_count,\ 'murd_cnt': murd_count,'province': pol_province, 'DC_NAME': extra_names.DC_NAME.values[0],\ 'MN_NAME': extra_names.MN_NAME.values[0], 'MP_NAME': extra_names.MP_NAME.values[0],\ 'PR_NAME': extra_names.PR_NAME.values[0],'SP_NAME': extra_names.SP_NAME.values[0],\ 'MN_CODE': 
extra_codes.MN_CODE.values[0],'MP_CODE': extra_codes.MP_CODE.values[0],\ 'PR_CODE': extra_codes.PR_CODE.values[0],'SAL_CODE': extra_codes.SAL_CODE.values[0],\ 'SP_CODE': extra_codes.SP_CODE.values[0]} ) except: pass df_t = gpd.GeoDataFrame(data_aggreg,columns=['geometry', 'id1','id2','area_pp',\ 'area_sal','area_inter', 'popu_inter',\ 'popu_sal', 'murd_cnt','province','DC_NAME',\ 'MN_NAME','MP_NAME','PR_NAME','SP_NAME',\ 'MN_CODE','MP_CODE','PR_CODE','SAL_CODE','SP_CODE']) #df_t.to_file(out_name) return df_t, area_total, data_aggreg # this function adds the remaining columns, calculates fractions etc def compute_final_col(df_temp): # add population data per police percinct to the main table # id1- PP, id2 - SAL temp = df_temp.groupby(by=['id1'])['popu_inter'].sum().reset_index() data_with_population = pd.merge(df_temp, temp, on='id1', how='outer')\ .rename(columns={'popu_inter_y':'popu_frac_per_pp', 'popu_inter_x':'popu_inter'}) # finally, update the murder rate per SAL : id2 is sal's id data_with_population['murd_est_per_int'] = data_with_population['popu_inter']/data_with_population['popu_frac_per_pp']\ * data_with_population['murd_cnt'] data_mur_per_int = data_with_population.groupby(by=['id2'])['murd_est_per_int'].sum().reset_index() data_mur_per_sal = data_mur_per_int.rename(columns={'murd_est_per_int':'murd_est_per_sal'}) data_with_population['ratio_per_int'] = data_with_population['popu_inter']/data_with_population['popu_frac_per_pp']\ data_complete = pd.merge(data_with_population, data_mur_per_sal, on='id2', how='outer')\ .rename(columns={'id1':'index_PP', 'id2':'index_SAL'}) return data_complete
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Main functions to find intersections. The files loaded here are the AEA-projected shapefiles.
salSHP_upd = 'shapefiles/updated/sal_population_aea.shp'
polSHP_upd = 'shapefiles/updated/polPrec_murd2015_prov_aea.shp'

geo_pol = gpd.GeoDataFrame.from_file(polSHP_upd)
geo_sal = gpd.GeoDataFrame.from_file(salSHP_upd)

geo_pol_reind = geo_pol.reset_index().rename(columns={'index': 'pp_index'})
geo_sal_reind = geo_sal.reset_index().rename(columns={'index': 'sal_index'})

#dict_int = calculate_join_indices(geo_pol_reind, geo_sal_reind)
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Test on a subset:
gt1 = geo_pol_reind[geo_pol.province == "Free State"].head(n=2)
gt2 = geo_sal_reind[geo_sal_reind.PR_NAME == "Free State"].reset_index()
d = calculate_join_indices(gt1, gt2)
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Running the intersections on pre-computed indices:
from timeit import default_timer as timer

#start = timer()
#df_inc, sum_area_inc, data_inc = calculate_join(dict_inc, geo_pol_reind, geo_sal_reind)
#end = timer()
#print("1st", end - start)

start = timer()
df_int, sum_area_int, data_int = calculate_join(dict_int, geo_pol_reind, geo_sal_reind)
end = timer()
print("2nd", end - start)
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Find police precincts within the Western Cape (WC) boundary
za_province = gpd.read_file('za-provinces.topojson', driver='GeoJSON')  #.set_index('id')
za_province.crs = {'init': '27700'}
wc_boundary = za_province.ix[8].geometry  # WC

#pp_WC = geo_pol[geo_pol.geometry.within(wc_boundary)]
pp_WC_in = geo_pol[geo_pol.geometry.intersects(wc_boundary)]  #.unary_union,
sal_wc_union_bound = sal_WC_in.unary_union

pp_WC_overlaps = pp_WC_in[pp_WC_in.province != "Western Cape"]
pp_WC_pol_annot = pp_WC_in[pp_WC_in.province == "Western Cape"]

#pp_test = pp_WC_in[pp_WC_in['compnt_nm'].isin(['atlantis','philadelphia','kraaifontein','brackenfell','kuilsriver','kleinvleveerste river','macassar','somerset west','fish hoek'])]
#pp_test = pp_WC_in[pp_WC_in['compnt_nm'].isin(['beaufort west','doring bay','murraysburg', 'strandfontein','nuwerus','lutzville'])]

%matplotlib inline
#pp_WC_overlaps.plot()
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Adding final columns:
# There are 101,546 intersections
df_int_aea = compute_final_col(df_int)  # add final calculations
df_int_aea.to_csv('data/pp_int_intersections2.csv')
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Some intersections are multipolygons (PP and SAL intersect in multiple areas):
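Before looking at one example geometry below, a quick way to count how many of the stored intersection geometries are multi-part is to tally their types (a sketch only; it assumes `df_int_aea` is still the GeoDataFrame built above, and uses GeoPandas' `geom_type` attribute):

```python
# tally intersection geometries by type, e.g. Polygon vs MultiPolygon
# (sketch; assumes df_int_aea still carries its 'geometry' column)
print(df_int_aea.geom_type.value_counts())
```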
df_int_aea.head(n=3).values[2][0]
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
There are curious cases of intersections which form polygons. For example, a Free State police precinct, 'dewetsdorp', with a murder count of 1 (yet a high rate of stock theft: 52 in 2014), intersects SAL 4990011 (part of SP Mangaung NU) in two lines:
geo_sal_reind[geo_sal_reind.sal_index == 28532].geometry.values[0]
geo_pol_reind[geo_pol_reind.pp_index == 358].geometry.values[0]

a = geo_pol_reind[geo_pol_reind.pp_index == 358].geometry.values[0]
b = geo_sal_reind[geo_sal_reind.sal_index == 28532].geometry.values[0]
c = [geo_pol_reind[geo_pol_reind.pp_index == 358].geometry.values[0],
     geo_sal_reind[geo_sal_reind.sal_index == 28532].geometry.values[0]]

from shapely.ops import cascaded_union
cascaded_union(c)
cascaded_union(b)

geo_sal_reind[geo_sal_reind.sal_index == 28532]

df_int_aea.to_file('data/pp_int_intersections.shp')

# When reading from a file:
import pandas as pd
df_int_aea = pd.read_csv('data/pp_int_intersections.csv')

# when reading from file, a column 'Unnamed' is added. Needs to be removed.
cols = [c for c in df_int_aea.columns if c.lower()[:7] != 'unnamed']
df_int_aea = df_int_aea[cols]
df_int_aea.head(n=2)

data_prov = df_int_aea[['PR_NAME', 'province', 'murd_est_per_int']]
data_prov.groupby('province')['murd_est_per_int'].sum()
data_prov.groupby('PR_NAME')['murd_est_per_int'].sum()

# check over small areas - sum of all the crimes should be 17482
pom = {}
for ind, row in df_inc_aea.iterrows():
    pom[row['index_SAL']] = row['murd_est_per_sal']
s = 0
for key in pom:
    s = s + pom[key]
print(s)
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Measuring the error of the 'CPS' estimate

Computing the lower (LB) and upper (UB) bounds, wherever possible, is done the following way:

- UB: base the calculation of the population per PP on all SALs included entirely within the PP. If not possible, set to NaN.
- LB: find all SALs intersecting a given PP, but base the PP population estimation on the population of the entire SAL, not the population of the intersection.

As a result, each intersection will have a triplet of values associated with it: (LB, actual estimate, UB/NaN). The bounds are not additive; that is, the estimates apply only at the level of the SAL area, and will not be maintained when summed over, e.g., SP or MN.

For modifying/selecting entries for bound estimation, we discard the last 4 columns with precomputed values (see the sketch and the cells below).
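A condensed restatement of that selection logic, using the column names from this notebook (a sketch only; the actual construction, including the recomputed population fractions, is in the cells below):

```python
# UB: keep only intersections where the SAL lies entirely inside the PP
ub_rows = df_int[df_int.area_inter / df_int.area_sal == 1].copy()

# LB: keep every intersecting SAL, but credit the PP with the whole SAL population
lb_rows = df_int.copy()
lb_rows['popu_inter'] = lb_rows['popu_sal']
```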
df_int = df_int_aea.ix[:, :20]

# this function adds the remaining columns, calculates fractions etc
def compute_final_col_bounds(df_aea):
    # recalculate pop frac per PP
    temp = df_aea.groupby(by=['index_PP'])['popu_inter'].sum().reset_index()
    data_with_population = pd.merge(df_aea, temp, on='index_PP', how='outer')\
        .rename(columns={'popu_inter_y': 'popu_frac_per_pp', 'popu_inter_x': 'popu_inter'})
    data_with_population['murd_est_per_int'] = data_with_population['popu_inter'] \
        / data_with_population['popu_frac_per_pp'] * data_with_population['murd_cnt']
    data_mur_per_int = data_with_population.groupby(by=['index_SAL'])['murd_est_per_int'].sum().reset_index()
    data_mur_per_sal = data_mur_per_int.rename(columns={'murd_est_per_int': 'murd_est_per_sal'})
    data_with_population['ratio_per_int'] = data_with_population['popu_inter'] \
        / data_with_population['popu_frac_per_pp']
    data_complete = pd.merge(data_with_population, data_mur_per_sal, on='index_SAL', how='outer')
    # .rename(columns={'id1': 'index_PP', 'id2': 'index_SAL'})
    return data_complete
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Create new tables for the LB and UB
list_lb = []
list_ub = []
for i, entry in df_int.iterrows():  # f_inc_aea:
    if (entry.area_inter / entry.area_sal == 1):  # select those included 'completely'
        list_ub.append(entry)
    entry.popu_inter = entry.popu_sal  # this is actually already true for the above if() case
    list_lb.append(entry)

df_int_aea_ub_p = gpd.GeoDataFrame(list_ub)
df_int_aea_lb_p = gpd.GeoDataFrame(list_lb)

df_int_aea_lb = compute_final_col_bounds(df_int_aea_lb_p)\
    .rename(columns={'murd_est_per_int': 'murd_est_per_int_lb',
                     'ratio_per_int': 'ratio_per_int_lb',
                     'murd_est_per_sal': 'murd_est_per_sal_lb'})  # complete

df_int_aea_ub = compute_final_col_bounds(df_int_aea_ub_p)\
    .rename(columns={'murd_est_per_int': 'murd_est_per_int_ub',
                     'ratio_per_int': 'ratio_per_int_ub',
                     'murd_est_per_sal': 'murd_est_per_sal_ub'})

# check if numbers add up per province level (invariant for inclusion):
data_prov = df_int_aea_ub[['PR_NAME', 'province', 'murd_est_per_int_ub']]
data_prov.groupby('province')['murd_est_per_int_ub'].sum()

temp_ub = df_int_aea_ub.groupby(by=['SP_CODE'])['murd_est_per_int_ub'].sum().reset_index()
temp_lb = df_int_aea_lb.groupby(by=['SP_CODE'])['murd_est_per_int_lb'].sum().reset_index()
temp_est = df_int_aea.groupby(by=['SP_CODE'])['murd_est_per_int'].sum().reset_index()

temp = pd.merge(temp_lb, temp_est, on='SP_CODE', how='outer')
df_bounds = pd.merge(temp, temp_ub, on='SP_CODE', how='outer')
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
At the level of SP (and probably others), some bounds are inverted: UB < LB (2,242 out of 21,589).
#mn_bounds_def = mn_bounds[~mn_bounds.UB_murder.isnull()]
df_inv_bounds = df_bounds[df_bounds.murd_est_per_int_ub < df_bounds.murd_est_per_int_lb]
df_inv_bounds.tail()

temp_ub = df_int_aea_ub.groupby(by=['SAL_CODE'])['murd_est_per_int_ub'].sum().reset_index()
temp_lb = df_int_aea_lb.groupby(by=['SAL_CODE'])['murd_est_per_int_lb'].sum().reset_index()
temp_est = df_int_aea.groupby(by=['SAL_CODE'])['murd_est_per_int'].sum().reset_index()
# .rename(columns={'popu_inter_y':'popu_frac_per_pp', 'popu_inter_x':'popu_inter'})

temp = pd.merge(temp_lb, temp_est, on='SAL_CODE', how='outer')
df_bounds = pd.merge(temp, temp_ub, on='SAL_CODE', how='outer')

mn_names_set = set(df_int_aea_lb.MN_NAME)
mn_names = []
for s in mn_names_set:
    mn_names.append(s)

df_bounds.head(n=2)
df_bound_nonan = df_bounds[~df_bounds.murd_est_per_int_ub.isnull() & df_bounds.murd_est_per_int > 0].sort(['murd_est_per_int'])
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Plotting the lower and upper bounds:
import warnings warnings.filterwarnings('ignore') import mpld3 from mpld3 import plugins from mpld3.utils import get_id #import numpy as np import collections from mpld3 import enable_notebook enable_notebook() def make_labels_points(dataf): L = len(dataf) x = np.array(dataf['murd_est_per_int_lb']) y = np.array(dataf['murd_est_per_int_ub']) z = np.array(dataf['murd_est_per_int']) l = np.array(dataf['SAL_CODE']) d = y-x # error s = " " sc = ", err: " seq = [] seqc = [] t = [seq.append(s.join((str(l[i]), str(z[i])))) for i in range(L)] t = [seqc.append(sc.join((seq[i], str(d[i])))) for i in range(L)] return seqc, L def make_scatter(dataf, outname, outtitle): l = np.array(dataf['SAL_CODE']) x = np.array(dataf['murd_est_per_int_lb']) y = np.array(dataf['murd_est_per_int_ub']) z = np.array(dataf['murd_est_per_int']) d = y-x # error # build a rectangle in axes coords left, width = .15, .7 bottom, height = .09, .75 right = left + width top = bottom + height fig, ax = plt.subplots(subplot_kw=dict(axisbg='#EEEEEE')) N=len(dataf) scatter = ax.scatter(range(1,N+1),z,c=100*d,s=1000*d,alpha=0.3, cmap=plt.cm.jet, color='blue', label='...') ax.set_title(outtitle, size=15) seqc, L = make_labels_points(dataf) labels12 = ['(SAL id, est: {0}'.format(seqc[i]) for i in range(L)] tooltip = plugins.PointLabelTooltip(scatter, labels=labels12) plugins.connect(fig, tooltip) ax.set_xlabel('SAL') ax.set_ylabel('murder rate', labelpad = 20) html_str = mpld3.fig_to_html(fig) Html_file= open(outname,"w") Html_file.write(html_str) Html_file.close() make_scatter(df_bound_nonan.head(n=8000), 'bounds.html', "SAL estimation bounds") df_bound_nonan[df_bound_nonan.SAL_CODE==3760001] df_int_aea_ub[df_int_aea_ub.SAL_CODE==3760001] df_int_aea_lb[df_int_aea_lb.SAL_CODE==3760001] df_int_aea_lb[df_int_aea_lb.index_PP==551] df_int_aea[df_int_aea.index_PP==551]
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Add gender data:
full_pop = pd.read_csv('data/sal_pop.csv')

def get_ratio(i, full_pop):
    try:
        x = int(full_pop.iloc[i,].Female) / (int(full_pop.iloc[i,].Male) + int(full_pop.iloc[i,].Female))
    except:
        x = 0
    return x

wom_ratio = [get_ratio(i, full_pop) for i in range(len(full_pop))]
full_pop['wom_ratio'] = wom_ratio
full_pop.drop('Male', axis=1, inplace=True)

data_full = pd.merge(df_int_aea, full_pop, on='SAL_CODE')
data_full.head()
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
WARDS:
wardsShp =gpd.GeoDataFrame.from_file('../maps/data/Wards2011_aea.shp') wardsShp.head(n=2) za_province = gpd.GeoDataFrame.from_file('../south_africa_adm1.shp')#.set_index('id') %matplotlib inline #import matplotlib.pyplot as plt from matplotlib.collections import PatchCollection from descartes import PolygonPatch import fiona from shapely.geometry import Polygon, MultiPolygon, shape # We can extract the London Borough boundaries by filtering on the AREA_CODE key mp = MultiPolygon( [shape(pol['geometry']) for pol in fiona.open('../south_africa_adm1.shp')]) mpW = MultiPolygon( [shape(pol['geometry']) for pol in fiona.open('../wards_delimitation/Wards_demarc/Wards2011.shp')]) mpS = MultiPolygon( [shape(pol['geometry']) for pol in fiona.open('shapefiles/oryginal/SAL_SA_2013.shp')]) # define map extent lllon = 21 lllat = -18 urlon = 34 urlat = -8 # set up Basemap instance m = Basemap( projection = 'merc', llcrnrlon = lllon, llcrnrlat = lllat, urcrnrlon = urlon, urcrnrlat = urlat, resolution='h') # We can now do GIS-ish operations on each borough polygon! # we could randomize this by dumping the polygons into a list and shuffling it # or we could define a random colour using fc=np.random.rand(3,) # available colour maps are here: http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps cm = plt.get_cmap('RdBu') num_colours = len(mpW) fig = plt.figure(figsize=(16, 16)) ax = fig.add_subplot(111) minx, miny, maxx, maxy = mp.bounds w, h = maxx - minx, maxy - miny ax.set_xlim(minx - 0.2 * w, maxx + 0.2 * w) ax.set_ylim(miny - 0.2 * h, maxy + 0.2 * h) ax.set_aspect(1) patches = [] for idx, p in enumerate(mp): #colour = cm(1. * idx / num_colours) patches.append(PolygonPatch(p, alpha=1., zorder=1)) for idx, p in enumerate(mpW): colour = cm(1. * idx / num_colours) patches.append(PolygonPatch(p, ec='#4C4C4C', alpha=1., zorder=1)) for idx, p in enumerate(mpS): colour = cm(1. * idx / num_colours) patches.append(PolygonPatch(p, ec='#4C4C4C', alpha=1., zorder=1)) ax.add_collection(PatchCollection(patches, match_original=True)) ax.set_xticks([]) ax.set_yticks([]) plt.title("SAL on Wards") #plt.savefig('data/london_from_shp.png', alpha=True, dpi=300) plt.show() # define map extent lllon = 15 lllat = -35 urlon = 33 urlat = -22 # set up Basemap instance m = Basemap( projection = 'merc', llcrnrlon = lllon, llcrnrlat = lllat, urcrnrlon = urlon, urcrnrlat = urlat, resolution='h') fig = plt.figure(figsize=(16, 16)) m.drawmapboundary(fill_color=None, linewidth=0) m.drawcoastlines(color='#4C4C4C', linewidth=0.5) m.drawcountries() m.fillcontinents(color='#F2E6DB',lake_color='#DDF2FD') #m.readshapefile('../wards_delimitation/Wards_demarc/Wards2011.sbh','Wards',drawbounds=False) m.readshapefile('../maps/data/test','wards',drawbounds=False) from itertools import chain shp = fiona.open('../maps/data/test.shp') bds = shp.bounds shp.close() extra = 0.01 ll = (bds[0], bds[1]) ur = (bds[2], bds[3]) coords = list(chain(ll, ur)) w, h = coords[2] - coords[0], coords[3] - coords[1] m = Basemap( projection='tmerc', lon_0=24.000, lat_0=-24.0000, ellps = 'WGS84', llcrnrlon=coords[0] - extra * w, llcrnrlat=coords[1] - extra + 0.01 * h, urcrnrlon=coords[2] + extra * w, urcrnrlat=coords[3] + extra + 0.01 * h, lat_ts=0, resolution='i', suppress_ticks=True) m.readshapefile( '../maps/data/test', 'wards', color='none', zorder=2)
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Clean up the UTF encoding problems
from unidecode import unidecode

with fiona.open('../maps/data/wards_sel.shp', 'r') as source:
    # Create an output shapefile with the same schema,
    # coordinate systems. ISO-8859-1 encoding.
    with fiona.open('../maps/data/wards_sel_cleaned.shp', 'w', **source.meta) as sink:
        # Identify all the str type properties.
        str_prop_keys = [k for k, v in sink.schema['properties'].items() if v.startswith('str')]
        for rec in source:
            # Transliterate and update each of the str properties.
            for key in str_prop_keys:
                val = rec['properties'][key]
                if val:
                    rec['properties'][key] = unidecode(val)
            # Write out the transformed record.
            sink.write(rec)

salSHP = 'shapefiles/updated/sal_population_4326.shp'
warSHP = '../wards_delimitation/Wards_demarc/Wards2011.shp'
geo_war = gpd.GeoDataFrame.from_file(warSHP)
geo_sal = gpd.GeoDataFrame.from_file(salSHP)

import pyepsg
pyepsg.get(geo_war.crs['init'].split(':')[1])
pyepsg.get(geo_sal.crs['init'].split(':')[1])
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
To plot the data on a folium map, we need to convert to a geographic coordinate system with the WGS84 datum (EPSG: 4326). We also need to create a GeoJSON object out of the GeoDataFrame. And, as it turns out (after many hours of tripping over the problem), we also need to SIMPLIFY the geometries, because they are too big for web maps.
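A minimal sketch of that pipeline, before the exploratory cell below. It assumes a GeoDataFrame `gdf` in a projected CRS; the simplification tolerance and map centre are arbitrary examples, and it uses the current folium API (`folium.GeoJson`), which differs from the older `geo_json`/`create_map` calls used in this notebook:

```python
import folium

# reproject to WGS84 (EPSG:4326) so the web map can place the geometries
gdf_wgs84 = gdf.to_crs(epsg=4326)

# simplify the geometries so the resulting GeoJSON stays small enough for a web map
gdf_wgs84['geometry'] = gdf_wgs84.simplify(tolerance=0.001)

# serialize to GeoJSON and overlay it on a folium map (centre is roughly South Africa)
geojson = gdf_wgs84.to_json()
m = folium.Map(location=[-29.0, 24.0], zoom_start=6, tiles='cartodbpositron')
folium.GeoJson(geojson).add_to(m)
m.save('map_sketch.html')
```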
warSHP = '../maps/data/Wards2011.shp' geo_war = gpd.GeoDataFrame.from_file(warSHP) #geo_sal = gpd.GeoDataFrame.from_file(salSHP_upd) geo_war.head(n=2) geo_war_sub = geo_war.iloc[:,[2,3,7,8,9]].reset_index().head(n=2) #g = geo_war_sub.simplify(0.05, preserve_topology=False) geo_war_sub.head(n=3) geo_war_sub.to_file('../maps/data/wards_sel.shp') geo_war_sub['geometry'].replace(g,inplace=True) #data['index_rank'].replace(index_dict, inplace=True) geo_war_sub_sim.head(n=2) salSHP = 'shapefiles/updated/sal_population.shp' geo_sal = gpd.GeoDataFrame.from_file(salSHP) #geo_sal.head(n=2) geo_sal_sub = geo_sal.iloc[:,[7,11,15,16,20,23]].reset_index()#.head() geo_sal_sub.to_file('../maps/data/sal_sub.shp') #gjsonSal = geo_sal.to_crs(epsg='4326').to_json()# no need to convert, as it already is in 4326 #gjsonSal = geo_sal.to_json() #gjsonWar = geo_war.to_json() gj = g.to_json() import folium #import pandas as pd lllon = 15 lllat = -35 urlon = 33 urlat = -22 #state_geo = r'shapefiles/updated/sal_population.json' #ward_path = r'../maps/data/test.geojson' #state_geo = r'shapefiles/oryginal/SAL_SA_2013.json' state_geo = r'../maps/data/sal.json' #state_geo = r'temp_1E-7.topojson' #Let Folium determine the scale map = folium.Map(location=[(lllat+urlat)/2, (lllon+urlon)/2], tiles='Mapbox Bright',zoom_start=6) #, tiles='cartodbpositron') #map.geo_json(geo_path=state_geo) #map.geo_json(geo_path=state_geoW) #map.geo_json(geo_path=ward_path) map.create_map(path='test.html') state_geo lllon = 15 lllat = -35 urlon = 33 urlat = -22 import folium #map = folium.Map(location=[-33.9249, 18.4241], zoom_start=10) mapa = folium.Map([(lllat+urlat)/2, (lllon+urlon)/2], zoom_start=7, tiles='cartodbpositron') #pSal = folium.features.GeoJson(gjsonSal) #pWae = folium.features.GeoJson(gjsonWar) #mapa.add_children(pSal) #mapa.add_children(pWar) #mapa.geo_json(gj) #test = folium.folium.Map.geo_json(gj) #ice_map.geo_json(geo_path=topo_path, topojson='objects.antarctic_ice_shelf') #mapa.add_children(test) mapa.create_map(path='test.html') testshp = '../maps/data/test.shp' geo_test = gpd.GeoDataFrame.from_file(testshp) import pyepsg pyepsg.get(geo_test.crs['init'].split(':')[1]) gjson = geo_test.to_json() import folium geo_path = r'../maps/data/test.json' map_osm = folium.Map(location=[-24.5236, 24.6750],zoom_start=6) map_osm.geo_json(geo_path=geo_path) map_osm.create_map(path='osm.html')
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Analytics based on intersections:
def find_intersections(o): from collections import defaultdict paired_ind = [o.pp_index, o.sal_index] d_over_ind = defaultdict(list) # creating a dictionary that has prescints as keys and associated small areas as values for i in range(len(paired_ind[0].values)): if not paired_ind[0].values[i]==paired_ind[1].values[i]: # it shows itself as intersection d_over_ind[paired_ind[0].values[i]].append(paired_ind[1].values[i]) # get rid of the pol precincts with no small areas associated to them- not the most efficient way d_temp = {} for l in d_over_ind: if len(d_over_ind[l]): d_temp[l] = d_over_ind[l] return d_temp def calculate_join_indices(g1_reind, g2_reind): out = sjoin(g1_reind, g2_reind, how ="inner", op = "intersects") out.drop('index_right', axis=1, inplace=True) dict_over_ind = find_intersections(out) return dict_over_ind #warSHP = '../maps/data/Wards2011_aea.shp' #geo_war = gpd.GeoDataFrame.from_file(warSHP) #salSHP = 'shapefiles/updated/sal_population_aea.shp' #geo_sal = gpd.GeoDataFrame.from_file(salSHP) #geo_sal = geo_sal.reset_index() #geo_war_sub = geo_war.iloc[:,[2,3,7,8,9]].reset_index()#.head(n=2) out = sjoin(geo_war_sub, geo_sal, how ="inner", op = "intersects") out_sub = out.iloc[:,[2,3,5,6,15,23,24,28]].reset_index().rename(columns={'index':'index_ward','index_right':'index_sal'}) geo_war_sub = geo_war_sub.rename(columns={'index':'index_ward'})#head(n=2) #head(n=2) geo_sal_sub = geo_sal.iloc[:,[5,11,16,17,19,21,24]].reset_index().rename(columns={'index':'index_sal'}) from collections import defaultdict paired_ind = [out_sub.index_ward, out_sub.index_sal] dict_temp = defaultdict(list) # creating a dictionary that has prescints as keys and associated small areas as values for i in range(len(paired_ind[0].values)): if not paired_ind[0].values[i]==paired_ind[1].values[i]: # it shows itself as intersection dict_temp[paired_ind[0].values[i]].append(paired_ind[1].values[i]) dict_int_ward = {} for l in dict_temp: if len(dict_temp[l]): dict_int_ward[l] = dict_temp[l] #dict_int_ward def calculate_join_ward_sal(dict_over_ind, g1_reind, g2_reind): area_total = 0 data_aggreg = [] # note to self: make sure to import shapely Polygon for index1, row in g1_reind.iterrows(): #print(index1, row.index_ward) try: index1 = row.index_ward sals_found = dict_over_ind[index1] for sal in range(len(sals_found)): pom = g2_reind[g2_reind.index_sal == sals_found[sal]]['geometry'] area_int = pom.intersection(row['geometry']).area.values[0] area_sal = pom.values[0].area int_percent = area_int/area_sal #popu_count = g2_reind[g2_reind.sal_index == sals_found[sal]]['PPL_CNT'].values[0] extra_info_col = ['MP_NAME','PR_NAME','SAL_CODE','SP_NAME'] extra_names = g2_reind[g2_reind.index_sal == sals_found[sal]][extra_info_col]#.filter(regex=("NAME")) #extra_names = g2_reind[g2_reind.sal_index == sals_found[sal]][extra_info_col_names]#.filter(regex=("NAME")) data_aggreg.append({'geometry': pom.intersection(row['geometry']).values[0],\ 'id1': index1,'ward_id': row.WARD_ID,'id2': sals_found[sal] ,'area_int': area_int,\ 'area_sal': area_sal,'int_percent': int_percent,\ 'MP_NAME': extra_names.MP_NAME.values[0],\ 'PR_NAME': extra_names.PR_NAME.values[0],'SAL_CODE': extra_names.SAL_CODE.values[0],\ 'SP_NAME': extra_names.SP_NAME.values[0]} ) except: pass cols=['geometry', 'id1','ward_id','id2','area_int','area_sal','int_percent','MP_NAME','PR_NAME','SAL_CODE','SP_NAME'] df_t = gpd.GeoDataFrame(data_aggreg,columns=cols) #df_t.to_file('shapefiles/sal_ward.shp') return df_t from timeit import default_timer as timer start = 
timer() df = calculate_join_ward_sal(dict_int_ward,geo_war_sub, geo_sal_sub) end = timer() print("time: ", end - start) df.head() df.to_csv('df.csv') df_nc = df[df.int_percent<1] #df.groupby(by=['ward_id']).sum() s = df_nc.groupby(by=['PR_NAME','ward_id']) type(s) #There are 4277 wards len(geo_war) # all wards have intersections len(set(df_nc.ward_id)) #84907 SAL areas len(geo_sal_sub) # half of the intersect len(set(df_nc.SAL_CODE))
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
40,515 out of 84,907 SALs intersect ward borders. Let's see whether the intersections generated from PP and SAL fit better.
#trying the intersections geo_int_p = pd.read_csv('data/pp_int_intersections.csv') geo_war_sub.crs #geo_int.head(n=2) geo_int = gpd.GeoDataFrame(geo_int_p, crs=geo_war_sub.crs) #geo_int.head(n=2) cols = [c for c in geo_int.columns if c.lower()[:7] != 'unnamed'] geo_int = geo_int[cols] geo_int.head(n=2) geo_int_sub = geo_int.iloc[:,[1,2,0]].reset_index().rename(columns={'index':'index_int'}) geo_sal_sub.head(n=1) geo_int_sub.geometry.head() geo_war_sub.head(n=2) out = sjoin(geo_war_sub.head(n=1), geo_int_sub, how ="inner", op = "intersects") geo_war_sub.head(n=2) type(geo_int) geo_int.crs test = gpd.GeoDataFrame(pd.read_csv('data/pp_test2.csv')) geo_war_sub.to_csv('auch.csv') test.plot() f,ax = plt.subplots(1) gpd.plotting.plot_multipolygon(ax, df_int.head(n=2).geometry.values[0], linewidth = 0.1, edgecolr='grey') plt.show() df_int.head(n=2).geometry.values[0]
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
**Checklist for submission**

It is extremely important to make sure that:

1. Everything runs as expected (no bugs when running cells);
2. The output from each cell corresponds to its code (don't change any cell's contents without rerunning it afterwards);
3. All outputs are present (don't delete any of the outputs);
4. Fill in all the places that say ` YOUR CODE HERE`, or "**Your answer:** (fill in here)";
5. You should not need to create any new cells in the notebook, but feel free to do it if convenient for you;
6. The notebook contains some hidden metadata which is important during our grading process. **Make sure not to corrupt any of this metadata!** The metadata may be corrupted if you perform an unsuccessful git merge / git pull. It may also be pruned completely if using Google Colab, so watch out for this. Searching for "nbgrader" when opening the notebook in a text editor should take you to the important metadata entries;
7. Fill in your group number and the full names of the members in the cell below;
8. Make sure that you are not running an old version of IPython (we provide you with a cell that checks this, make sure you can run it without errors).

Failing to meet any of these requirements might lead to either a subtraction of POEs (at best) or a request for resubmission (at worst).

We advise the following steps before submission to ensure that requirements 1, 2, and 3 are always met: **Restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All). This might require a bit of time, so plan ahead for this (and possibly use Google Cloud's GPU in HA1 and HA2 for this step). Finally, press the "Save and Checkpoint" button before handing in, to make sure that all your changes are saved to this .ipynb file.

---

Group number and member names:
GROUP = ""
NAME1 = ""
NAME2 = ""
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Make sure you can run the following cell without errors.
import IPython
assert IPython.version_info[0] >= 3, "Your version of IPython is too old, please update it."
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
---

**Home Assignment 3**

This home assignment will focus on reinforcement learning and deep reinforcement learning. The first part will cover value-table reinforcement learning techniques, and the second part will include neural networks as function approximators, i.e. deep reinforcement learning. When handing in this assignment, make sure that you're handing in the correct version, and more importantly, *that you do not clear any output from your cells*. We'll use these outputs to aid us when grading your assignment.

**Task 1: Gridworld**

In this task, you will implement Value Iteration to solve for the optimal policy, $\pi^*$, and the corresponding state value function, $V^*$. The MDP you will work with in this assignment is illustrated in the figure below

![title](./grid_world.png)

The agent starts in one of the squares shown in the above figure, and then proceeds to take actions. The available actions at any time step are: **North, West, South,** and **East**. If an action would make the agent bump into a wall, or one of the black (unreachable) states, it instead does nothing, leaving the agent at the same place it was before.

The reward $R_s^a$ of being in state $s$ and performing action $a$ is zero for all states, regardless of the action taken, with the exception of the green and the red squares. For the green square, the reward is always 1, and for the red square, always -1, regardless of the action.

When the agent is either in the green or the red square, it will be transported to the terminal state in the next time step, regardless of the action taken. The terminal state is shown as the white square with the "T" inside.

**State representation**

The notation used to define the states is illustrated in the table below

| $S_0$    | $S_1$    | $S_2$    | $S_3$    | $S_4$    |          |
|----------|----------|----------|----------|----------|----------|
| $S_5$    | $S_6$    | $S_7$    | $S_8$    | $S_9$    |          |
| $S_{10}$ | $S_{11}$ | $S_{12}$ | $S_{13}$ | $S_{14}$ | $S_{15}$ |

where $S_{10}$ corresponds to the initial state of the environment, $S_4$ and $S_9$ to the green and red states of the environment, and $S_{15}$ to the terminal state.

**Task 1.a: Solve for $V^*(s)$ and $Q^*(s,a)$**

For this task all transition probabilities are assumed to be 1 (that is, trying to move in a certain direction will definitely move the agent in the chosen direction), and the discount factor is .9, i.e. $\gamma=.9$.

* Solve for $V^*(S_{10})$

**Your answer:** (fill in here)

* Solve $Q^*(S_{10},a)$ for all actions

**Your answer:** (fill in here)

**Task 1.b: Write a mathematical expression relating $V^\pi(s)$ to $Q^\pi(s,a)$ and $\pi(a|s)$**

**Your answer:** (fill in here)

**Task 1.c: Value Iteration**

For this task, the transitions are no longer deterministic. Instead, there is a 0.2 probability that the agent will try to travel in an orthogonal direction of the chosen action (0.1 probability for each of the two orthogonal directions). Note that the Markov decision process is still known and does not have to be learned from experience. Your task is to implement value iteration and solve for the

* optimal greedy policy $\pi^*(s)$
* $V^*(s)$

**The value iteration algorithm**

Value iteration is an iterative algorithm used to compute the optimal value function $V^*(s)$. Each iteration starts with a guess of what the value function is and then uses the Bellman equations to improve this guess iteratively. We can describe one iteration of the algorithm as

$
\textbf{For} \quad s \in {\cal S}:\\
\quad \textbf{For} \quad a \in {\cal A}: \\
\qquad Q(s,a) = \sum_{s'\in S} T(s,a,s')\left(R(s,a,s') + \gamma V(s') \right)\\
\quad V(s) = \underset{a}{\text{max}}~ Q(s,a)
$

where $T(s, a, s')={\mathrm Pr}[S'=s'\big|S=s,A=a]$ is the probability of transitioning from state $s$ to $s'$ given action $a$.

**The MDP Python class**

The Markov Decision Process you will work with is defined in `gridworld_mpd.py`. In the implementation, the actions are represented by integers as: North = 0, West = 1, South = 2, and East = 3. To interact with the MDP, you need to instantiate an object as:

```python
mdp = GridWorldMDP()
```

At your disposal there are a number of instance-functions implemented for you, and presented below:
from gridworld_mdp import *
import numpy as np

help(GridWorldMDP.get_states)

# The constructor
help(GridWorldMDP.__init__)

help(GridWorldMDP.get_actions)
help(GridWorldMDP.state_transition_func)
help(GridWorldMDP.reward_function)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
We also provide two helper functions for visualizing the value function and the policies you obtain:
# Function for printing a policy pi
def print_policy(pi):
    print('Policy for non-terminal states: ')
    indencies = np.arange(1, 16)
    txt = '| '
    hor_delimiter = '---------------------'
    print(hor_delimiter)
    for a, i in zip(pi, indencies):
        txt += mdp.act_to_char_dict[a] + ' | '
        if i % 5 == 0:
            print(txt + '\n' + hor_delimiter)
            txt = '| '
    print(' ---')
    print('Policy for terminal state: |', mdp.act_to_char_dict[pi[15]], '|')
    print(' ---')

# Function for printing a table of the value function
def print_value_table(values, num_iterations=None):
    # assumes the terminal state's value is the last entry of `values`
    terminal_value = values[-1]
    if num_iterations:
        print('Values for non-terminal states after: ', num_iterations, 'iterations \n', np.reshape(values[:-1], [3, 5]), '\n')
        print('Value for terminal state:', terminal_value, '\n')
    else:
        print('Values for non-terminal states: \n', np.reshape(values[:-1], [3, 5]))
        print('Value for terminal state:', terminal_value, '\n')
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Now it's time for you to implement your own version of value iteration to solve for the greedy policy and $V^*(s)$.
def value_iteration(gamma, mdp):
    V = np.zeros([16])      # state value table
    Q = np.zeros([16, 4])   # state action value table
    pi = np.zeros([16])     # greedy policy table

    # Complete this function

    return V, pi
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Run your implementation for the deterministic version of our MDP. As a sanity check, compare your analytical solutions with the output from your implementation.
mdp = GridWorldMDP(trans_prob=1.)
v, pi = value_iteration(.9, mdp)
print_value_table(v)
print_policy(pi)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Once your implementation has passed the sanity check, run it for the stochastic case, where the probability of an action succeeding is 0.8, and 0.2 of moving the agent in a direction orthogonal to the intended one. Use $\gamma = .99$.
# Run for stochastic MDP, gamma = .99
mdp = GridWorldMDP()
v, pi = value_iteration(.99, mdp)
print_value_table(v)
print_policy(pi)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Does the policy that the algorithm found look reasonable? For instance, what's the policy for state $S_8$? Is that a good idea? Why?

**Your answer:** (fill in here)

Test your implementation using this function.
test_value_iteration(v, pi)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Run value iteration for the same scenario as above, but now with $\gamma=.9$
# Run for stochastic MDP, gamma = .9
mdp = GridWorldMDP()
v, pi = value_iteration(.9, mdp)
print_value_table(v)
print_policy(pi)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Do you notice any difference between the greedy policies for the two different discount factors? If so, what's the difference, and why do you think this happened?

**Your answer:** (fill in here)

**Task 2: Q-learning**

In the previous task, you solved for $V^*(s)$ and the greedy policy $\pi^*(s)$, with the entire model of the MDP being available to you. This is however not very practical, since for most problems we are trying to solve, the model is not known, and estimating the model is quite often a very tedious process which often also requires a lot of simplifications.

**Q-learning algorithm**

$
\text{Initialize}~Q(s,a), ~ \forall~ s \in {\cal S},~ a~\in {\cal A} \\
\textbf{Repeat}~\text{(for each episode):}\\
\quad \text{Initialize}~s\\
\qquad \textbf{Repeat}~\text{(for each step in episode):}\\
\qquad\quad \text{Choose $a$ from $s$ using the policy derived from $Q$ (e.g., $\epsilon$-greedy)}\\
\qquad\quad \text{Take action $a$, observe $r$, $s'$}\\
\qquad\quad Q(s,a) \leftarrow Q(s,a) + \alpha \left(r + \gamma~\underset{a}{\text{max}}~Q(s',a) - Q(s,a) \right) \\
\qquad\quad s \leftarrow s' \\
\qquad \text{Until $s$ is terminal}
$

**Task 2.1: Model-free control**

Why is it that Q-learning does not require a model of the MDP to solve for it?

**Your answer:** (fill in here)

**Task 2.2: Implement an $\epsilon$-greedy policy**

The goal of the Q-learning algorithm is to find the optimal policy $\pi^*$, by estimating the state action value function under the optimal policy, i.e. $Q^*(s, a)$. From $Q^*(s,a)$, the agent can follow $\pi^*$ by choosing the action that yields the largest expected value for each state, i.e. $\underset{a}{\text{argmax}}~Q^*(s, a)$.

However, when training a Q-learning model, the agent typically follows another policy to explore the environment. In reinforcement learning this is known as off-policy learning. Your task is to implement a widely popular exploration policy, known as the $\epsilon$-greedy policy, in the cell below.

An $\epsilon$-greedy policy should:
* with probability $\epsilon$ take a uniformly-random action;
* otherwise choose the best action according to the estimated state action values.
def eps_greedy_policy(q_values, eps): ''' Creates an epsilon-greedy policy :param q_values: set of Q-values of shape (num actions,) :param eps: probability of taking a uniform random action :return: policy of shape (num actions,) ''' # Complete this function return policy
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
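One possible way to complete the stub above (a sketch, not the official solution) is to spread probability $\epsilon$ uniformly over all actions and put the remaining $1-\epsilon$ mass on the greedy action; ties are broken here by `np.argmax`:

```python
import numpy as np

def eps_greedy_policy(q_values, eps):
    '''
    Creates an epsilon-greedy policy
    :param q_values: set of Q-values of shape (num actions,)
    :param eps: probability of taking a uniform random action
    :return: policy of shape (num actions,)
    '''
    num_actions = len(q_values)
    # with probability eps, pick uniformly among all actions
    policy = np.ones(num_actions) * eps / num_actions
    # put the remaining probability mass on the greedy action
    policy[np.argmax(q_values)] += 1. - eps
    return policy
```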
Run the cell below to test your implementation
mdp = GridWorldMDP() # Test shape of output actions = mdp.get_actions() for eps in (0, 1): foo = np.zeros([len(actions)]) foo[0] = 1. eps_greedy = eps_greedy_policy(foo, eps) assert foo.shape == eps_greedy.shape, "wrong shape of output" actions = [i for i in range(10)] for eps in (0, 1): foo = np.zeros([len(actions)]) foo[0] = 1. eps_greedy = eps_greedy_policy(foo, eps) assert foo.shape == eps_greedy.shape, "wrong shape of output" # Test for greedy actions for a in actions: foo = np.zeros([len(actions)]) foo[a] = 1. eps_greedy = eps_greedy_policy(foo, 0) assert np.array_equal(foo, eps_greedy), "policy is not greedy" # Test for uniform distribution, when eps=1 eps_greedy = eps_greedy_policy(foo, 1) assert all(p==eps_greedy[0] for p in eps_greedy) and np.sum(eps_greedy)==1, \ "policy does not return a uniform distribution for eps=1" print('Test passed, good job!')
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Task 2.2: Implement the Q-learning algorithmNow it's time to actually implement the Q-learning algorithm. Unlike value iteration, where there is no direct interaction with the environment, the Q-learning algorithm builds up its estimates by interacting with and exploring the environment. To enable the agent to explore the environment, a set of helper functions is provided:
help(GridWorldMDP.reset) help(GridWorldMDP.step)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Implement your version of Q-learning in the cell below. **Hint:** It might be useful to study the pseudocode provided above.
def q_learning(eps, gamma): Q = np.zeros([16, 4]) # state action value table pi = np.zeros([16]) # greedy policy table alpha = .01 # Complete this function return pi, Q
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
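A minimal sketch of one way to fill in the stub above. The exact return values of `GridWorldMDP.reset` and `GridWorldMDP.step` are documented by the `help()` calls earlier; here `step` is assumed to return `(next_state, reward, terminal)`, and the episode count of 5000 is an illustrative choice:

```python
import numpy as np

def q_learning(eps, gamma, num_episodes=5000):
    mdp = GridWorldMDP()
    Q = np.zeros([16, 4])  # state action value table
    alpha = .01
    for _ in range(num_episodes):
        s = mdp.reset()                        # start a new episode
        terminal = False
        while not terminal:
            policy = eps_greedy_policy(Q[s], eps)
            a = np.random.choice(4, p=policy)
            s_next, r, terminal = mdp.step(a)  # assumed return signature
            # TD update towards r + gamma * max_a' Q(s', a')
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    pi = np.argmax(Q, axis=1)  # greedy policy w.r.t. the learned Q-values
    return pi, Q
```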
Run Q-learning with $\epsilon = 1$ for the MDP with $\gamma=0.99$
pi, Q = q_learning(1, .99) print_policy(pi)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Test your implementation by running the cell below
test_q_learning(Q)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Run Q-learning with $\epsilon=0$
pi, Q = q_learning(0, .99) print_policy(pi)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
You ran your implementation with $\epsilon$ set to both 0 and 1. What are the results, and your conclusions? **Your answer:** (fill in here) Task 3: Deep Double Q-learning (DDQN)For this task, you will implement a DDQN (double deep Q-learning network) to solve one of the problems of the OpenAI gym. Before we get into the details of this type of network, let's first review the simpler DQN (deep Q-learning network) version. Deep Q NetworksAs we saw in the video lectures, using a neural network as a state action value approximator is a great idea. However, if one tries to use this approach with Q-learning, it's very likely that the optimization will be very unstable. To remedy this, two main ideas are used. First, we use experience replay, in order to decorrelate the experience samples we obtain when exploring the environment. Second, we use two networks instead of one, in order to fix the optimization targets. That is, for a given minibatch sampled from the replay buffer, we'll optimize the weights of only one of the networks (commonly denoted as the "online" network), using the gradients w.r.t. a loss function. This loss function is computed as the mean squared error between the current action values, computed according to the **online** network, and the temporal difference (TD) targets, computed using the other, **fixed network** (which we'll refer to as the "target" network).That is, the loss function is $$ L(\theta) = \frac{1}{N}\sum_{i=1}^N \left(Q(s_i,a_i; \theta) - Y_i\right)^2~,$$where $N$ is the number of samples in your minibatch, $Q(s,a;\theta)$ is the state action value estimate, according to the online network (with parameters $\theta$), and $Y_i$ is the TD target, computed as$$ Y_i = r_i + \gamma ~\underset{a}{\text{max}}~Q(s_i', a; \theta^-)~, $$where $Q(s', a;\theta^-)$ is the action value estimate, according to the fixed network (with parameters $\theta^-$).Finally, so that the fixed network's parameters are also updated, we periodically change the roles of the networks, fixing the online one, and training the other. Double Deep Q NetworksThe idea explained above works well in practice, but it was later discovered that this approach is very prone to overestimating the state action values. The main reason for this is that the max operator, used to select the greedy action when computing the TD target, uses the same values both to select and to evaluate an action (this tends to prefer overestimated actions). In order to prevent this, we can decouple the selection from the evaluation, which is the idea behind DDQN. More concretely, the TD target for a DDQN is now $$ Y_i = r_i + \gamma Q(s_i', \underset{a}{\text{argmax}}Q(s_i',a;\theta); \theta^-)~. $$Hence, we're using the **online** network to select which action is best, but we use the **fixed** network to evaluate the state action value for that chosen action in the next state. This is what makes DDQN not overestimate (as much) the state action values, which in turn helps us to train faster and obtain better policies. EnvironmentThe problem you will solve for this task is the inverted pendulum problem. On [OpenAI's environment documentation](https://gym.openai.com/envs/CartPole-v0), the following description is provided:*A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. 
A reward of +1 is provided for every time step that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center.*![title](./cartpole.jpg) ImplementationWe'll solve this task using a DDQN. Most of the code is provided for you, in the file **ddqn_model.py**. This file contains the implementation of a neural network, which is described in the table below (feel free to experiment with different architectures).

|Layer 1: units, activation | Layer 2: units, activation | Layer 3: units, activation | Cost function |
|---------------------------|----------------------------|----------------------------|---------------|
| 100, ReLU | 60, ReLU | number of actions, linear | MSE |

The only missing part of the code is the function that computes the TD targets for each minibatch of samples. Task 3.1: Calculate TD-targetFor this task, you will calculate the temporal difference target used for the loss in the double Q-learning algorithm. Your implementation should follow precisely the equation defined above for the TD target of DDQNs, with one exception: when $s'$ is terminal, the TD target for it should simply be $ Y_i = r_i$. Why is this necessary?**Your answer**: (fill in here)Implement your function in the following cell.
def calculate_td_targets(q1_batch, q2_batch, r_batch, t_batch, gamma=.99): ''' Calculates the TD-target used for the loss : param q1_batch: Batch of Q(s', a) from online network, shape (N, num actions) : param q2_batch: Batch of Q(s', a) from target network, shape (N, num actions) : param r_batch: Batch of rewards, shape (N, 1) : param t_batch: Batch of booleans indicating if state, s' is terminal, shape (N, 1) : return: TD-target, shape (N, 1) ''' # Complete this function return Y
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
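One way to complete the stub, following the DDQN target equation above (a sketch that relies on the shapes stated in the docstring, and assumes `t_batch` can be cast to float so that terminal transitions reduce to $Y_i = r_i$):

```python
import numpy as np

def calculate_td_targets(q1_batch, q2_batch, r_batch, t_batch, gamma=.99):
    # action selected by the online network for each sample in the batch
    best_actions = np.argmax(q1_batch, axis=1)
    # ...evaluated with the fixed (target) network
    q_eval = q2_batch[np.arange(q2_batch.shape[0]), best_actions].reshape(-1, 1)
    # zero out the bootstrap term for terminal s', so that Y_i = r_i there
    Y = r_batch + gamma * q_eval * (1. - t_batch.astype(np.float32))
    return Y
```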
Test your implementation by trying to solve the reinforcement learning problem for the CartPole environment. The following cell defines the `train_loop_ddqn` function, which will be called further down,
# Import dependencies import numpy as np import gym from keras.utils.np_utils import to_categorical as one_hot from collections import namedtuple from dqn_model import DoubleQLearningModel, ExperienceReplay def train_loop_ddqn(model, env, num_episodes, batch_size=64, gamma=.94): Transition = namedtuple("Transition", ["s", "a", "r", "next_s", "t"]) eps = 1. eps_end = .1 eps_decay = .001 R_buffer = [] R_avg = [] for i in range(num_episodes): state = env.reset() #reset to initial state state = np.expand_dims(state, axis=0)/2 terminal = False # reset terminal flag ep_reward = 0 q_buffer = [] steps = 0 while not terminal: env.render() # comment this line out if ou don't want to render the environment steps += 1 q_values = model.get_q_values(state) q_buffer.append(q_values) policy = eps_greedy_policy(q_values.squeeze(), eps) action = np.random.choice(num_actions, p=policy) # sample action from epsilon-greedy policy new_state, reward, terminal, _ = env.step(action) # take one step in the evironment new_state = np.expand_dims(new_state, axis=0)/2 # only use the terminal flag for ending the episode and not for training # if the flag is set due to that the maximum amount of steps is reached t_to_buffer = terminal if not steps == 200 else False # store data to replay buffer replay_buffer.add(Transition(s=state, a=action, r=reward, next_s=new_state, t=t_to_buffer)) state = new_state ep_reward += reward # if buffer contains more than 1000 samples, perform one training step if replay_buffer.buffer_length > 1000: s, a, r, s_, t = replay_buffer.sample_minibatch(batch_size) # sample a minibatch of transitions q_1, q_2 = model.get_q_values_for_both_models(np.squeeze(s_)) td_target = calculate_td_targets(q_1, q_2, r, t, gamma) model.update(s, td_target, a) eps = max(eps - eps_decay, eps_end) # decrease epsilon R_buffer.append(ep_reward) # running average of episodic rewards R_avg.append(.05 * R_buffer[i] + .95 * R_avg[i-1]) if i > 0 else R_avg.append(R_buffer[i]) print('Episode: ', i, 'Reward:', ep_reward, 'Epsilon', eps, 'mean q', np.mean(np.array(q_buffer))) # if running average > 195, the task is considerd solved if R_avg[-1] > 195: return R_buffer, R_avg return R_buffer, R_avg
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
and the next cell performs the actual training. A working implementation should start to improve after 500 episodes. An episodic reward of around 200 is likely to be achieved after 800 episodes with a batch size of 128, and after 1000 episodes with a batch size of 64.
# Create the environment
env = gym.make("CartPole-v0")

# Initializations
num_actions = env.action_space.n
obs_dim = env.observation_space.shape[0]

# Our neural network model used to estimate the Q-values
model = DoubleQLearningModel(state_dim=obs_dim, action_dim=num_actions, learning_rate=1e-4)

# Create replay buffer, where experience in the form of tuples <s,a,r,s',t>, gathered from the environment,
# is stored for training
replay_buffer = ExperienceReplay(state_size=obs_dim)

# Train
num_episodes = 1200
batch_size = 128
R, R_avg = train_loop_ddqn(model, env, num_episodes, batch_size)

# close window
env.close()
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
According to the code above, and the code in the provided .py file, answer the following questions: What is the state for this problem?**Your answer**: (fill in here)When do we switch the networks (i.e. when does the online network become the fixed one, and vice-versa)?**Your answer**: (fill in here) Run the cell below to visualize your final policy in an episode from this environment.
import time

num_episodes = 1
env = gym.make("CartPole-v0")

for i in range(num_episodes):
    state = env.reset()  # reset to initial state
    state = np.expand_dims(state, axis=0)/2
    terminal = False  # reset terminal flag
    while not terminal:
        env.render()
        time.sleep(.05)
        q_values = model.get_q_values(state)
        policy = eps_greedy_policy(q_values.squeeze(), .1)  # near-greedy policy (eps = 0.1)
        action = np.random.choice(num_actions, p=policy)
        state, reward, terminal, _ = env.step(action)  # take one step in the environment
        state = np.expand_dims(state, axis=0)/2

# close window
env.close();
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Plot the episodic rewards obtained throughout the optimization, together with a moving average of it (since the episodic reward is usually very noisy).
%matplotlib inline import matplotlib.pyplot as plt rewards = plt.plot(R, alpha=.4, label='R') avg_rewards = plt.plot(R_avg,label='avg R') plt.legend(bbox_to_anchor=(1.01, 1), loc=2, borderaxespad=0.) plt.xlabel('Episode') plt.ylim(0, 210) plt.show()
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Advanced Lane Finding ProjectThe goals / steps of this project are the following:* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.* Apply a distortion correction to raw images.* Use color transforms, gradients, etc., to create a thresholded binary image.* Apply a perspective transform to rectify binary image ("birds-eye view").* Detect lane pixels and fit to find the lane boundary.* Determine the curvature of the lane and vehicle position with respect to center.* Warp the detected lane boundaries back onto the original image.* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.--- First, I'll compute the camera calibration using chessboard images
import numpy as np import cv2 import glob import matplotlib.pyplot as plt import matplotlib.image as mpimg # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML %matplotlib qt import cv2 from lib import CameraCalibrate, EstimateWrapParameterWrapper, IMREAD, warp_image, BinarizeImage, fit_polynomial, search_around_poly, cal_undistort, measure_curvature_real ## Initialize # initialize paths calDataPath = 'camera_cal' testDataPath = 'test_images' # initialize parameters mtx, dist = CameraCalibrate(calDataPath) M, Minv = EstimateWrapParameterWrapper(mtx, dist, False, True, 'straight_lines1*.jpg') #M = EstimateWrapParameterWrapper(mtx, dist, True, False) #M = EstimateWrapParameterWrapper(mtx, dist) #print(M) #print(Minv)
FALSE [[-6.15384615e-01 -1.37820513e+00 9.69230769e+02] [ 1.97716240e-16 -1.96794872e+00 8.90769231e+02] [ 0.00000000e+00 -2.40384615e-03 1.00000000e+00]] [[ 1.43118893e-01 -7.85830619e-01 5.61278502e+02] [-2.27373675e-16 -5.08143322e-01 4.52638436e+02] [-2.41886889e-19 -1.22149837e-03 1.00000000e+00]]
MIT
main.ipynb
sharifchowdhury/Advanced-Lane-finding
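`CameraCalibrate` and `EstimateWrapParameterWrapper` live in the author's `lib` module, which is not shown here. For reference, a hypothetical sketch of what a chessboard-based calibration routine typically does with OpenCV (the 9x6 corner grid and the `.jpg` glob pattern are assumptions):

```python
import glob
import cv2
import numpy as np

def calibrate_from_chessboards(folder, nx=9, ny=6):
    # 3D object points of the chessboard corners in the board's own frame (z = 0)
    objp = np.zeros((ny * nx, 3), np.float32)
    objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)
    objpoints, imgpoints = [], []
    img_size = None
    for fname in glob.glob(folder + '/*.jpg'):
        gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
        img_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners)
    # solve for the camera matrix and distortion coefficients
    _, mtx, dist, _, _ = cv2.calibrateCamera(objpoints, imgpoints, img_size, None, None)
    return mtx, dist
```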
With the calibration matrix and perspective transform in place, the processing pipeline below handles each frame: undistort it, warp it to a bird's-eye view, threshold it into a binary lane mask, fit (or update) second-order polynomials for the left and right lane lines, draw the detected lane back onto the original frame, and annotate it with the lane curvature and the vehicle's offset from center.
left_fit = [] right_fit = [] left_fitx_old = [] right_fitx_old = [] ind= 0 cr=[] pt=[] def init(): global left_fit global right_fit global left_fitx_old global right_fitx_old global ind global cr global pt left_fit = [] right_fit = [] left_fitx_old = [] right_fitx_old = [] ind= 0 cr=[] pt=[] def Recast(warped,undist,left_fitx, right_fitx, ploty): #global cr #global pt # Create an image to draw the lines on warp_zero = np.zeros_like(warped).astype(np.uint8) color_warp = np.dstack((warp_zero, warp_zero, warp_zero)) # Recast the x and y points into usable format for cv2.fillPoly() pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))]) pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))]) pts = np.hstack((pts_left, pts_right)) ## DEBUG #print('pts::') #print( color_warp.shape ) #print(pts) #cr =color_warp #pt = pts cv2.fillPoly(color_warp, np.int_(pts), (0,255, 0)) newwarp = cv2.warpPerspective(color_warp, Minv, (color_warp.shape[1], color_warp.shape[0])) result = cv2.addWeighted(undist, 1, newwarp, 0.3, 0) return result def updateFit(old,new,x_old,x,thres= 50): delta = np.mean(np.absolute(x_old-x)) if delta > thres: res = old ret_X = x_old else: res = new ret_X = x return res, ret_X def process_image(frame, viz=False, name='None'): global left_fit global right_fit global left_fitx_old global right_fitx_old global ind print('In process_image2') #plt.imshow(frame) img = cal_undistort(frame, mtx,dist); if viz: cv2.imwrite('rawdata/'+name +'or.tif', cv2.cvtColor(frame,cv2.COLOR_BGR2RGB )) cv2.imwrite('rawdata/'+name +'or_cal.tif', cv2.cvtColor(img,cv2.COLOR_BGR2RGB )) #img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB ) #figure(ind) #plt.imshow(img) img = warp_image(img, M) #cv2.imwrite('rawdata/' + str(ind)+'im.tif', img) binaryImage = BinarizeImage(img,s_thresh=(170, 255), sx_thresh=(20, 150)) #cv2.imwrite('rawdata/' + str(ind)+'msk.tif', mask*255) wrappedBinaryI = binaryImage[:,:,0]+ binaryImage[:,:,1]+binaryImage[:,:,2] wrappedBinaryI[wrappedBinaryI>0]=1 #BI = BI*mask #wrappedBinaryI = BI if viz: cv2.imwrite('rawdata/' + name +'or_cal_wr.tif', cv2.cvtColor(img,cv2.COLOR_BGR2RGB )) cv2.imwrite('rawdata/' + name +'or_cal_wr_bin.tif', wrappedBinaryI*255) if len(left_fit) == 0: outIM, left_fit, right_fit = fit_polynomial(wrappedBinaryI) result, left_fitx, right_fitx, ploty, left_fit_new, right_fit_new, l_fit_2, r_fit_2 = search_around_poly(wrappedBinaryI, left_fit, right_fit) left_fitx_old = left_fitx right_fitx_old = right_fitx else: result, left_fitx, right_fitx, ploty, left_fit_new, right_fit_new, l_fit_2, r_fit_2 = search_around_poly(wrappedBinaryI, left_fit, right_fit) left_fit, left_fitx_old = updateFit(left_fit,left_fit_new,left_fitx_old, left_fitx, thres= 50) right_fit,right_fitx_old = updateFit(right_fit,right_fit_new,right_fitx_old,right_fitx, thres= 50) resFinal = Recast(img[:,:,0],frame,left_fitx_old, right_fitx_old, ploty) lrad, rrad, posdelta = measure_curvature_real(l_fit_2, r_fit_2, ploty, Minv) rad_str = 'Radius of Curvature = '+ str((lrad+rrad)//2 ) + '(m)' if posdelta<0: pos = 'left' else: pos = 'right' pos_str = 'Vehicle is ' + str(np.absolute( ((posdelta*100)//1)/100 ) ) + 'm ' +pos + ' of center' resFinal = cv2.putText(resFinal, rad_str, (100,100), cv2.FONT_HERSHEY_SIMPLEX , 1, (255, 255, 255) , 2, cv2.LINE_AA) resFinal = cv2.putText(resFinal, pos_str, (100,200), cv2.FONT_HERSHEY_SIMPLEX , 1, (255, 255, 255) , 2, cv2.LINE_AA) if viz: cv2.imwrite('rawdata/' + name + 'or_res.tif', cv2.cvtColor(resFinal,cv2.COLOR_BGR2RGB ) ) ind+=1 return resFinal 
from moviepy.editor import VideoFileClip
#white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
#clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
#temp = 'test_videos_output/solidWhiteRight.mp4'
left_fit = []
right_fit = []
#clip1 = VideoFileClip('project_video.mp4')
clip1 = VideoFileClip('project_video.mp4')  #challenge_video
white_output = 'project_video_out.mp4'
white_clip = clip1.fl_image(process_image)  #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)

## PIPELINE STARTS HERE: run the pipeline on a single test image
fname = testDataPath + '/' + 'test1.jpg'
#fname = calDataPath + '/' + 'calibration1.jpg'
frame = cv2.imread(fname)
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
init()
res = process_image(frame, True, 'test1')
plt.imshow(res)
#for i in range(3):
#    plt.figure(i+1)
#    plt.imshow(frame[:,:,i])

# Leftover debugging snippets: cr and pt are only populated if the DEBUG
# globals inside process_image are uncommented, so these lines are not part
# of the main pipeline.
print(np.linalg.inv(M))
a = np.array([(91, 233), (419, 227), (410, 324), (94, 349)], 'int32')
#print(a.checkVector(2, CV_32S))
cv2.fillPoly(cr, a, (0, 255, 0))
a = np.int_(pt)
print(str(((0.111 * 100)//1)/100))
print(cr.shape)
print()
res = np.matmul(M, [550, 480, 1])
print(res/res[2])
print(range(10))
range(0, 10)
MIT
main.ipynb
sharifchowdhury/Advanced-Lane-finding
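`measure_curvature_real`, also from `lib`, is not shown either. A sketch of the standard way to turn a second-order pixel-space fit $x = Ay^2 + By + C$ into a real-world radius of curvature; the 30/720 and 3.7/700 meters-per-pixel scalings are assumptions typical for this kind of project, not values taken from the author's code:

```python
import numpy as np

def radius_of_curvature(fit, y_eval_px, ym_per_pix=30/720, xm_per_pix=3.7/700):
    """Radius of curvature (meters) of x = A*y^2 + B*y + C, evaluated at y_eval_px."""
    A, B, _ = fit
    # rescale the polynomial coefficients from pixel space to meters
    A_m = A * xm_per_pix / (ym_per_pix ** 2)
    B_m = B * xm_per_pix / ym_per_pix
    y_m = y_eval_px * ym_per_pix
    # R = (1 + (2*A*y + B)^2)^(3/2) / |2*A|
    return (1 + (2 * A_m * y_m + B_m) ** 2) ** 1.5 / np.abs(2 * A_m)
```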
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Training an Object Detection model using AutoMLIn this notebook, we go over how you can use AutoML for training an Object Detection model. We will use a small dataset to train the model, demonstrate how you can tune hyperparameters of the model to optimize model performance and deploy the model to use in inference scenarios. For detailed information please refer to the [documentation of AutoML for Images](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models). ![img](example_object_detection_predictions.jpg) **Important:** This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/). Environment SetupPlease follow the ["Setup a new conda environment"](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml3-setup-a-new-conda-environment) instructions to get started. Workspace setupIn order to train and deploy models in Azure ML, you will first need to set up a workspace.An [Azure ML Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architectureworkspace) is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inference, and the monitoring of deployed models.Create an Azure ML Workspace within your Azure subscription or load an existing workspace.
from azureml.core.workspace import Workspace ws = Workspace.from_config()
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Compute target setupYou will need to provide a [Compute Target](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecturecomputes) that will be used for your AutoML model training. AutoML models for image tasks require [GPU SKUs](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu) such as the ones from the NC, NCv2, NCv3, ND, NDv2 and NCasT4 series. We recommend using the NCsv3-series (with v100 GPUs) for faster training. Using a compute target with a multi-GPU VM SKU will leverage the multiple GPUs to speed up training. Additionally, setting up a compute target with multiple nodes will allow for faster model training by leveraging parallelism, when tuning hyperparameters for your model.
from azureml.core.compute import AmlCompute, ComputeTarget cluster_name = "gpu-cluster-nc6" try: compute_target = ws.compute_targets[cluster_name] print("Found existing compute target.") except KeyError: print("Creating a new compute target...") compute_config = AmlCompute.provisioning_configuration( vm_size="Standard_NC6", idle_seconds_before_scaledown=600, min_nodes=0, max_nodes=4, ) compute_target = ComputeTarget.create(ws, cluster_name, compute_config) # Can poll for a minimum number of nodes and for a specific timeout. # If no min_node_count is provided, it will use the scale settings for the cluster. compute_target.wait_for_completion( show_output=True, min_node_count=None, timeout_in_minutes=20 )
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Experiment SetupCreate an [Experiment](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architectureexperiments) in your workspace to track your model training runs
from azureml.core import Experiment experiment_name = "automl-image-object-detection" experiment = Experiment(ws, name=experiment_name)
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Dataset with input Training DataIn order to generate models for computer vision, you will need to bring in labeled image data as input for model training in the form of an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset). You can either use a dataset that you have exported from a [Data Labeling](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-label-data) project, or create a new Tabular Dataset with your labeled training data. In this notebook, we use a toy dataset called Fridge Objects, which consists of 128 images of 4 classes of beverage container {can, carton, milk bottle, water bottle} photos taken on different backgrounds.All images in this notebook are hosted in [this repository](https://github.com/microsoft/computervision-recipes) and are made available under the [MIT license](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).We first download and unzip the data locally.
import os import urllib from zipfile import ZipFile # download data download_url = "https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip" data_file = "./odFridgeObjects.zip" urllib.request.urlretrieve(download_url, filename=data_file) # extract files with ZipFile(data_file, "r") as zip: print("extracting files...") zip.extractall() print("done") # delete zip file os.remove(data_file)
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
This is a sample image from this dataset:
from IPython.display import Image Image(filename="./odFridgeObjects/images/31.jpg")
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Convert the downloaded data to JSONLIn this example, the fridge object dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data to create an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset), we first need to convert it to the required JSONL format. Please refer to the [documentation on how to prepare datasets](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-prepare-datasets-for-automl-images).The following script creates two .jsonl files (one for training and one for validation) in the parent folder of the dataset. The train/validation split sends 20% of the data to the validation file.
import json import os import xml.etree.ElementTree as ET src = "./odFridgeObjects/" train_validation_ratio = 5 # Retrieving default datastore that got automatically created when we setup a workspace workspaceblobstore = ws.get_default_datastore().name # Path to the annotations annotations_folder = os.path.join(src, "annotations") # Path to the training and validation files train_annotations_file = os.path.join(src, "train_annotations.jsonl") validation_annotations_file = os.path.join(src, "validation_annotations.jsonl") # sample json line dictionary json_line_sample = { "image_url": "AmlDatastore://" + workspaceblobstore + "/" + os.path.basename(os.path.dirname(src)) + "/" + "images", "image_details": {"format": None, "width": None, "height": None}, "label": [], } # Read each annotation and convert it to jsonl line with open(train_annotations_file, "w") as train_f: with open(validation_annotations_file, "w") as validation_f: for i, filename in enumerate(os.listdir(annotations_folder)): if filename.endswith(".xml"): print("Parsing " + os.path.join(src, filename)) root = ET.parse(os.path.join(annotations_folder, filename)).getroot() width = int(root.find("size/width").text) height = int(root.find("size/height").text) labels = [] for object in root.findall("object"): name = object.find("name").text xmin = object.find("bndbox/xmin").text ymin = object.find("bndbox/ymin").text xmax = object.find("bndbox/xmax").text ymax = object.find("bndbox/ymax").text isCrowd = int(object.find("difficult").text) labels.append( { "label": name, "topX": float(xmin) / width, "topY": float(ymin) / height, "bottomX": float(xmax) / width, "bottomY": float(ymax) / height, "isCrowd": isCrowd, } ) # build the jsonl file image_filename = root.find("filename").text _, file_extension = os.path.splitext(image_filename) json_line = dict(json_line_sample) json_line["image_url"] = json_line["image_url"] + "/" + image_filename json_line["image_details"]["format"] = file_extension[1:] json_line["image_details"]["width"] = width json_line["image_details"]["height"] = height json_line["label"] = labels if i % train_validation_ratio == 0: # validation annotation validation_f.write(json.dumps(json_line) + "\n") else: # train annotation train_f.write(json.dumps(json_line) + "\n") else: print("Skipping unknown file: {}".format(filename))
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Convert annotation file from COCO to JSONLIf you want to try a dataset in COCO format, the script below shows how to convert it to `jsonl` format. The file "odFridgeObjects_coco.json" contains the annotation information for the `odFridgeObjects` dataset.
# Generate jsonl file from coco file !python coco2jsonl.py \ --input_coco_file_path "./odFridgeObjects_coco.json" \ --output_dir "./odFridgeObjects" --output_file_name "odFridgeObjects_from_coco.jsonl" \ --task_type "ObjectDetection" \ --base_url "AmlDatastore://workspaceblobstore/odFridgeObjects/images/"
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Visualize bounding boxesPlease refer to the "Visualize data" section in the following [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-auto-train-image-modelsvisualize-data) to see how to easily visualize your ground truth bounding boxes before starting to train. Upload the JSONL file and images to DatastoreIn order to use the data for training in Azure ML, we upload it to our Azure ML Workspace via a [Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecturedatasets-and-datastores). The datastore provides a mechanism for you to upload/download data and interact with it from your remote compute targets. It is an abstraction over Azure Storage.
# Retrieving default datastore that got automatically created when we setup a workspace ds = ws.get_default_datastore() ds.upload(src_dir="./odFridgeObjects", target_path="odFridgeObjects")
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Finally, we need to create an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset) from the data we uploaded to the Datastore. We create one dataset for training and one for validation.
from azureml.core import Dataset from azureml.data import DataType # get existing training dataset training_dataset_name = "odFridgeObjectsTrainingDataset" if training_dataset_name in ws.datasets: training_dataset = ws.datasets.get(training_dataset_name) print("Found the training dataset", training_dataset_name) else: # create training dataset training_dataset = Dataset.Tabular.from_json_lines_files( path=ds.path("odFridgeObjects/train_annotations.jsonl"), set_column_types={"image_url": DataType.to_stream(ds.workspace)}, ) training_dataset = training_dataset.register( workspace=ws, name=training_dataset_name ) # get existing validation dataset validation_dataset_name = "odFridgeObjectsValidationDataset" if validation_dataset_name in ws.datasets: validation_dataset = ws.datasets.get(validation_dataset_name) print("Found the validation dataset", validation_dataset_name) else: # create validation dataset validation_dataset = Dataset.Tabular.from_json_lines_files( path=ds.path("odFridgeObjects/validation_annotations.jsonl"), set_column_types={"image_url": DataType.to_stream(ds.workspace)}, ) validation_dataset = validation_dataset.register( workspace=ws, name=validation_dataset_name ) print("Training dataset name: " + training_dataset.name) print("Validation dataset name: " + validation_dataset.name)
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Validation dataset is optional. If no validation dataset is specified, by default 20% of your training data will be used for validation. You can control the percentage using the `split_ratio` argument - please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-modelsmodel-agnostic-hyperparameters) for more details.This is what the training dataset looks like:
training_dataset.to_pandas_dataframe()
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Configuring your AutoML run for image tasksAutoML allows you to easily train models for Image Classification, Object Detection & Instance Segmentation on your image data. You can control the model algorithm to be used, specify hyperparameter values for your model as well as perform a sweep across the hyperparameter space to generate an optimal model. Parameters for configuring your AutoML Image run are specified using the `AutoMLImageConfig` - please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-modelsconfigure-your-experiment-settings) for the details on the parameters that can be used and their values. When using AutoML for image tasks, you need to specify the model algorithms using the `model_name` parameter. You can either specify a single model or choose to sweep over multiple models. Please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-modelsconfigure-model-algorithms-and-hyperparameters) for the list of supported model algorithms. Using default hyperparameter values for the specified algorithmBefore doing a large sweep to search for the optimal models and hyperparameters, we recommend trying the default values for a given model to get a first baseline. Next, you can explore multiple hyperparameters for the same model before sweeping over multiple models and their parameters. This allows an iterative approach, as with multiple models and multiple hyperparameters for each (as we showcase in the next section), the search space grows exponentially, and you need more iterations to find optimal configurations.If you wish to use the default hyperparameter values for a given algorithm (say `yolov5`), you can specify the config for your AutoML Image runs as follows:
from azureml.automl.core.shared.constants import ImageTask from azureml.train.automl import AutoMLImageConfig from azureml.train.hyperdrive import GridParameterSampling, choice image_config_yolov5 = AutoMLImageConfig( task=ImageTask.IMAGE_OBJECT_DETECTION, compute_target=compute_target, training_data=training_dataset, validation_data=validation_dataset, hyperparameter_sampling=GridParameterSampling({"model_name": choice("yolov5")}), iterations=1, )
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Submitting an AutoML run for Computer Vision tasksOnce you've created the config settings for your run, you can submit an AutoML run using the config in order to train a vision model using your training dataset.
automl_image_run = experiment.submit(image_config_yolov5) automl_image_run.wait_for_completion(wait_post_processing=True)
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Hyperparameter sweeping for your AutoML models for computer vision tasksIn this example, we use the AutoMLImageConfig to train an Object Detection model using `yolov5` and `fasterrcnn_resnet50_fpn`, both of which are pretrained on COCO, a large-scale object detection, segmentation, and captioning dataset that contains over 200K labeled images with over 80 label categories.When using AutoML for Images, you can perform a hyperparameter sweep over a defined parameter space to find the optimal model. In this example, we sweep over the hyperparameters for each algorithm, choosing from a range of values for `learning_rate`, `optimizer`, `lr_scheduler`, etc., to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for the specified algorithm.We use Random Sampling to pick samples from this parameter space and try a total of 10 iterations with these different samples, running 2 iterations at a time on our compute target, which has been previously set up using 4 nodes. Please note that the more parameters the space has, the more iterations you need to find optimal models.We leverage the Bandit early termination policy which will terminate poor performing configs (those that are not within 20% slack of the best performing config), thus significantly saving compute resources.For more details on model and hyperparameter sweeping, please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters).
from azureml.automl.core.shared.constants import ImageTask from azureml.train.automl import AutoMLImageConfig from azureml.train.hyperdrive import BanditPolicy, RandomParameterSampling from azureml.train.hyperdrive import choice, uniform parameter_space = { "model": choice( { "model_name": choice("yolov5"), "learning_rate": uniform(0.0001, 0.01), "model_size": choice("small", "medium"), # model-specific #'img_size': choice(640, 704, 768), # model-specific; might need GPU with large memory }, { "model_name": choice("fasterrcnn_resnet50_fpn"), "learning_rate": uniform(0.0001, 0.001), "optimizer": choice("sgd", "adam", "adamw"), "min_size": choice(600, 800), # model-specific #'warmup_cosine_lr_warmup_epochs': choice(0, 3), }, ), } tuning_settings = { "iterations": 10, "max_concurrent_iterations": 2, "hyperparameter_sampling": RandomParameterSampling(parameter_space), "early_termination_policy": BanditPolicy( evaluation_interval=2, slack_factor=0.2, delay_evaluation=6 ), } automl_image_config = AutoMLImageConfig( task=ImageTask.IMAGE_OBJECT_DETECTION, compute_target=compute_target, training_data=training_dataset, validation_data=validation_dataset, **tuning_settings, ) automl_image_run = experiment.submit(automl_image_config) automl_image_run.wait_for_completion(wait_post_processing=True)
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
When doing a hyperparameter sweep, it can be useful to visualize the different configurations that were tried using the HyperDrive UI. You can navigate to this UI by going to the 'Child runs' tab in the UI of the main `automl_image_run` from above, which is the HyperDrive parent run, and then into the 'Child runs' tab of that parent run. Alternatively, you can retrieve the HyperDrive parent run directly, as shown below, and navigate to its 'Child runs' tab:
from azureml.core import Run hyperdrive_run = Run(experiment=experiment, run_id=automl_image_run.id + "_HD") hyperdrive_run
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Register the optimal vision model from the AutoML runOnce the run completes, we can register the model that was created from the best run (configuration that resulted in the best primary metric)
# Register the model from the best run best_child_run = automl_image_run.get_best_child() model_name = best_child_run.properties["model_name"] model = best_child_run.register_model( model_name=model_name, model_path="outputs/model.pt" )
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Deploy model as a web serviceOnce you have your trained model, you can deploy the model on Azure. You can deploy your trained model as a web service on Azure Container Instances ([ACI](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-container-instance)) or Azure Kubernetes Service ([AKS](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-kubernetes-service)). Please note that ACI only supports small models under 1 GB in size. For testing larger models or for the high-scale production stage, we recommend using AKS.In this tutorial, we will deploy the model as a web service in AKS. You will first need to create an AKS compute cluster or use an existing AKS cluster. You can use either GPU or CPU VM SKUs for your deployment cluster.
from azureml.core.compute import ComputeTarget, AksCompute from azureml.exceptions import ComputeTargetException # Choose a name for your cluster aks_name = "cluster-aks-cpu" # Check to see if the cluster already exists try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print("Found existing compute target") except ComputeTargetException: print("Creating a new compute target...") # Provision AKS cluster with a CPU machine prov_config = AksCompute.provisioning_configuration(vm_size="STANDARD_D3_V2") # Create the cluster aks_target = ComputeTarget.create( workspace=ws, name=aks_name, provisioning_configuration=prov_config ) aks_target.wait_for_completion(show_output=True)
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Next, you will need to define the [inference configuration](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-modelsupdate-inference-configuration), that describes how to set up the web-service containing your model. You can use the scoring script and the environment from the training run in your inference config.Note: To change the model's settings, open the downloaded scoring script and modify the model_settings variable before deploying the model.
from azureml.core.model import InferenceConfig best_child_run.download_file( "outputs/scoring_file_v_1_0_0.py", output_file_path="score.py" ) environment = best_child_run.get_environment() inference_config = InferenceConfig(entry_script="score.py", environment=environment)
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
You can then deploy the model as an AKS web service.
# Deploy the model from the best run as an AKS web service from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration( autoscale_enabled=True, cpu_cores=1, memory_gb=5, enable_app_insights=True ) aks_service = Model.deploy( ws, models=[model], inference_config=inference_config, deployment_config=aks_config, deployment_target=aks_target, name="automl-image-test-od", overwrite=True, ) aks_service.wait_for_deployment(show_output=True) print(aks_service.state)
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Test the web serviceFinally, let's test our deployed web service to predict new images. You can pass in any image. In this case, we'll use a random image from the dataset and pass it to the scoring URI.
import requests # URL for the web service scoring_uri = aks_service.scoring_uri # If the service is authenticated, set the key or token key, _ = aks_service.get_keys() sample_image = "./test_image.jpg" # Load image data data = open(sample_image, "rb").read() # Set the content type headers = {"Content-Type": "application/octet-stream"} # If authentication is enabled, set the authorization header headers["Authorization"] = f"Bearer {key}" # Make the request and display the response resp = requests.post(scoring_uri, data, headers=headers) print(resp.text)
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Visualize detectionsNow that we have scored a test image, we can visualize the bounding boxes for this image.
%matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg import matplotlib.patches as patches from PIL import Image import numpy as np import json IMAGE_SIZE = (18, 12) plt.figure(figsize=IMAGE_SIZE) img_np = mpimg.imread(sample_image) img = Image.fromarray(img_np.astype("uint8"), "RGB") x, y = img.size fig, ax = plt.subplots(1, figsize=(15, 15)) # Display the image ax.imshow(img_np) # draw box and label for each detection detections = json.loads(resp.text) for detect in detections["boxes"]: label = detect["label"] box = detect["box"] conf_score = detect["score"] if conf_score > 0.6: ymin, xmin, ymax, xmax = ( box["topY"], box["topX"], box["bottomY"], box["bottomX"], ) topleft_x, topleft_y = x * xmin, y * ymin width, height = x * (xmax - xmin), y * (ymax - ymin) print( "{}: [{}, {}, {}, {}], {}".format( detect["label"], round(topleft_x, 3), round(topleft_y, 3), round(width, 3), round(height, 3), round(conf_score, 3), ) ) color = np.random.rand(3) #'red' rect = patches.Rectangle( (topleft_x, topleft_y), width, height, linewidth=3, edgecolor=color, facecolor="none", ) ax.add_patch(rect) plt.text(topleft_x, topleft_y - 10, label, color=color, fontsize=20) plt.show()
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
__author__="CANSEL KUNDUKAN" print("ADAM ASMACA OYUNUNA HOŞGELDİNİZ...") print("ip ucu=Oyunumuz da ülke isimlerini bulmaya çalışıyoruz") from random import choice while True: kelime = choice (["ispanya", "almanya","japonya","ingiltere","brezilya","mısır","macaristan","hindistan"]) kelime = kelime.upper() harfsayisi = len(kelime) print("Kelimemiz {} harflidir.\n".format(harfsayisi)) tahminler = [] hata = [] KalanCan = 3 while KalanCan > 0: bos = "" for girilenharf in kelime: if girilenharf in tahminler: bos = bos + girilenharf else: bos = bos + " _ " if bos == kelime: print("Tebrikler!") break print("Kelimeyi Tahmin Ediniz", bos) print(KalanCan, "Canınız Kaldı") Tahmin = input("Bir Harf Giriniz :") Tahmin = Tahmin.upper() if Tahmin == kelime: print("\n\n Tebrikler\n\n") break elif Tahmin in kelime: rpt = kelime.count(Tahmin) print("Dogru.{0} Harfi Kelimemiz İçerisinde {1} Kere Geçiyor".format(Tahmin, rpt)) tahminler.append(Tahmin) else: print("Yanlış.") hata.append(Tahmin) KalanCan = KalanCan - 1 if KalanCan == 0: print("\n\nHiç Hakkınız Kalmadı.") print("Kelimemiz {}\n\n".format(kelime)) print("Oyundan Çıkmak İstiyorsanız\n'X' Tuşuna Basınız\nDevam Etmek İçin -> ENTER. ") devam = input(":") devam = devam.upper() if devam == "X": break else: continue
ADAM ASMACA OYUNUNA HOŞGELDİNİZ... ip ucu=Oyunumuz da ülke isimlerini bulmaya çalışıyoruz Kelimemiz 9 harflidir. Kelimeyi Tahmin Ediniz _ _ _ _ _ _ _ _ _ 3 Canınız Kaldı Bir Harf Giriniz :b Yanlış. Kelimeyi Tahmin Ediniz _ _ _ _ _ _ _ _ _ 2 Canınız Kaldı Bir Harf Giriniz :a Dogru.A Harfi Kelimemiz İçerisinde 1 Kere Geçiyor Kelimeyi Tahmin Ediniz _ _ _ _ _ _ _ A _ 2 Canınız Kaldı Bir Harf Giriniz :i Dogru.I Harfi Kelimemiz İçerisinde 2 Kere Geçiyor Kelimeyi Tahmin Ediniz _ I _ _ I _ _ A _ 2 Canınız Kaldı Bir Harf Giriniz :m Yanlış. Kelimeyi Tahmin Ediniz _ I _ _ I _ _ A _ 1 Canınız Kaldı Bir Harf Giriniz :b Yanlış. Hiç Hakkınız Kalmadı. Kelimemiz HINDISTAN Oyundan Çıkmak İstiyorsanız 'X' Tuşuna Basınız Devam Etmek İçin -> ENTER.
MIT
adam_asmaca.ipynb
canselkundukan/bby162
Truncated regression: minimum working example
import numpy as np %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import pymc3 as pm import arviz as az def pp_plot(x, y, trace): fig, ax = plt.subplots() # plot data ax.scatter(x, y) # plot posterior predicted... samples from posterior xi = np.array([np.min(x), np.max(x)]) n_samples=100 for n in range(n_samples): y_ppc = xi * trace["m"][n] + trace["c"][n] ax.plot(xi, y_ppc, "k", alpha=0.1, rasterized=True) # plot true ax.plot(xi, m * xi + c, "r", lw=3, label="True") # plot bounds ax.axhline(bounds[0], c='r', ls='--') ax.axhline(bounds[1], c='r', ls='--') def truncate_y(x, y, bounds): keep = (y >= bounds[0]) & (y <= bounds[1]) return (x[keep], y[keep]) m, c, σ, N = 1, 0, 2, 200 x = np.random.uniform(-10, 10, N) y = np.random.normal(m * x + c, σ) bounds = [-5, 5] xt, yt = truncate_y(x, y, bounds) plt.scatter(xt, yt)
_____no_output_____
MIT
truncated_regression_MWE.ipynb
drbenvincent/pymc3-demo-code
Linear regression of truncated data underestimates the slope
def linear_regression(x, y): with pm.Model() as model: m = pm.Normal("m", mu=0, sd=1) c = pm.Normal("c", mu=0, sd=1) σ = pm.HalfNormal("σ", sd=1) y_likelihood = pm.Normal("y_likelihood", mu=m*x+c, sd=σ, observed=y) with model: trace = pm.sample() return model, trace # run the model on the truncated data (xt, yt) linear_model, linear_trace = linear_regression(xt, yt) az.plot_posterior(linear_trace, var_names=['m'], ref_val=m) pp_plot(xt, yt, linear_trace)
_____no_output_____
MIT
truncated_regression_MWE.ipynb
drbenvincent/pymc3-demo-code
Truncated regression avoids this underestimate
def truncated_regression(x, y, bounds): with pm.Model() as model: m = pm.Normal("m", mu=0, sd=1) c = pm.Normal("c", mu=0, sd=1) σ = pm.HalfNormal("σ", sd=1) y_likelihood = pm.TruncatedNormal( "y_likelihood", mu=m * x + c, sd=σ, observed=y, lower=bounds[0], upper=bounds[1], ) with model: trace = pm.sample() return model, trace # run the model on the truncated data (xt, yt) truncated_model, truncated_trace = truncated_regression(xt, yt, bounds) az.plot_posterior(truncated_trace, var_names=['m'], ref_val=m) pp_plot(xt, yt, truncated_trace) %load_ext watermark %watermark -n -u -v -iv -w
Last updated: Sun Jan 24 2021 Python implementation: CPython Python version : 3.8.5 IPython version : 7.19.0 arviz : 0.11.0 pymc3 : 3.10.0 numpy : 1.19.2 matplotlib: 3.3.2 Watermark: 2.1.0
MIT
truncated_regression_MWE.ipynb
drbenvincent/pymc3-demo-code
Get the MNIST dataset
batch_size = 128 num_classes = 10 epochs = 100 # input image dimensions img_rows, img_cols = 28, 28 # the data, shuffled and split between train and test sets (x_train, y_train), (x_test, y_test) = mnist.load_data() if K.image_data_format() == 'channels_first': x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols) x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols) input_shape = (1, img_rows, img_cols) else: x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1) x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1) input_shape = (img_rows, img_cols, 1) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 print('x_train shape:', x_train.shape) print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') # convert class vectors to binary class matrices y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) model = Sequential() model.add(Conv2D(6, (5, 5), activation='relu', input_shape = input_shape)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(16, (5, 5), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(120, activation='relu')) model.add(Dense(84, activation='relu')) model.add(Dense(num_classes, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
x_train shape: (60000, 28, 28, 1) 60000 train samples 10000 test samples
MIT
LeNet-5.ipynb
maxmax1992/Deep_Learning
Visualize the model
from IPython.display import SVG from keras.utils.vis_utils import plot_model plot_model(model, show_shapes=True, show_layer_names=True)
_____no_output_____
MIT
LeNet-5.ipynb
maxmax1992/Deep_Learning
![title](./model.png) Train the model
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1])
Train on 60000 samples, validate on 10000 samples Epoch 1/100 60000/60000 [==============================] - 2s - loss: 0.3232 - acc: 0.9029 - val_loss: 0.1030 - val_acc: 0.9701 Epoch 2/100 60000/60000 [==============================] - 1s - loss: 0.0855 - acc: 0.9744 - val_loss: 0.0740 - val_acc: 0.9774 Epoch 3/100 60000/60000 [==============================] - 2s - loss: 0.0620 - acc: 0.9802 - val_loss: 0.0505 - val_acc: 0.9835 Epoch 4/100 60000/60000 [==============================] - 2s - loss: 0.0477 - acc: 0.9847 - val_loss: 0.0426 - val_acc: 0.9853 Epoch 5/100 60000/60000 [==============================] - 1s - loss: 0.0397 - acc: 0.9878 - val_loss: 0.0396 - val_acc: 0.9864 Epoch 6/100 60000/60000 [==============================] - 2s - loss: 0.0362 - acc: 0.9884 - val_loss: 0.0385 - val_acc: 0.9876 Epoch 7/100 60000/60000 [==============================] - 2s - loss: 0.0284 - acc: 0.9909 - val_loss: 0.0376 - val_acc: 0.9879 Epoch 8/100 60000/60000 [==============================] - 2s - loss: 0.0269 - acc: 0.9912 - val_loss: 0.0330 - val_acc: 0.9894 Epoch 9/100 60000/60000 [==============================] - 2s - loss: 0.0240 - acc: 0.9921 - val_loss: 0.0315 - val_acc: 0.9900 Epoch 10/100 60000/60000 [==============================] - 2s - loss: 0.0197 - acc: 0.9935 - val_loss: 0.0352 - val_acc: 0.9883 Epoch 11/100 60000/60000 [==============================] - 2s - loss: 0.0174 - acc: 0.9941 - val_loss: 0.0337 - val_acc: 0.9895 Epoch 12/100 60000/60000 [==============================] - 2s - loss: 0.0159 - acc: 0.9947 - val_loss: 0.0352 - val_acc: 0.9894 Epoch 13/100 60000/60000 [==============================] - 2s - loss: 0.0139 - acc: 0.9953 - val_loss: 0.0368 - val_acc: 0.9896 Epoch 14/100 60000/60000 [==============================] - 1s - loss: 0.0140 - acc: 0.9954 - val_loss: 0.0314 - val_acc: 0.9909 Epoch 15/100 60000/60000 [==============================] - 2s - loss: 0.0117 - acc: 0.9961 - val_loss: 0.0393 - val_acc: 0.9881 Epoch 16/100 60000/60000 [==============================] - 2s - loss: 0.0108 - acc: 0.9963 - val_loss: 0.0395 - val_acc: 0.9894 Epoch 17/100 60000/60000 [==============================] - 2s - loss: 0.0098 - acc: 0.9965 - val_loss: 0.0418 - val_acc: 0.9897 Epoch 18/100 60000/60000 [==============================] - 2s - loss: 0.0105 - acc: 0.9965 - val_loss: 0.0430 - val_acc: 0.9881 Epoch 19/100 60000/60000 [==============================] - 1s - loss: 0.0076 - acc: 0.9974 - val_loss: 0.0401 - val_acc: 0.9897 Epoch 20/100 60000/60000 [==============================] - 1s - loss: 0.0071 - acc: 0.9975 - val_loss: 0.0427 - val_acc: 0.9890 Epoch 21/100 60000/60000 [==============================] - 1s - loss: 0.0088 - acc: 0.9972 - val_loss: 0.0362 - val_acc: 0.9904 Epoch 22/100 60000/60000 [==============================] - 1s - loss: 0.0073 - acc: 0.9977 - val_loss: 0.0449 - val_acc: 0.9886 Epoch 23/100 60000/60000 [==============================] - 1s - loss: 0.0082 - acc: 0.9972 - val_loss: 0.0437 - val_acc: 0.9891 Epoch 24/100 60000/60000 [==============================] - 1s - loss: 0.0049 - acc: 0.9983 - val_loss: 0.0361 - val_acc: 0.9908 Epoch 25/100 60000/60000 [==============================] - 1s - loss: 0.0050 - acc: 0.9982 - val_loss: 0.0376 - val_acc: 0.9905 Epoch 26/100 60000/60000 [==============================] - 2s - loss: 0.0090 - acc: 0.9969 - val_loss: 0.0546 - val_acc: 0.9871 Epoch 27/100 60000/60000 [==============================] - 2s - loss: 0.0047 - acc: 0.9983 - val_loss: 0.0450 - val_acc: 0.9904 Epoch 28/100 60000/60000 
[==============================] - 1s - loss: 0.0055 - acc: 0.9980 - val_loss: 0.0429 - val_acc: 0.9886 Epoch 29/100 60000/60000 [==============================] - 1s - loss: 0.0039 - acc: 0.9989 - val_loss: 0.0528 - val_acc: 0.9877 Epoch 30/100 60000/60000 [==============================] - 2s - loss: 0.0056 - acc: 0.9980 - val_loss: 0.0477 - val_acc: 0.9891 Epoch 31/100 60000/60000 [==============================] - 1s - loss: 0.0044 - acc: 0.9984 - val_loss: 0.0498 - val_acc: 0.9888 Epoch 32/100 60000/60000 [==============================] - 1s - loss: 0.0044 - acc: 0.9985 - val_loss: 0.0501 - val_acc: 0.9897 Epoch 33/100 60000/60000 [==============================] - 1s - loss: 0.0043 - acc: 0.9984 - val_loss: 0.0493 - val_acc: 0.9895 Epoch 34/100 60000/60000 [==============================] - 1s - loss: 0.0029 - acc: 0.9991 - val_loss: 0.0530 - val_acc: 0.9896 Epoch 35/100 60000/60000 [==============================] - 1s - loss: 0.0053 - acc: 0.9984 - val_loss: 0.0445 - val_acc: 0.9908 Epoch 36/100 60000/60000 [==============================] - 1s - loss: 0.0054 - acc: 0.9983 - val_loss: 0.0502 - val_acc: 0.9902 Epoch 37/100 60000/60000 [==============================] - 1s - loss: 0.0049 - acc: 0.9984 - val_loss: 0.0449 - val_acc: 0.9907 Epoch 38/100 60000/60000 [==============================] - 1s - loss: 0.0048 - acc: 0.9986 - val_loss: 0.0483 - val_acc: 0.9900 Epoch 39/100 60000/60000 [==============================] - 1s - loss: 0.0021 - acc: 0.9994 - val_loss: 0.0576 - val_acc: 0.9892 Epoch 40/100 60000/60000 [==============================] - 2s - loss: 0.0025 - acc: 0.9992 - val_loss: 0.0535 - val_acc: 0.9900 Epoch 41/100 60000/60000 [==============================] - 1s - loss: 0.0060 - acc: 0.9982 - val_loss: 0.0673 - val_acc: 0.9869 Epoch 42/100 60000/60000 [==============================] - 2s - loss: 0.0040 - acc: 0.9987 - val_loss: 0.0417 - val_acc: 0.9912 Epoch 43/100 60000/60000 [==============================] - 1s - loss: 0.0026 - acc: 0.9991 - val_loss: 0.0498 - val_acc: 0.9902 Epoch 44/100 60000/60000 [==============================] - 2s - loss: 0.0022 - acc: 0.9993 - val_loss: 0.0545 - val_acc: 0.9899 Epoch 45/100 60000/60000 [==============================] - 2s - loss: 0.0057 - acc: 0.9982 - val_loss: 0.0477 - val_acc: 0.9906 Epoch 46/100 60000/60000 [==============================] - 2s - loss: 0.0023 - acc: 0.9991 - val_loss: 0.0565 - val_acc: 0.9900 Epoch 47/100 60000/60000 [==============================] - 2s - loss: 0.0039 - acc: 0.9987 - val_loss: 0.0538 - val_acc: 0.9907 Epoch 48/100 60000/60000 [==============================] - 1s - loss: 0.0012 - acc: 0.9996 - val_loss: 0.0528 - val_acc: 0.9901 Epoch 49/100 60000/60000 [==============================] - 1s - loss: 0.0066 - acc: 0.9981 - val_loss: 0.0478 - val_acc: 0.9909 Epoch 50/100 60000/60000 [==============================] - 1s - loss: 0.0011 - acc: 0.9996 - val_loss: 0.0493 - val_acc: 0.9913 Epoch 51/100 60000/60000 [==============================] - 2s - loss: 0.0011 - acc: 0.9997 - val_loss: 0.0486 - val_acc: 0.9907 Epoch 52/100 60000/60000 [==============================] - 2s - loss: 0.0061 - acc: 0.9981 - val_loss: 0.0626 - val_acc: 0.9892 Epoch 53/100 60000/60000 [==============================] - 1s - loss: 0.0043 - acc: 0.9988 - val_loss: 0.0609 - val_acc: 0.9886 Epoch 54/100 60000/60000 [==============================] - 2s - loss: 0.0024 - acc: 0.9992 - val_loss: 0.0521 - val_acc: 0.9908 Epoch 55/100 60000/60000 [==============================] - 2s - loss: 0.0020 - acc: 0.9994 - 
val_loss: 0.0532 - val_acc: 0.9915 Epoch 56/100 60000/60000 [==============================] - 2s - loss: 0.0025 - acc: 0.9993 - val_loss: 0.0577 - val_acc: 0.9893 Epoch 57/100 60000/60000 [==============================] - 2s - loss: 0.0047 - acc: 0.9985 - val_loss: 0.0550 - val_acc: 0.9896 Epoch 58/100 60000/60000 [==============================] - 1s - loss: 0.0026 - acc: 0.9993 - val_loss: 0.0436 - val_acc: 0.9912 Epoch 59/100 60000/60000 [==============================] - 2s - loss: 5.6958e-04 - acc: 0.9998 - val_loss: 0.0433 - val_acc: 0.9922 Epoch 60/100 60000/60000 [==============================] - 2s - loss: 4.2636e-04 - acc: 0.9999 - val_loss: 0.0440 - val_acc: 0.9922 Epoch 61/100 60000/60000 [==============================] - 1s - loss: 4.6596e-05 - acc: 1.0000 - val_loss: 0.0429 - val_acc: 0.9933 Epoch 62/100 60000/60000 [==============================] - 1s - loss: 1.4470e-05 - acc: 1.0000 - val_loss: 0.0430 - val_acc: 0.9934 Epoch 63/100 60000/60000 [==============================] - 1s - loss: 1.0095e-05 - acc: 1.0000 - val_loss: 0.0432 - val_acc: 0.9933 Epoch 64/100
MIT
LeNet-5.ipynb
maxmax1992/Deep_Learning
Load PPI and Targets
PPI = nx.read_gml('../data/CheckBestTargetSet/Human_Interactome.gml')
_____no_output_____
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
Load all the different drug targets from the various sources
#Dictionary with the CLOUD : targets targets_DrugBank = {} targets_DrugBank_Filtered = {} targets_Pubchem = {} targets_Pubchem_Filtered = {} targets_Chembl = {} targets_Chembl_Filtered = {} targets_All_Filtered = {} targets_All = {} #Get all extracted targets (with the DrugBank target split) targets_only = set() fp = open('../data/CheckBestTargetSet/TargetSets/CLOUD_to_TargetsSplit.csv') fp.next() for line in fp: tmp = line.strip().split(',') targets_All_Filtered[tmp[0]] = [x for x in tmp[1].split(';') if x != ''] targets_only.update([x for x in tmp[1].split(';') if x != '']) targets_All[tmp[0]] = [x for x in tmp[1].split(';') if x != ''] targets_All[tmp[0]].extend([x for x in tmp[2].split(';') if x != '']) targets_All[tmp[0]].extend([x for x in tmp[3].split(';') if x != '']) targets_All[tmp[0]].extend([x for x in tmp[4].split(';') if x != '']) fp.close() # # DRUGBANK # fp = open('../data/CheckBestTargetSet/TargetSets/CLOUD_DrugBank_Targets.csv') fp.next() for line in fp: tmp = line.strip().split(',') targets_DrugBank[tmp[0]] = [x for x in tmp[2].split(';') if x != ''] targets_DrugBank_Filtered[tmp[0]] = [x for x in tmp[2].split(';') if x != '' and x in targets_All_Filtered[tmp[0]]] fp.close() # # PUBCHEM # fp = open('../data/CheckBestTargetSet/TargetSets/CLOUD_PubChem_Targets.csv') fp.next() for line in fp: tmp = line.strip().split(',') targets_Pubchem[tmp[0]] = [x for x in tmp[2].split(';') if x != ''] targets_Pubchem_Filtered[tmp[0]] = [x for x in tmp[2].split(';') if x != '' and x in targets_All_Filtered[tmp[0]]] fp.close() # # CHEMBL # fp = open('../data/CheckBestTargetSet/TargetSets/CLOUD_ChEMBL_Targets.csv') fp.next() for line in fp: tmp = line.strip().split(',') targets_Chembl[tmp[0]] =[x for x in tmp[2].split(';') if x != ''] targets_Chembl_Filtered[tmp[0]] = [x for x in tmp[2].split(';') if x != '' and x in targets_All_Filtered[tmp[0]]] fp.close() #Make a list with all clouds all_Clouds = targets_All.keys() all_Clouds.sort()
_____no_output_____
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
Calculate the various distance measurements
saved_distances = {} def Check_Drug_Module_Diameter(PPI,targets): ''' Extract the min path between targets (=Diameter) This is always the minimum path between one target and any other target of the same set. Returns Mean of all paths (d_d) as well as paths (min_paths) This function uses only one set hence calulcates the intra drug distance or drug_module diamter ''' filtered_targets = [] for t in targets: if PPI.has_node(t): filtered_targets.append(t) min_paths = [] if len(filtered_targets) > 1: try: for t1 in filtered_targets: min_distances = [] for t2 in filtered_targets: if t1 != t2: #print nx.shortest_path(PPI,t1,t2) if saved_distances.has_key(t1+','+t2): min_distances.append(saved_distances[t1+','+t2]) elif saved_distances.has_key(t2+','+t1): min_distances.append(saved_distances[t2+','+t1]) elif nx.has_path(PPI,t1,t2): dist_path_length = len(nx.shortest_path(PPI,t1,t2))-1 min_distances.append(dist_path_length) saved_distances[t1+','+t2] = dist_path_length min_paths.append(min(min_distances)) d_d = sum(min_paths)/float(len(filtered_targets)) return d_d except: return "None" else: return 0 def Check_Shortest_DistancesBetween(PPI, targets1, targets2): ''' Extract the min path between targets. This is always the minimum path between one target and any other target of the other set. Returns Mean of all paths (d_d) as well as paths (min_paths) This function uses two sets hence calulcates the inter drug distance ''' filtered_targets = [] for t in targets1: if PPI.has_node(t): filtered_targets.append(t) filtered_targets2 = [] for t in targets2: if PPI.has_node(t): filtered_targets2.append(t) min_paths = [] if len(filtered_targets) >= 1 and len(filtered_targets2) >= 1: try: for t1 in filtered_targets: min_distances = [] for t2 in filtered_targets2: # print nx.shortest_path(PPI,t1,t2) if saved_distances.has_key(t1+','+t2): min_distances.append(saved_distances[t1+','+t2]) elif saved_distances.has_key(t2+','+t1): min_distances.append(saved_distances[t2+','+t1]) elif nx.has_path(PPI,t1,t2): dist_path_length = len(nx.shortest_path(PPI,t1,t2))-1 min_distances.append(dist_path_length) saved_distances[t1+','+t2] = dist_path_length if len(min_distances) != 0: min_paths.append(min(min_distances)) return min_paths except: return 'None' else: return 'None' def calculate_ClosestDistance(PPI,targets1, targets2 ): ''' Add information here ''' filtered_targets = [] for t in targets1: if PPI.has_node(t): filtered_targets.append(t) filtered_targets2 = [] for t in targets2: if PPI.has_node(t): filtered_targets2.append(t) distances = [] if len(filtered_targets) > 0 and len(filtered_targets2) > 0: for t1 in filtered_targets: tmp = [] for t2 in filtered_targets2: if saved_distances.has_key(t1+','+t2): tmp.append(saved_distances[t1+','+t2]) elif saved_distances.has_key(t2+','+t1): tmp.append(saved_distances[t2+','+t1]) elif nx.has_path(PPI,t1,t2): dist_path_length = len((nx.shortest_path(PPI, source=t1, target=t2))) - 1 tmp.append(dist_path_length) saved_distances[t1+','+t2] = dist_path_length if len(tmp) != 0: distances.append(min(tmp)) if len(distances) == 0: result = 'None' else: result = np.mean(distances) return result def calculate_MeanDistance(PPI,targets1, targets2 ): ''' Add information here ''' filtered_targets = [] for t in targets1: if PPI.has_node(t): filtered_targets.append(t) filtered_targets2 = [] for t in targets2: if PPI.has_node(t): filtered_targets2.append(t) distances = [] for t1 in filtered_targets: for t2 in filtered_targets2: if saved_distances.has_key(t1+','+t2): 
distances.append(saved_distances[t1+','+t2]) elif saved_distances.has_key(t2+','+t1): distances.append(saved_distances[t2+','+t1]) elif nx.has_path(PPI,t1,t2): dist_path_length = len((nx.shortest_path(PPI, source=t1, target=t2))) - 1 distances.append(dist_path_length) saved_distances[t1+','+t2] = dist_path_length if len(distances) > 0: result = np.mean(distances) else: result = 'None' return result
_____no_output_____
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
Calculate All Distances
dic_target_sets = {'DrugBank':targets_DrugBank, 'PubChem':targets_Pubchem, 'Chembl':targets_Chembl,'DrugBank_Filtered':targets_DrugBank_Filtered, 'PubChem_Filtered':targets_Pubchem_Filtered, 'Chembl_Filtered':targets_Chembl_Filtered, 'All_Filtered':targets_All_Filtered, 'All':targets_All} for key in dic_target_sets: print key #Open corresponding result file fp_out = open('../results/CheckBestTargetSet/'+key+'.csv','w') fp_out.write('Drug1,Drug2,d_A,d_B,d_AB,s_AB,AB_Min,AB_Mean\n') #Go though all pairs for cloud1 in all_Clouds: print cloud1 #Targets of drug A targets1 = dic_target_sets[key][cloud1] #Diameter of drug A d_A = Check_Drug_Module_Diameter(PPI, targets1) for cloud2 in all_Clouds: #only calculate the half matrix if cloud1 < cloud2: #targets of drug B targets2 = dic_target_sets[key][cloud2] #Diameter of drug B d_B = Check_Drug_Module_Diameter(PPI, targets2) #Min distance from A to B distances1 = Check_Shortest_DistancesBetween(PPI, targets1, targets2) #Min distance from B to A distances2 = Check_Shortest_DistancesBetween(PPI, targets2, targets1) if distances1 != "None" and distances2 != 'None': #Dab between_Distance = (sum(distances1)+sum(distances2))/float((len(distances1)+len(distances2))) else: between_Distance = "None" if d_A != "None" and d_B != 'None' and between_Distance != "None": #Sab separation = between_Distance - (d_A+d_B)/2.0 else: separation = 'None' #Create AB_Min min_Distance = calculate_ClosestDistance(PPI, targets1, targets2) #Create AB_Mean mean_Distance = calculate_MeanDistance(PPI, targets1, targets2) #Save results fp_out.write(cloud1+','+cloud2+','+str(d_A)+','+str(d_B)+','+str(between_Distance)+','+str(separation)+','+str(min_Distance)+','+str(mean_Distance)+'\n') fp_out.close()
DrugBank CLOUD001 CLOUD002 CLOUD003 CLOUD004 CLOUD005 CLOUD006 CLOUD007 CLOUD008 CLOUD009 CLOUD010 CLOUD011 CLOUD012 CLOUD013 CLOUD014 CLOUD015 CLOUD016 CLOUD017 CLOUD018 CLOUD019 CLOUD020 CLOUD021 CLOUD022 CLOUD023 CLOUD024 CLOUD025 CLOUD026 CLOUD027 CLOUD028 CLOUD029 CLOUD030 CLOUD031 CLOUD032 CLOUD033 CLOUD034 CLOUD035 CLOUD036 CLOUD037 CLOUD038 CLOUD039 CLOUD040 CLOUD041 CLOUD042 CLOUD043 CLOUD044 CLOUD045 CLOUD046 CLOUD047 CLOUD048 CLOUD049 CLOUD050 CLOUD051 CLOUD052 CLOUD053 CLOUD054 CLOUD055 CLOUD056 CLOUD057 CLOUD058 CLOUD059 CLOUD060 CLOUD061 CLOUD062 CLOUD063 CLOUD064 CLOUD065 CLOUD066 CLOUD067 CLOUD068 CLOUD069 CLOUD070 CLOUD071 CLOUD072 CLOUD073 CLOUD074 CLOUD075 CLOUD076 CLOUD077 CLOUD078 CLOUD079 CLOUD080 CLOUD081 CLOUD082 CLOUD083 CLOUD084 CLOUD085 CLOUD086 CLOUD087 CLOUD088 CLOUD089 CLOUD090 CLOUD091 CLOUD092 CLOUD093 CLOUD094 CLOUD095 CLOUD096 CLOUD097 CLOUD098 CLOUD099 CLOUD100 CLOUD101 CLOUD102 CLOUD103 CLOUD104 CLOUD105 CLOUD106 CLOUD107 CLOUD108 CLOUD109 CLOUD110 CLOUD111 CLOUD112 CLOUD113 CLOUD114 CLOUD115 CLOUD116 CLOUD117 CLOUD118 CLOUD119 CLOUD120 CLOUD121 CLOUD122 CLOUD123 CLOUD124 CLOUD125 CLOUD126 CLOUD127 CLOUD128 CLOUD129 CLOUD130 CLOUD131 CLOUD132 CLOUD133 CLOUD134 CLOUD135 CLOUD136 CLOUD137 CLOUD138 CLOUD139 CLOUD140 CLOUD141 CLOUD142 CLOUD143 CLOUD144 CLOUD145 CLOUD146 CLOUD147 CLOUD148 CLOUD149 CLOUD150 CLOUD151 CLOUD152 CLOUD153 CLOUD154 CLOUD155 CLOUD156 CLOUD157 CLOUD158 CLOUD159 CLOUD160 CLOUD161 CLOUD162 CLOUD163 CLOUD164 CLOUD165 CLOUD166 CLOUD167 CLOUD168 CLOUD169 CLOUD170 CLOUD171 CLOUD172 CLOUD173 CLOUD174 CLOUD175 CLOUD176 CLOUD177 CLOUD178 CLOUD179 CLOUD180 CLOUD181 CLOUD182 CLOUD183 CLOUD184 CLOUD185 CLOUD186 CLOUD187 CLOUD188 CLOUD189 CLOUD190 CLOUD191 CLOUD192 CLOUD193 CLOUD194 CLOUD195 CLOUD196 CLOUD197 CLOUD198 CLOUD199 CLOUD200 CLOUD201 CLOUD202 CLOUD203 CLOUD204 CLOUD205 CLOUD206 CLOUD207 CLOUD208 CLOUD209 CLOUD210 CLOUD211 CLOUD212 CLOUD213 CLOUD214 CLOUD215 CLOUD216 CLOUD217 CLOUD218 CLOUD219 CLOUD220 CLOUD221 CLOUD222 CLOUD223 CLOUD224 CLOUD225 CLOUD226 CLOUD227 CLOUD228 CLOUD229 CLOUD230 CLOUD231 CLOUD232 CLOUD233 CLOUD234 CLOUD235 CLOUD236 CLOUD237 CLOUD238 CLOUD239 CLOUD240 CLOUD241 CLOUD242 CLOUD243 CLOUD244 CLOUD245 CLOUD246 CLOUD247 CLOUD248 CLOUD249 CLOUD250 CLOUD251 CLOUD252 CLOUD253 CLOUD254 CLOUD255 CLOUD256 CLOUD257 CLOUD258 CLOUD259 CLOUD260 CLOUD261 CLOUD262 CLOUD263 CLOUD264 CLOUD265 CLOUD266 CLOUD267 PubChem CLOUD001 CLOUD002 CLOUD003 CLOUD004 CLOUD005 CLOUD006 CLOUD007 CLOUD008 CLOUD009 CLOUD010 CLOUD011 CLOUD012 CLOUD013 CLOUD014 CLOUD015 CLOUD016 CLOUD017 CLOUD018 CLOUD019 CLOUD020 CLOUD021 CLOUD022 CLOUD023 CLOUD024 CLOUD025 CLOUD026 CLOUD027 CLOUD028 CLOUD029 CLOUD030 CLOUD031 CLOUD032 CLOUD033 CLOUD034 CLOUD035 CLOUD036 CLOUD037 CLOUD038 CLOUD039 CLOUD040 CLOUD041 CLOUD042 CLOUD043 CLOUD044 CLOUD045 CLOUD046 CLOUD047 CLOUD048 CLOUD049 CLOUD050 CLOUD051 CLOUD052 CLOUD053 CLOUD054 CLOUD055 CLOUD056 CLOUD057 CLOUD058 CLOUD059 CLOUD060 CLOUD061 CLOUD062 CLOUD063 CLOUD064 CLOUD065 CLOUD066 CLOUD067 CLOUD068 CLOUD069 CLOUD070 CLOUD071 CLOUD072 CLOUD073 CLOUD074 CLOUD075 CLOUD076 CLOUD077 CLOUD078 CLOUD079 CLOUD080 CLOUD081 CLOUD082 CLOUD083 CLOUD084 CLOUD085 CLOUD086 CLOUD087 CLOUD088 CLOUD089 CLOUD090 CLOUD091 CLOUD092 CLOUD093 CLOUD094 CLOUD095 CLOUD096 CLOUD097 CLOUD098 CLOUD099 CLOUD100 CLOUD101 CLOUD102 CLOUD103 CLOUD104 CLOUD105 CLOUD106 CLOUD107 CLOUD108 CLOUD109 CLOUD110 CLOUD111 CLOUD112 CLOUD113 CLOUD114 CLOUD115 CLOUD116 CLOUD117 CLOUD118 CLOUD119 CLOUD120 CLOUD121 CLOUD122 CLOUD123 CLOUD124 CLOUD125 CLOUD126 
CLOUD127 CLOUD128 CLOUD129 CLOUD130 CLOUD131 CLOUD132 CLOUD133 CLOUD134 CLOUD135 CLOUD136 CLOUD137 CLOUD138 CLOUD139 CLOUD140 CLOUD141 CLOUD142 CLOUD143 CLOUD144 CLOUD145 CLOUD146 CLOUD147 CLOUD148 CLOUD149 CLOUD150 CLOUD151 CLOUD152 CLOUD153 CLOUD154 CLOUD155 CLOUD156 CLOUD157 CLOUD158 CLOUD159 CLOUD160 CLOUD161 CLOUD162 CLOUD163 CLOUD164 CLOUD165 CLOUD166 CLOUD167 CLOUD168 CLOUD169 CLOUD170 CLOUD171 CLOUD172 CLOUD173 CLOUD174 CLOUD175 CLOUD176 CLOUD177 CLOUD178 CLOUD179 CLOUD180 CLOUD181 CLOUD182 CLOUD183 CLOUD184 CLOUD185 CLOUD186 CLOUD187 CLOUD188 CLOUD189 CLOUD190 CLOUD191 CLOUD192 CLOUD193 CLOUD194 CLOUD195 CLOUD196 CLOUD197 CLOUD198 CLOUD199 CLOUD200 CLOUD201 CLOUD202 CLOUD203 CLOUD204 CLOUD205 CLOUD206 CLOUD207 CLOUD208 CLOUD209 CLOUD210 CLOUD211 CLOUD212 CLOUD213 CLOUD214 CLOUD215 CLOUD216 CLOUD217 CLOUD218 CLOUD219 CLOUD220 CLOUD221 CLOUD222 CLOUD223 CLOUD224 CLOUD225 CLOUD226 CLOUD227 CLOUD228 CLOUD229 CLOUD230 CLOUD231 CLOUD232 CLOUD233 CLOUD234 CLOUD235 CLOUD236 CLOUD237 CLOUD238 CLOUD239 CLOUD240 CLOUD241 CLOUD242 CLOUD243 CLOUD244 CLOUD245 CLOUD246 CLOUD247 CLOUD248 CLOUD249 CLOUD250 CLOUD251 CLOUD252 CLOUD253 CLOUD254 CLOUD255 CLOUD256 CLOUD257 CLOUD258 CLOUD259 CLOUD260 CLOUD261 CLOUD262 CLOUD263 CLOUD264 CLOUD265 CLOUD266 CLOUD267 Chembl_Filtered CLOUD001 CLOUD002 CLOUD003 CLOUD004 CLOUD005 CLOUD006 CLOUD007 CLOUD008 CLOUD009 CLOUD010 CLOUD011 CLOUD012 CLOUD013 CLOUD014 CLOUD015 CLOUD016 CLOUD017 CLOUD018 CLOUD019 CLOUD020 CLOUD021 CLOUD022 CLOUD023 CLOUD024 CLOUD025 CLOUD026 CLOUD027 CLOUD028 CLOUD029 CLOUD030 CLOUD031 CLOUD032 CLOUD033 CLOUD034 CLOUD035 CLOUD036 CLOUD037 CLOUD038 CLOUD039 CLOUD040 CLOUD041 CLOUD042 CLOUD043 CLOUD044 CLOUD045 CLOUD046 CLOUD047 CLOUD048 CLOUD049 CLOUD050 CLOUD051 CLOUD052 CLOUD053 CLOUD054 CLOUD055 CLOUD056 CLOUD057 CLOUD058 CLOUD059 CLOUD060 CLOUD061 CLOUD062 CLOUD063 CLOUD064 CLOUD065 CLOUD066 CLOUD067 CLOUD068 CLOUD069 CLOUD070 CLOUD071 CLOUD072 CLOUD073 CLOUD074 CLOUD075 CLOUD076 CLOUD077 CLOUD078 CLOUD079 CLOUD080 CLOUD081 CLOUD082 CLOUD083 CLOUD084 CLOUD085 CLOUD086 CLOUD087 CLOUD088 CLOUD089 CLOUD090 CLOUD091 CLOUD092 CLOUD093 CLOUD094 CLOUD095 CLOUD096 CLOUD097 CLOUD098 CLOUD099 CLOUD100 CLOUD101 CLOUD102 CLOUD103 CLOUD104 CLOUD105 CLOUD106 CLOUD107 CLOUD108 CLOUD109 CLOUD110 CLOUD111 CLOUD112 CLOUD113 CLOUD114 CLOUD115 CLOUD116 CLOUD117 CLOUD118 CLOUD119 CLOUD120 CLOUD121 CLOUD122 CLOUD123 CLOUD124 CLOUD125 CLOUD126 CLOUD127 CLOUD128 CLOUD129 CLOUD130 CLOUD131 CLOUD132 CLOUD133 CLOUD134 CLOUD135 CLOUD136 CLOUD137 CLOUD138 CLOUD139 CLOUD140 CLOUD141 CLOUD142 CLOUD143 CLOUD144 CLOUD145 CLOUD146 CLOUD147 CLOUD148 CLOUD149 CLOUD150 CLOUD151 CLOUD152 CLOUD153 CLOUD154 CLOUD155 CLOUD156 CLOUD157 CLOUD158 CLOUD159 CLOUD160 CLOUD161 CLOUD162 CLOUD163 CLOUD164 CLOUD165 CLOUD166 CLOUD167 CLOUD168 CLOUD169 CLOUD170 CLOUD171 CLOUD172 CLOUD173 CLOUD174 CLOUD175 CLOUD176 CLOUD177 CLOUD178 CLOUD179 CLOUD180 CLOUD181 CLOUD182 CLOUD183 CLOUD184 CLOUD185 CLOUD186 CLOUD187 CLOUD188 CLOUD189 CLOUD190 CLOUD191 CLOUD192 CLOUD193 CLOUD194 CLOUD195 CLOUD196 CLOUD197 CLOUD198 CLOUD199 CLOUD200 CLOUD201 CLOUD202 CLOUD203 CLOUD204 CLOUD205 CLOUD206 CLOUD207 CLOUD208 CLOUD209 CLOUD210 CLOUD211 CLOUD212 CLOUD213 CLOUD214 CLOUD215 CLOUD216 CLOUD217 CLOUD218 CLOUD219 CLOUD220 CLOUD221 CLOUD222 CLOUD223 CLOUD224 CLOUD225 CLOUD226 CLOUD227 CLOUD228 CLOUD229 CLOUD230 CLOUD231 CLOUD232 CLOUD233 CLOUD234 CLOUD235 CLOUD236 CLOUD237 CLOUD238 CLOUD239 CLOUD240 CLOUD241 CLOUD242 CLOUD243 CLOUD244 CLOUD245 CLOUD246 CLOUD247 CLOUD248 CLOUD249 CLOUD250 CLOUD251 CLOUD252 
CLOUD253 CLOUD254 CLOUD255 CLOUD256 CLOUD257 CLOUD258 CLOUD259 CLOUD260 CLOUD261 CLOUD262 CLOUD263 CLOUD264 CLOUD265 CLOUD266 CLOUD267 DrugBank_Filtered CLOUD001 CLOUD002 CLOUD003 CLOUD004 CLOUD005 CLOUD006 CLOUD007 CLOUD008 CLOUD009 CLOUD010 CLOUD011 CLOUD012 CLOUD013 CLOUD014 CLOUD015 CLOUD016 CLOUD017 CLOUD018 CLOUD019 CLOUD020 CLOUD021 CLOUD022 CLOUD023 CLOUD024 CLOUD025 CLOUD026 CLOUD027 CLOUD028 CLOUD029 CLOUD030 CLOUD031 CLOUD032 CLOUD033 CLOUD034 CLOUD035 CLOUD036 CLOUD037 CLOUD038 CLOUD039 CLOUD040 CLOUD041 CLOUD042 CLOUD043 CLOUD044 CLOUD045 CLOUD046 CLOUD047 CLOUD048 CLOUD049 CLOUD050 CLOUD051 CLOUD052 CLOUD053 CLOUD054 CLOUD055 CLOUD056 CLOUD057 CLOUD058 CLOUD059 CLOUD060 CLOUD061 CLOUD062 CLOUD063 CLOUD064 CLOUD065 CLOUD066 CLOUD067 CLOUD068 CLOUD069 CLOUD070 CLOUD071 CLOUD072 CLOUD073 CLOUD074 CLOUD075 CLOUD076 CLOUD077 CLOUD078 CLOUD079 CLOUD080 CLOUD081 CLOUD082 CLOUD083 CLOUD084 CLOUD085 CLOUD086 CLOUD087 CLOUD088 CLOUD089 CLOUD090 CLOUD091 CLOUD092 CLOUD093 CLOUD094 CLOUD095 CLOUD096 CLOUD097 CLOUD098 CLOUD099 CLOUD100 CLOUD101 CLOUD102 CLOUD103 CLOUD104 CLOUD105 CLOUD106 CLOUD107 CLOUD108 CLOUD109 CLOUD110 CLOUD111 CLOUD112 CLOUD113 CLOUD114 CLOUD115 CLOUD116 CLOUD117 CLOUD118 CLOUD119 CLOUD120 CLOUD121 CLOUD122 CLOUD123 CLOUD124 CLOUD125 CLOUD126 CLOUD127 CLOUD128 CLOUD129 CLOUD130 CLOUD131 CLOUD132 CLOUD133 CLOUD134 CLOUD135 CLOUD136 CLOUD137 CLOUD138 CLOUD139 CLOUD140 CLOUD141 CLOUD142 CLOUD143 CLOUD144 CLOUD145 CLOUD146 CLOUD147 CLOUD148 CLOUD149 CLOUD150 CLOUD151 CLOUD152 CLOUD153 CLOUD154 CLOUD155 CLOUD156 CLOUD157 CLOUD158 CLOUD159 CLOUD160 CLOUD161 CLOUD162 CLOUD163 CLOUD164 CLOUD165 CLOUD166 CLOUD167 CLOUD168 CLOUD169 CLOUD170 CLOUD171 CLOUD172 CLOUD173 CLOUD174 CLOUD175 CLOUD176 CLOUD177 CLOUD178 CLOUD179 CLOUD180 CLOUD181 CLOUD182 CLOUD183 CLOUD184 CLOUD185 CLOUD186 CLOUD187 CLOUD188 CLOUD189 CLOUD190 CLOUD191 CLOUD192 CLOUD193 CLOUD194 CLOUD195 CLOUD196 CLOUD197 CLOUD198 CLOUD199 CLOUD200 CLOUD201 CLOUD202 CLOUD203 CLOUD204 CLOUD205 CLOUD206 CLOUD207 CLOUD208 CLOUD209 CLOUD210 CLOUD211 CLOUD212 CLOUD213 CLOUD214 CLOUD215 CLOUD216 CLOUD217 CLOUD218 CLOUD219 CLOUD220 CLOUD221 CLOUD222 CLOUD223 CLOUD224 CLOUD225 CLOUD226 CLOUD227 CLOUD228 CLOUD229 CLOUD230 CLOUD231 CLOUD232 CLOUD233 CLOUD234 CLOUD235 CLOUD236 CLOUD237 CLOUD238 CLOUD239 CLOUD240 CLOUD241 CLOUD242 CLOUD243 CLOUD244 CLOUD245 CLOUD246 CLOUD247 CLOUD248 CLOUD249 CLOUD250 CLOUD251 CLOUD252 CLOUD253 CLOUD254 CLOUD255 CLOUD256 CLOUD257 CLOUD258 CLOUD259 CLOUD260 CLOUD261 CLOUD262 CLOUD263 CLOUD264 CLOUD265 CLOUD266 CLOUD267 Chembl CLOUD001 CLOUD002 CLOUD003 CLOUD004 CLOUD005 CLOUD006 CLOUD007 CLOUD008 CLOUD009 CLOUD010 CLOUD011 CLOUD012 CLOUD013 CLOUD014 CLOUD015 CLOUD016 CLOUD017 CLOUD018 CLOUD019 CLOUD020 CLOUD021 CLOUD022 CLOUD023 CLOUD024 CLOUD025 CLOUD026 CLOUD027 CLOUD028 CLOUD029 CLOUD030 CLOUD031 CLOUD032 CLOUD033 CLOUD034 CLOUD035 CLOUD036 CLOUD037 CLOUD038 CLOUD039 CLOUD040 CLOUD041 CLOUD042 CLOUD043 CLOUD044 CLOUD045 CLOUD046 CLOUD047 CLOUD048 CLOUD049 CLOUD050 CLOUD051 CLOUD052 CLOUD053 CLOUD054 CLOUD055 CLOUD056 CLOUD057 CLOUD058 CLOUD059 CLOUD060 CLOUD061 CLOUD062 CLOUD063 CLOUD064 CLOUD065 CLOUD066 CLOUD067 CLOUD068 CLOUD069 CLOUD070 CLOUD071 CLOUD072 CLOUD073 CLOUD074 CLOUD075 CLOUD076 CLOUD077 CLOUD078 CLOUD079 CLOUD080 CLOUD081 CLOUD082 CLOUD083 CLOUD084 CLOUD085 CLOUD086 CLOUD087 CLOUD088 CLOUD089 CLOUD090 CLOUD091 CLOUD092 CLOUD093 CLOUD094 CLOUD095 CLOUD096 CLOUD097 CLOUD098 CLOUD099 CLOUD100 CLOUD101 CLOUD102 CLOUD103 CLOUD104 CLOUD105 CLOUD106 CLOUD107 CLOUD108 CLOUD109 CLOUD110 
CLOUD111 CLOUD112 CLOUD113 CLOUD114 CLOUD115 CLOUD116 CLOUD117 CLOUD118 CLOUD119 CLOUD120 CLOUD121 CLOUD122 CLOUD123 CLOUD124 CLOUD125 CLOUD126 CLOUD127 CLOUD128 CLOUD129 CLOUD130 CLOUD131 CLOUD132 CLOUD133 CLOUD134 CLOUD135 CLOUD136 CLOUD137 CLOUD138 CLOUD139 CLOUD140 CLOUD141 CLOUD142 CLOUD143 CLOUD144 CLOUD145 CLOUD146 CLOUD147 CLOUD148 CLOUD149 CLOUD150 CLOUD151 CLOUD152 CLOUD153 CLOUD154 CLOUD155 CLOUD156 CLOUD157 CLOUD158 CLOUD159 CLOUD160 CLOUD161 CLOUD162 CLOUD163 CLOUD164 CLOUD165 CLOUD166 CLOUD167 CLOUD168 CLOUD169 CLOUD170 CLOUD171 CLOUD172 CLOUD173 CLOUD174 CLOUD175 CLOUD176 CLOUD177 CLOUD178 CLOUD179 CLOUD180 CLOUD181 CLOUD182 CLOUD183 CLOUD184 CLOUD185 CLOUD186 CLOUD187 CLOUD188 CLOUD189 CLOUD190 CLOUD191 CLOUD192 CLOUD193 CLOUD194 CLOUD195 CLOUD196 CLOUD197 CLOUD198 CLOUD199 CLOUD200 CLOUD201 CLOUD202 CLOUD203 CLOUD204 CLOUD205 CLOUD206 CLOUD207 CLOUD208 CLOUD209 CLOUD210 CLOUD211 CLOUD212 CLOUD213 CLOUD214 CLOUD215 CLOUD216 CLOUD217 CLOUD218 CLOUD219 CLOUD220 CLOUD221 CLOUD222 CLOUD223 CLOUD224 CLOUD225 CLOUD226 CLOUD227 CLOUD228 CLOUD229 CLOUD230 CLOUD231 CLOUD232 CLOUD233 CLOUD234 CLOUD235 CLOUD236 CLOUD237 CLOUD238 CLOUD239 CLOUD240 CLOUD241 CLOUD242 CLOUD243 CLOUD244 CLOUD245 CLOUD246 CLOUD247 CLOUD248 CLOUD249 CLOUD250 CLOUD251 CLOUD252 CLOUD253 CLOUD254 CLOUD255 CLOUD256 CLOUD257 CLOUD258 CLOUD259 CLOUD260 CLOUD261 CLOUD262 CLOUD263 CLOUD264 CLOUD265 CLOUD266 CLOUD267 PubChem_Filtered CLOUD001 CLOUD002 CLOUD003 CLOUD004 CLOUD005 CLOUD006 CLOUD007 CLOUD008 CLOUD009 CLOUD010 CLOUD011 CLOUD012 CLOUD013 CLOUD014 CLOUD015 CLOUD016 CLOUD017 CLOUD018 CLOUD019 CLOUD020 CLOUD021 CLOUD022 CLOUD023 CLOUD024 CLOUD025 CLOUD026 CLOUD027 CLOUD028 CLOUD029 CLOUD030 CLOUD031 CLOUD032 CLOUD033 CLOUD034 CLOUD035 CLOUD036 CLOUD037 CLOUD038 CLOUD039 CLOUD040 CLOUD041 CLOUD042 CLOUD043 CLOUD044 CLOUD045 CLOUD046 CLOUD047 CLOUD048 CLOUD049 CLOUD050 CLOUD051 CLOUD052 CLOUD053 CLOUD054 CLOUD055 CLOUD056 CLOUD057 CLOUD058 CLOUD059 CLOUD060 CLOUD061 CLOUD062 CLOUD063 CLOUD064 CLOUD065 CLOUD066 CLOUD067 CLOUD068 CLOUD069 CLOUD070 CLOUD071 CLOUD072 CLOUD073 CLOUD074 CLOUD075 CLOUD076 CLOUD077 CLOUD078 CLOUD079 CLOUD080 CLOUD081 CLOUD082 CLOUD083 CLOUD084 CLOUD085 CLOUD086 CLOUD087 CLOUD088 CLOUD089 CLOUD090 CLOUD091 CLOUD092 CLOUD093 CLOUD094 CLOUD095 CLOUD096 CLOUD097 CLOUD098 CLOUD099 CLOUD100 CLOUD101 CLOUD102 CLOUD103 CLOUD104 CLOUD105 CLOUD106 CLOUD107 CLOUD108 CLOUD109 CLOUD110 CLOUD111 CLOUD112 CLOUD113 CLOUD114 CLOUD115 CLOUD116 CLOUD117 CLOUD118 CLOUD119 CLOUD120 CLOUD121 CLOUD122 CLOUD123 CLOUD124 CLOUD125 CLOUD126 CLOUD127 CLOUD128 CLOUD129 CLOUD130 CLOUD131 CLOUD132 CLOUD133 CLOUD134 CLOUD135 CLOUD136 CLOUD137 CLOUD138 CLOUD139 CLOUD140 CLOUD141 CLOUD142 CLOUD143 CLOUD144 CLOUD145 CLOUD146 CLOUD147 CLOUD148 CLOUD149 CLOUD150 CLOUD151 CLOUD152 CLOUD153 CLOUD154 CLOUD155 CLOUD156 CLOUD157 CLOUD158 CLOUD159 CLOUD160 CLOUD161 CLOUD162 CLOUD163 CLOUD164 CLOUD165 CLOUD166 CLOUD167 CLOUD168 CLOUD169 CLOUD170 CLOUD171 CLOUD172 CLOUD173 CLOUD174 CLOUD175 CLOUD176 CLOUD177 CLOUD178 CLOUD179 CLOUD180 CLOUD181 CLOUD182 CLOUD183 CLOUD184 CLOUD185 CLOUD186 CLOUD187 CLOUD188 CLOUD189 CLOUD190 CLOUD191 CLOUD192 CLOUD193 CLOUD194 CLOUD195 CLOUD196 CLOUD197 CLOUD198 CLOUD199 CLOUD200 CLOUD201 CLOUD202 CLOUD203 CLOUD204 CLOUD205 CLOUD206 CLOUD207 CLOUD208 CLOUD209 CLOUD210 CLOUD211 CLOUD212 CLOUD213 CLOUD214 CLOUD215 CLOUD216 CLOUD217 CLOUD218 CLOUD219 CLOUD220 CLOUD221 CLOUD222 CLOUD223 CLOUD224 CLOUD225 CLOUD226 CLOUD227 CLOUD228 CLOUD229 CLOUD230 CLOUD231 CLOUD232 CLOUD233 CLOUD234 CLOUD235 CLOUD236 
CLOUD237 CLOUD238 CLOUD239 CLOUD240 CLOUD241 CLOUD242 CLOUD243 CLOUD244 CLOUD245 CLOUD246 CLOUD247 CLOUD248 CLOUD249 CLOUD250 CLOUD251 CLOUD252 CLOUD253 CLOUD254 CLOUD255 CLOUD256 CLOUD257 CLOUD258 CLOUD259 CLOUD260 CLOUD261 CLOUD262 CLOUD263 CLOUD264 CLOUD265 CLOUD266 CLOUD267 All_Filtered CLOUD001 CLOUD002 CLOUD003 CLOUD004 CLOUD005 CLOUD006 CLOUD007 CLOUD008 CLOUD009 CLOUD010 CLOUD011 CLOUD012 CLOUD013 CLOUD014 CLOUD015 CLOUD016 CLOUD017 CLOUD018 CLOUD019 CLOUD020 CLOUD021 CLOUD022 CLOUD023 CLOUD024 CLOUD025 CLOUD026 CLOUD027 CLOUD028 CLOUD029 CLOUD030 CLOUD031 CLOUD032 CLOUD033 CLOUD034 CLOUD035 CLOUD036 CLOUD037 CLOUD038 CLOUD039 CLOUD040 CLOUD041 CLOUD042 CLOUD043 CLOUD044 CLOUD045 CLOUD046 CLOUD047 CLOUD048 CLOUD049 CLOUD050 CLOUD051 CLOUD052 CLOUD053 CLOUD054 CLOUD055 CLOUD056 CLOUD057 CLOUD058 CLOUD059 CLOUD060 CLOUD061 CLOUD062 CLOUD063 CLOUD064 CLOUD065 CLOUD066 CLOUD067 CLOUD068 CLOUD069 CLOUD070 CLOUD071 CLOUD072 CLOUD073 CLOUD074 CLOUD075 CLOUD076 CLOUD077 CLOUD078 CLOUD079 CLOUD080 CLOUD081 CLOUD082 CLOUD083 CLOUD084 CLOUD085 CLOUD086 CLOUD087 CLOUD088 CLOUD089 CLOUD090 CLOUD091 CLOUD092 CLOUD093 CLOUD094 CLOUD095 CLOUD096 CLOUD097 CLOUD098 CLOUD099 CLOUD100 CLOUD101 CLOUD102 CLOUD103 CLOUD104 CLOUD105 CLOUD106 CLOUD107 CLOUD108 CLOUD109 CLOUD110 CLOUD111 CLOUD112 CLOUD113 CLOUD114 CLOUD115 CLOUD116 CLOUD117 CLOUD118 CLOUD119 CLOUD120 CLOUD121 CLOUD122 CLOUD123 CLOUD124 CLOUD125 CLOUD126 CLOUD127 CLOUD128 CLOUD129 CLOUD130 CLOUD131 CLOUD132 CLOUD133 CLOUD134 CLOUD135 CLOUD136 CLOUD137 CLOUD138 CLOUD139 CLOUD140 CLOUD141 CLOUD142 CLOUD143 CLOUD144 CLOUD145 CLOUD146 CLOUD147 CLOUD148 CLOUD149 CLOUD150 CLOUD151 CLOUD152 CLOUD153 CLOUD154 CLOUD155 CLOUD156 CLOUD157 CLOUD158 CLOUD159 CLOUD160 CLOUD161 CLOUD162 CLOUD163 CLOUD164 CLOUD165 CLOUD166 CLOUD167 CLOUD168 CLOUD169 CLOUD170 CLOUD171 CLOUD172 CLOUD173 CLOUD174 CLOUD175 CLOUD176 CLOUD177 CLOUD178 CLOUD179 CLOUD180 CLOUD181 CLOUD182 CLOUD183 CLOUD184 CLOUD185 CLOUD186 CLOUD187 CLOUD188 CLOUD189 CLOUD190 CLOUD191 CLOUD192 CLOUD193 CLOUD194 CLOUD195 CLOUD196 CLOUD197 CLOUD198 CLOUD199 CLOUD200 CLOUD201 CLOUD202 CLOUD203 CLOUD204 CLOUD205 CLOUD206 CLOUD207 CLOUD208 CLOUD209 CLOUD210 CLOUD211 CLOUD212 CLOUD213 CLOUD214 CLOUD215 CLOUD216 CLOUD217 CLOUD218 CLOUD219 CLOUD220 CLOUD221 CLOUD222 CLOUD223 CLOUD224 CLOUD225 CLOUD226 CLOUD227 CLOUD228 CLOUD229 CLOUD230 CLOUD231 CLOUD232 CLOUD233 CLOUD234 CLOUD235 CLOUD236 CLOUD237 CLOUD238 CLOUD239 CLOUD240 CLOUD241 CLOUD242 CLOUD243 CLOUD244 CLOUD245 CLOUD246 CLOUD247 CLOUD248 CLOUD249 CLOUD250 CLOUD251 CLOUD252 CLOUD253 CLOUD254 CLOUD255 CLOUD256 CLOUD257 CLOUD258 CLOUD259 CLOUD260 CLOUD261 CLOUD262 CLOUD263 CLOUD264 CLOUD265 CLOUD266 CLOUD267 All CLOUD001 CLOUD002 CLOUD003 CLOUD004 CLOUD005 CLOUD006 CLOUD007 CLOUD008 CLOUD009 CLOUD010 CLOUD011 CLOUD012 CLOUD013 CLOUD014 CLOUD015 CLOUD016 CLOUD017 CLOUD018 CLOUD019 CLOUD020 CLOUD021 CLOUD022 CLOUD023 CLOUD024 CLOUD025 CLOUD026 CLOUD027 CLOUD028 CLOUD029 CLOUD030 CLOUD031 CLOUD032 CLOUD033 CLOUD034 CLOUD035 CLOUD036 CLOUD037 CLOUD038 CLOUD039 CLOUD040 CLOUD041 CLOUD042 CLOUD043 CLOUD044 CLOUD045 CLOUD046 CLOUD047 CLOUD048 CLOUD049 CLOUD050 CLOUD051 CLOUD052 CLOUD053 CLOUD054 CLOUD055 CLOUD056 CLOUD057 CLOUD058 CLOUD059 CLOUD060 CLOUD061 CLOUD062 CLOUD063 CLOUD064 CLOUD065 CLOUD066 CLOUD067 CLOUD068 CLOUD069 CLOUD070 CLOUD071 CLOUD072 CLOUD073 CLOUD074 CLOUD075 CLOUD076 CLOUD077 CLOUD078 CLOUD079 CLOUD080 CLOUD081 CLOUD082 CLOUD083 CLOUD084 CLOUD085 CLOUD086 CLOUD087 CLOUD088 CLOUD089 CLOUD090 CLOUD091 CLOUD092 CLOUD093 CLOUD094 CLOUD095 
CLOUD096 CLOUD097 CLOUD098 CLOUD099 CLOUD100 CLOUD101 CLOUD102 CLOUD103 CLOUD104 CLOUD105 CLOUD106 CLOUD107 CLOUD108 CLOUD109 CLOUD110 CLOUD111 CLOUD112 CLOUD113 CLOUD114 CLOUD115 CLOUD116 CLOUD117 CLOUD118 CLOUD119 CLOUD120 CLOUD121 CLOUD122 CLOUD123 CLOUD124 CLOUD125 CLOUD126 CLOUD127 CLOUD128 CLOUD129 CLOUD130 CLOUD131 CLOUD132 CLOUD133 CLOUD134 CLOUD135 CLOUD136 CLOUD137 CLOUD138 CLOUD139 CLOUD140 CLOUD141 CLOUD142 CLOUD143 CLOUD144 CLOUD145 CLOUD146 CLOUD147 CLOUD148 CLOUD149 CLOUD150 CLOUD151 CLOUD152 CLOUD153 CLOUD154 CLOUD155 CLOUD156 CLOUD157 CLOUD158 CLOUD159 CLOUD160 CLOUD161 CLOUD162 CLOUD163 CLOUD164 CLOUD165 CLOUD166 CLOUD167 CLOUD168 CLOUD169 CLOUD170 CLOUD171 CLOUD172 CLOUD173 CLOUD174 CLOUD175 CLOUD176 CLOUD177 CLOUD178 CLOUD179 CLOUD180 CLOUD181 CLOUD182 CLOUD183 CLOUD184 CLOUD185 CLOUD186 CLOUD187 CLOUD188 CLOUD189 CLOUD190 CLOUD191 CLOUD192 CLOUD193 CLOUD194 CLOUD195 CLOUD196 CLOUD197 CLOUD198 CLOUD199 CLOUD200 CLOUD201 CLOUD202 CLOUD203 CLOUD204 CLOUD205 CLOUD206 CLOUD207 CLOUD208 CLOUD209 CLOUD210 CLOUD211 CLOUD212 CLOUD213 CLOUD214 CLOUD215 CLOUD216 CLOUD217 CLOUD218 CLOUD219 CLOUD220 CLOUD221 CLOUD222 CLOUD223 CLOUD224 CLOUD225 CLOUD226 CLOUD227 CLOUD228 CLOUD229 CLOUD230 CLOUD231 CLOUD232 CLOUD233 CLOUD234 CLOUD235 CLOUD236 CLOUD237 CLOUD238 CLOUD239 CLOUD240 CLOUD241 CLOUD242 CLOUD243 CLOUD244 CLOUD245 CLOUD246 CLOUD247 CLOUD248 CLOUD249 CLOUD250 CLOUD251 CLOUD252 CLOUD253 CLOUD254 CLOUD255 CLOUD256 CLOUD257 CLOUD258 CLOUD259 CLOUD260 CLOUD261 CLOUD262 CLOUD263 CLOUD264 CLOUD265 CLOUD266 CLOUD267
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
Calculate the different metrics for the different target sets

Target sets: All, Chembl, PubChem, DrugBank (all associations, plus the targets-only filtered versions). Metrics: S_AB, D_AB, Min_AB and Mean_AB.
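To make these metrics concrete before running the full calculation below, here is a minimal, self-contained sketch on a made-up toy graph (the nodes and target sets are invented for illustration and are not part of this analysis). It computes the drug-module diameters d_A and d_B, the between-module distance d_AB, and the separation s_AB = d_AB - (d_A + d_B)/2:

```python
import networkx as nx

# Toy "interactome" with invented nodes (illustration only)
G = nx.Graph([('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'e'), ('b', 'e')])
targets_A = ['a', 'c']  # hypothetical targets of drug A
targets_B = ['d', 'e']  # hypothetical targets of drug B

def module_diameter(G, targets):
    # mean, over each target, of the distance to the closest other target of the same drug
    closest = [min(nx.shortest_path_length(G, t1, t2) for t2 in targets if t2 != t1)
               for t1 in targets]
    return sum(closest) / float(len(closest))

def between_distance(G, A, B):
    # mean, over all targets of both drugs, of the distance to the closest target of the other drug
    closest = [min(nx.shortest_path_length(G, a, b) for b in B) for a in A]
    closest += [min(nx.shortest_path_length(G, b, a) for a in A) for b in B]
    return sum(closest) / float(len(closest))

d_A = module_diameter(G, targets_A)
d_B = module_diameter(G, targets_B)
d_AB = between_distance(G, targets_A, targets_B)
s_AB = d_AB - (d_A + d_B) / 2.0
print('d_A=%s d_B=%s d_AB=%s s_AB=%s' % (d_A, d_B, d_AB, s_AB))
```

A negative s_AB indicates overlapping target modules, while a positive value indicates topologically separated modules - the same convention as in the result files written in the previous section.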
#network = nx.read_gml('../data/Check_Features/DrugPairFeature_Files/DPI_iS3_pS7_abMAD2_gP100/Networks/DPI_Network_CoreToPeriphery.gml') targetLists = [f for f in os.listdir('../results/CheckBestTargetSet/') if os.path.isfile(os.path.join('../results/CheckBestTargetSet/', f)) and '.csv' in f] distance_metric = {'D_AB':4, 'S_AB':5, 'Min_AB':6, 'Mean_AB':7} interaction_colors = {'Increasing':'#ACD900','Decreasing':'#F70020','Emergent':'#0096FF','All':'grey'} network_parts = ['Complete','Core','CoreToPeriphery','Periphery'] for part in network_parts: print part network = nx.read_gml('../data/CheckBestTargetSet/DrugPairFeature_Files/DPI_iS3_pS7_abMAD2_gP100/Networks/DPI_Network_'+part+'.gml') #create the directory if not existing directory = os.path.dirname('../results/CheckBestTargetSet/Results/'+part +'/') if not os.path.exists(directory): os.makedirs(directory) fp_out = open('../results/CheckBestTargetSet/Results/'+part+'/StatisticResult.csv','w') fp_out.write('Metric,TargetSet,Type1,Type2,Foldchange,Pvalue,IsSignificant\n') #Go through all metrics and target sets print 'Calculate Metrics:' for metric in distance_metric.keys(): for targetList in targetLists: #check if S_AB (as only sab has negative values) if metric != 'S_AB': distance_cutoffs = [5,4,3,2,1,0] else: distance_cutoffs = [3.5,2.5,1.5,0.5,-0.5,-1.5] #remove .csv from file name targetName = targetList.split('.')[0] #create the directory if not existing directory = os.path.dirname('../results/CheckBestTargetSet/Results/'+part +'/'+ targetName + '/') if not os.path.exists(directory): os.makedirs(directory) #create a dictionary with the respective distance for a given drug pair #all values contains all durg pair values (needed for normalization later) all_values = [] fp = open('../results/CheckBestTargetSet/' + targetList,'r') fp.next() drugpairs = {} for line in fp: tmp = line.strip().split(',') value = tmp[distance_metric[metric]] #print tmp drugpairs[tmp[0]+','+tmp[1]] = value drugpairs[tmp[1]+','+tmp[0]] = value if value != "None": all_values.append(float(value)) #Split info into the various interaction types interaction_types = ['Increasing','Decreasing','Emergent','All'] interaction_type_results = {} for it in interaction_types: #binarize the data into the correspodning bins; normalize is used to later take care of the fact that most interaction have a distance around 2 results = {} to_normalize = {} interaction_type_results[it] = [] #Go through the cutoffs for i in range(1, len(distance_cutoffs)): #this will contain the actual results; integer later number of interaction within this distance results[distance_cutoffs[i]] = 0 #get the corresponding results to_normalize[distance_cutoffs[i]] = len([x for x in all_values if x < distance_cutoffs[i-1] and x >= distance_cutoffs[i]]) #Go though all edges of the certain network and add to bin if existing for edge in network.edges(): for key in network[edge[0]][edge[1]]: if network[edge[0]][edge[1]][key]['Type'] != it and it != 'All' : continue value = drugpairs.get(edge[0]+','+edge[1],'None') if value != "None": value = float(value) interaction_type_results[it].append(value) if value >= distance_cutoffs[i] and value < distance_cutoffs[i-1]: results[distance_cutoffs[i]] += 1 ''' PLOT OUTPUT ''' sorted_distance_cutOffs = list(distance_cutoffs) sorted_distance_cutOffs.sort() #PLOT THE INDIVDIUAL BAR PLOT WITH X-AXIS = PPI DISTANCE AND Y-AXIS FREQUENCY plt.bar([i for i in sorted_distance_cutOffs[:-1] if to_normalize[i] != 0],[results[i]/float(to_normalize[i]) for i in 
sorted_distance_cutOffs[:-1] if to_normalize[i] != 0], color=interaction_colors[it]) plt.xlabel('PPI ' + metric) plt.ylabel('Percent of all drug pairs within this distance') plt.savefig('../results/CheckBestTargetSet/Results/'+part+'/' + targetName + '/'+metric+'_'+it+'_PPI_Distances.pdf', bbox_inches = "tight") plt.close() #plt.show() #quick bug solution (only happens once in the periphery part and not important) if len(interaction_type_results['Decreasing']) == 0: interaction_type_results['Decreasing'].append(2) #PLOT A BOX PLOT WITH THE VARIOUS INTERACTION TYPES AS DIFFERENCE bplot = sns.boxplot(data=[all_values,interaction_type_results['All'],interaction_type_results['Increasing'],interaction_type_results['Decreasing'],interaction_type_results['Emergent']],orient='h', showfliers = False) interaction_types_2 = ['All','Interacting','Increasing','Decreasing','Emergent'] interaction_colors_2 = ['grey','#F8B301','#ACD900','#F70020','#0096FF'] color_dict = dict(zip(interaction_types_2, interaction_colors_2)) for i in range(0,5): mybox = bplot.artists[i] mybox.set_facecolor(color_dict[interaction_types_2[i]]) interaction_type_results['AllPairs'] = all_values for key1 in interaction_type_results: for key2 in interaction_type_results: if key1 > key2: pval = mu(interaction_type_results[key2],interaction_type_results[key1])[1] is_significant = pval < 0.05 foldchange = np.mean(interaction_type_results[key2])/np.mean(interaction_type_results[key1]) fp_out.write(metric+','+targetName+','+key1+',' +key2 +','+str(foldchange)+',' + str(pval)+','+str(is_significant) + '\n') plt.yticks(range(0,5),['All','Interacting','Increasing','Decreasing','Emergent']) plt.ylabel('Interaction Type') plt.tick_params(axis = 'y', which = 'major', labelsize = 5) plt.xlabel(metric) plt.savefig('../results/CheckBestTargetSet/Results/'+part +'/'+ targetName + '/'+metric+'_InteractionDifference.pdf', bbox_inches = "tight") plt.close() fp_out.close() print 'Done'
Complete Calculate Metrics: Done Core Calculate Metrics: Done CoreToPeriphery Calculate Metrics: Done Periphery Calculate Metrics: Done
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
Analyse the result file
interaction_types = ['Increasing','Decreasing','Emergent'] network_parts = ['Complete','Core','CoreToPeriphery','Periphery'] for part in network_parts: print part results = {} fp = open('../results/CheckBestTargetSet/Results/'+part+'/StatisticResult.csv','r') fp.next() for line in fp: tmp = line.strip().split(',') if results.has_key(tmp[0]) == False: results[tmp[0]] = {} if results[tmp[0]].has_key(tmp[1]) == False: results[tmp[0]][tmp[1]] = 0 if tmp[2] in interaction_types and tmp[3] in interaction_types: if tmp[6] == 'True': results[tmp[0]][tmp[1]] += 1 #print tmp for metric in results: print '\t' + metric for targetSet in results[metric]: if results[metric][targetSet] == 3: print '\t\t' + targetSet
Complete Min_AB DrugBank_Filtered Mean_AB PubChem_Filtered D_AB Chembl_Filtered Chembl PubChem_Filtered S_AB PubChem Core Min_AB DrugBank Mean_AB Chembl_Filtered Chembl D_AB S_AB CoreToPeriphery Min_AB All_Filtered All DrugBank PubChem Chembl_Filtered Chembl PubChem_Filtered Mean_AB All_Filtered All PubChem Chembl_Filtered Chembl PubChem_Filtered D_AB All_Filtered All PubChem Chembl_Filtered DrugBank_Filtered Chembl PubChem_Filtered S_AB Chembl_Filtered Chembl Periphery Min_AB Mean_AB All_Filtered DrugBank PubChem DrugBank_Filtered Chembl D_AB All_Filtered S_AB DrugBank PubChem PubChem_Filtered
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
Plot S_AB distribution
import seaborn as sns targetLists = [f for f in os.listdir('../results/Check_Features/CheckBestTargetSet/') if os.path.isfile(os.path.join('../results/Check_Features/CheckBestTargetSet/', f)) and '.csv' in f] distance_metric = {'D_AB':4, 'S_AB':5, 'Min_AB':6, 'Mean_AB':7} metric = 'S_AB' for targetList in targetLists: fp = open('../results/Check_Features/CheckBestTargetSet/' + targetList,'r') fp.next() all_values = [] for line in fp: tmp = line.strip().split(',') value = tmp[distance_metric[metric]] if value != "None": all_values.append(float(value)) print np.mean(all_values) plt.title(targetList.split('.')[0]) #plt.yscale('log') # plt.fill([0, 0, max(all_values), max(all_values)], [0, 0.625, 0.625, 0], color='lightgrey', alpha=0.4) plt.hist(all_values,bins=12, density= True, color='#40B9D4',edgecolor="#40B9D4", linewidth=0.0, alpha=0.5) plt.xlabel('S_AB') plt.ylabel('Frequency') #plt.ylim([0.00000001,1]) #plt.yscale('log', nonposy='clip') #plt.xscale('log') #plt.show() plt.yscale('log') plt.savefig('../results/Check_Features/CheckBestTargetSet/Results/S_AB_Distributions/'+targetList.split('.')[0]+'.pdf', format = 'pdf', dpi=800) plt.close()
0.6722009834273841 1.3609922810737909 0.6663973106768771 1.4210949885061646 0.515554244097155 0.6616415751265295 0.2801638381785182 1.4125882193782637
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
Specutils Analysis

![Specutils: An Astropy Package for Spectroscopy](data/specutils_logo.png)

This notebook provides an overview of some of the spectral analysis capabilities of the Specutils Astropy coordinated package. While this notebook is intended as an interactive introduction to specutils at the time of its writing, the canonical source of information for the package is the latest version's documentation: https://specutils.readthedocs.io

Note that the below assumes you have knowledge of the material in the [overview notebook](Specutils_overview.ipynb). If this is not the case, you may wish to review that notebook before proceeding here.

Imports

We start with some fundamental imports for working with specutils and simple visualization of spectra:
import numpy as np import astropy.units as u import specutils from specutils import Spectrum1D, SpectralRegion specutils.__version__ # for plotting: %matplotlib inline import matplotlib.pyplot as plt # for showing quantity units on axes automatically: from astropy.visualization import quantity_support quantity_support();
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Sample Spectrum and SNR

For use below, we also load the sample SDSS spectrum downloaded in the [overview notebook](Specutils_overview.ipynb). See that notebook if you have not yet downloaded this spectrum.
sdss_spec = Spectrum1D.read('data/sdss_spectrum.fits', format='SDSS-III/IV spec') plt.step(sdss_spec.wavelength, sdss_spec.flux);
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Because this example file already has uncertainties, it is straightforward to use one of the fundamental quantifications of a spectrum: the whole-spectrum signal-to-noise ratio:
from specutils import analysis analysis.snr(sdss_spec)
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Spectral Regions

Most analyses of a spectrum require specifying a particular part of the spectrum - e.g., a spectral line. Because such regions may have value independent of a particular spectrum, they are represented as objects distinct from a given spectrum object. Below we outline a few ways such regions are specified.
ha_region = SpectralRegion((6563-50)*u.AA, (6563+50)*u.AA) ha_region
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Regions can also be raw pixel values (although of course this is more applicable to a specific spectrum):
pixel_region = SpectralRegion(2100*u.pixel, 2600*u.pixel) pixel_region
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Additionally, *multiple* regions can be in the same `SpectralRegion` object. This is useful for e.g. measuring multiple spectral features in one call:
HI_wings_region = SpectralRegion([(1.4*u.GHz, 1.41*u.GHz), (1.43*u.GHz, 1.44*u.GHz)]) HI_wings_region
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
While regions are useful for a variety of analysis steps, fundamentally they can be used to extract sub-spectra from larger spectra:
from specutils.manipulation import extract_region subspec = extract_region(sdss_spec, pixel_region) plt.step(subspec.wavelength, subspec.flux) analysis.snr(subspec)
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Line Measurements

While line-fitting (detailed more below) is a good choice for high signal-to-noise spectra or when detailed kinematics are desired, more empirical measures are often used in the literature for noisier spectra or just simpler analysis procedures. Specutils provides a set of functions to provide these sorts of measurements, as well as similar summary statistics about spectral regions. The [analysis part of the specutils documentation](https://specutils.readthedocs.io/en/latest/analysis.html) provides a full list and detailed examples of these, but here we demonstrate some example cases.

Note: these line measurements generally assume your spectrum is continuum-subtracted or continuum-normalized. Some spectral pipelines do this for you, but often this is not the case. For our examples here we will do this step "by-eye", but for a more detailed discussion of continuum modeling, see the next section.

Based on the above plot we estimate a continuum level for the area of the SDSS spectrum around the H-alpha emission line, and use basic math to construct the continuum-normalized and continuum-subtracted spectra.
# estimate a reasonable continuum-level estimate for the h-alpha area of the spectrum sdss_continuum = 205*subspec.flux.unit sdss_halpha_contsub = extract_region(sdss_spec, ha_region) - sdss_continuum plt.axhline(0, c='k', ls=':') plt.step(sdss_halpha_contsub.wavelength, sdss_halpha_contsub.flux) plt.ylim(-50, 50)
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
With the continuum level identified, we can now make some measurements of the spectral lines that are apparent by eye - in particular we will focus on the H-alpha emission line. While there are techniques for identifying the line automatically (see the fitting section below), here we assume we are doing "quick-look" procedures where manual identification is possible. In the cell below, fill in the `<LOWER>` and `<UPPER>` values to make a spectral region that just encompasses the H-alpha line (the middle of the three lines). An illustrative filled-in version is sketched a few cells below.
halpha_lines_region = SpectralRegion(<LOWER>*u.angstrom, <UPPER>*u.angstrom) plt.step(sdss_halpha_contsub.wavelength, sdss_halpha_contsub.flux) yl1, yl2 = plt.ylim() plt.fill_between([halpha_lines_region.lower, halpha_lines_region.upper], yl1, yl2, alpha=.2) plt.ylim(yl1, yl2)
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
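For reference, an illustrative filled-in version of the region definition above might look like the following. The bounds are assumptions chosen around the rest-frame H-alpha wavelength of 6563 Å and are not taken from this notebook - adjust them so they bracket the line as it actually appears in the plot of this spectrum:

```python
# Illustrative bounds only (assumed values); tighten or shift to match the observed line
halpha_lines_region = SpectralRegion(6557*u.angstrom, 6584*u.angstrom)
```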
You can now call a variety of analysis functions on the continuum-subtracted spectrum to estimate various properties of the line:
analysis.centroid(sdss_halpha_contsub, halpha_lines_region) analysis.gaussian_fwhm(sdss_halpha_contsub, halpha_lines_region) analysis.line_flux(sdss_halpha_contsub, halpha_lines_region)
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Equivalent width, being a continuum dependent property, can either be computed directly from the spectrum if the continuum level is given, or measured on a continuum-normalized spectrum. The latter is mainly useful if the continuum is non-uniform over the line being measured.
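As a reminder of the quantity being computed, the equivalent width of a line relative to a continuum level $F_c$ is, in the usual convention,

$$ W_\lambda = \int \left(1 - \frac{F_\lambda}{F_c}\right)\, d\lambda, $$

so emission lines yield negative equivalent widths, and the integral can equivalently be evaluated with an explicit continuum level or on a continuum-normalized spectrum.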
analysis.equivalent_width(sdss_spec, sdss_continuum, regions=halpha_lines_region) sdss_halpha_contnorm = sdss_spec / sdss_continuum analysis.equivalent_width(sdss_halpha_contnorm, regions=halpha_lines_region)
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Exercise

Load one of the spectrum datasets you made in the overview exercises into this notebook (i.e., your own dataset, a downloaded one, or the blackbody with an artificially added spectral feature). Make a flux or width measurement of a line in that spectrum directly. Is anything odd?

Continuum Subtraction

While continuum-fitting for spectra is sometimes thought of as an "art" as much as a science, specutils provides the tools for a variety of approaches to continuum-fitting, without making a specific recommendation about what is "best" (since it is often very data-dependent). More details are available [in the relevant specutils doc section](https://specutils.readthedocs.io/en/latest/fitting.html#continuum-fitting), but here we outline the two basic options as they currently stand: an "often good-enough" function, and a more customizable tool that leans on the [`astropy.modeling`](http://docs.astropy.org/en/stable/modeling/index.html) models to provide its flexibility.

The "often good-enough" way

The `fit_generic_continuum` function is often sufficient for reasonably well-behaved continua, particularly for "quick-look" or similar applications where high precision is not that critical. It yields a continuum model, which can be evaluated at any spectral axis value:
from specutils.fitting import fit_generic_continuum generic_continuum = fit_generic_continuum(sdss_spec) generic_continuum_evaluated = generic_continuum(sdss_spec.spectral_axis) plt.step(sdss_spec.spectral_axis, sdss_spec.flux) plt.plot(sdss_spec.spectral_axis, generic_continuum_evaluated) plt.ylim(100, 300);
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
(Note that in some versions of astropy/specutils you may see a "Model is linear in parameters" warning upon executing the above cell. This is not a problem unless performance is a serious concern, in which case more customization is required.)

With this model in hand, continuum-subtracted or continuum-normalized spectra can be produced using basic spectral manipulations:
sdss_gencont_sub = sdss_spec - generic_continuum(sdss_spec.spectral_axis) sdss_gencont_norm = sdss_spec / generic_continuum(sdss_spec.spectral_axis) ax1, ax2 = plt.subplots(2, 1)[1] ax1.step(sdss_gencont_sub.wavelength, sdss_gencont_sub.flux) ax1.set_ylim(-50, 50) ax1.axhline(0, color='k', ls=':') # continuum should be at flux=0 ax2.step(sdss_gencont_norm.wavelength, sdss_gencont_norm.flux) ax2.set_ylim(0, 2) ax2.axhline(1, color='k', ls='--'); # continuum should be at flux=1
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
The customizable way

The `fit_continuum` function operates similarly to `fit_generic_continuum`, but is meant for you to provide your favorite continuum model rather than being tailored to a specific continuum model. To see the list of models, see the [astropy.modeling documentation](http://docs.astropy.org/en/stable/modeling/index.html).
from specutils.fitting import fit_continuum from astropy.modeling import models
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
For example, suppose you want to use a 3rd-degree Chebyshev polynomial as your continuum model. You can use `fit_continuum` to get an object that behaves the same as for `fit_generic_continuum`:
chebdeg3_continuum = fit_continuum(sdss_spec, models.Chebyshev1D(3)) generic_continuum_evaluated = generic_continuum(sdss_spec.spectral_axis) plt.step(sdss_spec.spectral_axis, sdss_spec.flux) plt.plot(sdss_spec.spectral_axis, chebdeg3_continuum(sdss_spec.spectral_axis)) plt.ylim(100, 300);
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
This then provides total flexibility. For example, you can also try other polynomials like higher-degree Hermite polynomials:
hermdeg7_continuum = fit_continuum(sdss_spec, models.Hermite1D(degree=7)) hermdeg17_continuum = fit_continuum(sdss_spec, models.Hermite1D(degree=17)) plt.step(sdss_spec.spectral_axis, sdss_spec.flux) plt.plot(sdss_spec.spectral_axis, chebdeg3_continuum(sdss_spec.spectral_axis)) plt.plot(sdss_spec.spectral_axis, hermdeg7_continuum(sdss_spec.spectral_axis)) plt.plot(sdss_spec.spectral_axis, hermdeg17_continuum(sdss_spec.spectral_axis)) plt.ylim(150, 250);
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
This immediately demonstrates the tradeoffs in polynomial fitting: while the high-degree polynomials capture the wiggles of the spectrum better than the low-degree ones, they also *over*-fit near the strong emission lines.

Exercise

Try combining the `SpectralRegion` and continuum-fitting functionality to only fit the parts of the spectrum that *are* continuum (i.e. not including emission lines). Can you do better? One possible approach is sketched below.

Exercise

Using the spectrum from the previous exercise, first subtract a continuum, then re-do your measurement. Is it better?

Line-Fitting

In addition to the more empirical measurements described above, `specutils` provides tools for doing spectral line fitting. The approach is akin to that for continuum modeling: models from [astropy.modeling](http://docs.astropy.org/en/stable/modeling/index.html) are fit to the spectrum, and either the models themselves or their parameters can then be used directly.
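A minimal sketch of one way to approach the continuum-masking exercise above, assuming `fit_continuum` accepts a `window` argument restricting the fit to the supplied spectral regions (check the specutils fitting documentation for the exact signature); the wavelength bounds here are illustrative placeholders, not values measured from this spectrum:

```python
# Fit the continuum using only regions believed to be line-free (bounds are illustrative)
continuum_window = SpectralRegion([(6000*u.angstrom, 6400*u.angstrom),
                                   (6700*u.angstrom, 7000*u.angstrom)])
windowed_continuum = fit_continuum(sdss_spec, models.Chebyshev1D(3), window=continuum_window)

plt.step(sdss_spec.spectral_axis, sdss_spec.flux)
plt.plot(sdss_spec.spectral_axis, windowed_continuum(sdss_spec.spectral_axis))
plt.ylim(100, 300);
```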
from specutils import fitting
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
The fitting machinery must first be given guesses for line locations. This process can be automated using functions designed to identify lines (more detail on the options is [in the docs](https://specutils.readthedocs.io/en/latest/fitting.html#line-finding)). For data sets where these algorithms are not ideal, you may substitute your own (i.e., skip this step and start with line location guesses; a minimal sketch of that manual path follows below). Here we identify the three lines near the H-alpha region in our SDSS spectrum, finding the lines above about a $\sim 3 \sigma$ flux threshold. They are then output as an astropy Table:
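If the automated finder is not well suited to your data, a manually specified table with the same `line_center` column works just as well in the fitting loop further below. A minimal sketch, using placeholder wavelengths that are not measured from this spectrum:

```python
from astropy.table import QTable

# Placeholder line-center guesses (illustrative values only)
manual_lines = QTable()
manual_lines['line_center'] = [6550, 6565, 6585] * u.angstrom
```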
halpha_lines = fitting.find_lines_threshold(sdss_halpha_contsub, 3) plt.step(sdss_halpha_contsub.spectral_axis, sdss_halpha_contsub.flux, where='mid') for line in halpha_lines: plt.axvline(line['line_center'], color='k', ls=':') halpha_lines
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Now for each of these lines, we need to fit a model. Sometimes it is sufficient to simply create a model where the center is at the line and excise the appropriate area of the line to do a line estimate. This is not *too* sensitive to the size of the region, at least for well-separated lines like these. The result is a list of models that carry with them the details of the fit:
halpha_line_models = [] for line in halpha_lines: line_region = SpectralRegion(line['line_center']-5*u.angstrom, line['line_center']+5*u.angstrom) line_spectrum = extract_region(sdss_halpha_contsub, line_region) line_estimate = models.Gaussian1D(mean=line['line_center']) line_model = fitting.fit_lines(line_spectrum, line_estimate) halpha_line_models.append(line_model) plt.step(sdss_halpha_contsub.spectral_axis, sdss_halpha_contsub.flux, where='mid') for line_model in halpha_line_models: evaluated_model = line_model(sdss_halpha_contsub.spectral_axis) plt.plot(sdss_halpha_contsub.spectral_axis, evaluated_model) halpha_line_models
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops