check by:
MultiDerived.__mro__
MultiDerived.mro()
_____no_output_____
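The `MultiDerived` class itself comes from an earlier cell that is not part of this excerpt. A minimal reconstruction (hypothetical class names) shows what the two calls return:

```python
# Hypothetical stand-in for the earlier cell's multiple-inheritance setup.
class Base:
    pass

class Derived1(Base):
    pass

class Derived2(Base):
    pass

class MultiDerived(Derived1, Derived2):
    pass

print(MultiDerived.__mro__)  # tuple: (MultiDerived, Derived1, Derived2, Base, object)
print(MultiDerived.mro())    # same order, as a list
```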
MIT
Session 06 - OOP.ipynb
FOU4D/ITI-Python
Encapsulation: we denote private attributes using an underscore prefix, i.e. a single `_` or a double `__`.
class Cars:
    def __init__(self):
        self.__maxprice = 90000

    def sell(self):
        print("Selling Price: {}".format(self.__maxprice))

    def setMaxPrice(self, price):
        self.__maxprice = price

toyota = Cars()
toyota.sell()

# change the price (no effect: this creates a new attribute;
# the name-mangled _Cars__maxprice stays untouched)
toyota.__maxprice = 1000
toyota.sell()

# using setter function
toyota.setMaxPrice(1000)
toyota.sell()
Selling Price: 90000
Selling Price: 90000
Selling Price: 1000
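The second `sell()` still prints 90000 because `toyota.__maxprice = 1000` creates a new, unrelated attribute: inside the class body, Python mangles `__maxprice` to `_Cars__maxprice`. A quick check:

```python
# Name mangling: the "private" attribute is still reachable under its mangled name.
toyota = Cars()
toyota.__maxprice = 1000       # creates a *new* attribute; the price is untouched
print(toyota._Cars__maxprice)  # 90000 -- the real attribute
print(toyota.__dict__)         # shows both '_Cars__maxprice' and '__maxprice'
```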
MIT
Session 06 - OOP.ipynb
FOU4D/ITI-Python
polymorphism
class Parrot:
    def fly(self):
        print("Parrot can fly")

    def swim(self):
        print("Parrot can't swim")

class Penguin:
    def fly(self):
        print("Penguin can't fly")

    def swim(self):
        print("Penguin can swim")

# common interface
def flying_test(bird):
    bird.fly()

# instantiate objects
blu = Parrot()
peggy = Penguin()

# passing the object
flying_test(blu)
flying_test(peggy)
Parrot can fly
Penguin can't fly
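`flying_test()` works with any object that defines `fly()`; this duck typing is what makes the common interface possible. A minimal sketch following the same pattern (the `swimming_test` helper is our own addition, not part of the original notebook):

```python
# Sketch: the same duck-typing idea applied to swim().
def swimming_test(bird):
    bird.swim()

swimming_test(blu)    # Parrot can't swim
swimming_test(peggy)  # Penguin can swim
```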
MIT
Session 06 - OOP.ipynb
FOU4D/ITI-Python
0. Imports
# Imports
from IPython.display import display, HTML
import os
import pandas as pd, datetime as dt, numpy as np, matplotlib.pyplot as plt
from pandas.tseries.offsets import DateOffset
import sys

# Display options
thisnotebooksys = sys.stdout
pd.set_option('display.width', 1000)
display(HTML("<style>.container { width:100% !important; }</style>"))
pd.set_option('mode.chained_assignment', None)

import mimicLOB as mlob
_____no_output_____
MIT
demo/Classic/2. Demo - Simulation - Replayer.ipynb
FDR0903/mimicLOB
1. LOB creation
# b_tape = True means the LOB tapes (records) transactions
LOB = mlob.OrderBook(tick_size=0.5,
                     b_tape=True,      # tape transactions
                     b_tape_LOB=True,  # tape LOB state at each tick
                     verbose=True)
_____no_output_____
MIT
demo/Classic/2. Demo - Simulation - Replayer.ipynb
FDR0903/mimicLOB
2. Data
- DTIME: the order's timestamp
- ORDER_ID: the order's identifier
- PRICE
- QTY
- ORDER_SIDE
- ORDER_TYPE: 1 for Market Order; 2 for Limit Order; q for Quote; W for Market On Open
- ACTION_TYPE: I = limit order insertion (passive); C = limit order cancellation; R = replace order that loses priority; r = replace order that keeps priority; S = replace order that makes the order aggressive (gives rise to a trade); T = aggressive order (gives rise to a trade)
- MATCH_STRATEGY: True/False
- IS_OPEN_TRADE: True/False
df = pd.read_pickle(r'..\data\day20160428.pkl')
df
_____no_output_____
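A small sanity check of the schema above; a sketch that assumes the columns carry exactly the encodings just documented:

```python
# Sketch: tally order types and action types using the documented encodings.
print(df['ORDER_TYPE'].value_counts())   # 1 = market, 2 = limit, q = quote, W = market-on-open
print(df['ACTION_TYPE'].value_counts())  # I, C, R, r, S, T as described above
```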
MIT
demo/Classic/2. Demo - Simulation - Replayer.ipynb
FDR0903/mimicLOB
3. Agents Creation
auction_config = {'orderbook'        : LOB,
                  'id'               : 'FDR',
                  'b_record'         : False,
                  'historicalOrders' : df[df.DTIME.dt.hour < 7]}

continuousTrading_config = {'orderbook'        : LOB,
                            'id'               : 'FDR',
                            'b_record'         : False,
                            'historicalOrders' : df[df.DTIME.dt.hour >= 7]}

AuctionReplayer = mlob.replayerAgent(**auction_config)
ContReplayer = mlob.replayerAgent(**continuousTrading_config)
_____no_output_____
MIT
demo/Classic/2. Demo - Simulation - Replayer.ipynb
FDR0903/mimicLOB
4. Replay orders

4.1. Auction phase

The auction price shall be determined on the basis of the situation of the Central Order Book at the closing of the call phase and shall be the price which produces the highest executable order volume.
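This volume-maximization rule is easy to illustrate outside the library. The following sketch (toy numbers, not part of the mimicLOB API) computes, for each candidate price, the executable quantity min(cumulative demand, cumulative supply) and picks the price that maximizes it:

```python
import numpy as np

def volume_maximizing_price(prices, cum_buy, cum_sell):
    # executable volume at each candidate price
    executable = [min(b, s) for b, s in zip(cum_buy, cum_sell)]
    return prices[int(np.argmax(executable))]

prices   = [99.5, 100.0, 100.5]
cum_buy  = [300, 200, 100]   # total buy qty with limit >= price
cum_sell = [50, 150, 280]    # total sell qty with limit <= price
print(volume_maximizing_price(prices, cum_buy, cum_sell))  # 100.0 (volume 150)
```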
%%time
# log to file
f = open('log_auction.txt', 'w'); sys.stdout = f

# Auction phase: orders accumulate, no matching until the auction closes
LOB.b_auction = True

# Replay the auction orders
AuctionReplayer.replayOrders()

# restore log
sys.stdout = thisnotebooksys
Wall time: 94.1 ms
MIT
demo/Classic/2. Demo - Simulation - Replayer.ipynb
FDR0903/mimicLOB
4.2. Auction is over

Closing the auction results in transactions, and a new LOB containing the unmatched orders is set. The price is chosen as the one that maximizes the volume of transactions. Trades are executed at the auction price, according to time priority; the remaining orders at the auction price are the newest ones.
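Time priority at the auction price can be illustrated with a toy loop (a sketch, not the library's matching code): the oldest orders fill first, so any residual quantity belongs to the newest ones.

```python
# (time, qty) resting at the auction price, oldest first
orders = [('09:00:01', 100), ('09:00:05', 50), ('09:00:09', 80)]
to_execute = 120  # quantity matched at the auction price

for t, qty in orders:
    filled = min(qty, to_execute)
    to_execute -= filled
    print(t, 'filled', filled, '| left in book', qty - filled)
# 09:00:01 fills 100, 09:00:05 fills 20; the newest order (09:00:09) stays.
```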
%%time
# log to file
f = open('log_auctionClose.txt', 'w'); sys.stdout = f

# Closing the auction triggers matching at the auction price
LOB.b_auction = False

# restore log
sys.stdout = thisnotebooksys
Wall time: 12.4 ms
MIT
demo/Classic/2. Demo - Simulation - Replayer.ipynb
FDR0903/mimicLOB
4.3. LOB State

LOB state before opening the continuous trading.
LOBstate = AuctionReplayer.getLOBState()
LOBstate = LOBstate.set_index('Price').sort_index()
LOBstate.plot.bar(figsize=(20, 7))
plt.show()
_____no_output_____
MIT
demo/Classic/2. Demo - Simulation - Replayer.ipynb
FDR0903/mimicLOB
4.4. Continuous Trading
%%time
# log to file
f = open('log_continuousTrading.txt', 'w'); sys.stdout = f

# Replay the continuous-trading orders
ContReplayer.replayOrders()

# restore log
sys.stdout = thisnotebooksys
Wall time: 5min 40s
MIT
demo/Classic/2. Demo - Simulation - Replayer.ipynb
FDR0903/mimicLOB
5. Price Tape
histoPrices = ContReplayer.getPriceTape().astype(float)
histoPrices.plot(figsize=(20, 7))

# OHLC
display(f'open : {histoPrices.iloc[0, 0]}')
display(f'high : {histoPrices.max()[0]}')
display(f'low : {histoPrices.min()[0]}')
display(f'close : {histoPrices.iloc[-1, 0]}')
plt.show()
_____no_output_____
MIT
demo/Classic/2. Demo - Simulation - Replayer.ipynb
FDR0903/mimicLOB
Get Transaction Tape
TransactionTape = ContReplayer.getTransactionTape()
TransactionTape
_____no_output_____
MIT
demo/Classic/2. Demo - Simulation - Replayer.ipynb
FDR0903/mimicLOB
Get LOB Tape

The LOB tape is the state of the LOB before each order arrival.
LOBtape = AuctionReplayer.getLOBTape()
LOBtape
_____no_output_____
MIT
demo/Classic/2. Demo - Simulation - Replayer.ipynb
FDR0903/mimicLOB
**Note that this notebook uses private hospital-level data, so it can't be run publicly**
%load_ext autoreload
%autoreload 2
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib.pyplot as plt
from os.path import join as oj
import math
import pygsheets
import pickle as pkl
import seaborn as sns
import plotly.express as px
from collections import Counter
import plotly
from plotly.subplots import make_subplots
import plotly.graph_objects as go
import sys
import json
import os
import inspect

currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.append(parentdir)
sys.path.append(parentdir + '/modeling')

import load_data
from viz import viz_static, viz_interactive, viz_map
from modeling.fit_and_predict import add_preds
from functions import merge_data
from functions import update_severity_index as severity_index

NUM_DAYS_LIST = [1, 2, 3, 4, 5, 6, 7]
df_hospital = load_data.load_hospital_level(
    data_dir=oj(os.path.dirname(parentdir), 'covid-19-private-data'))
df_county = load_data.load_county_level(data_dir=oj(parentdir, 'data'))
df_county = add_preds(df_county, NUM_DAYS_LIST=NUM_DAYS_LIST,
                      cached_dir=oj(parentdir, 'data'))  # adds keys like "Predicted Deaths 1-day"
df = merge_data.merge_county_and_hosp(df_county, df_hospital)
_____no_output_____
MIT
modeling/hospital_quickstart.ipynb
Alicegif/covid19-severity-prediction
severity index
df = severity_index.add_severity_index(df, NUM_DAYS_LIST)
d = severity_index.df_to_plot(df, NUM_DAYS_LIST)

k = 3
s_hosp = f'Predicted Deaths Hospital {k}-day'
s_index = f'Severity {k}-day'
print('total hospitals', df.shape[0], Counter(df[s_index]))

viz_interactive.viz_index_animated(d, [1, 2, 3, 4, 5],
                                   x_key='Hospital Employees',
                                   y_key='Predicted (cumulative) deaths at hospital',
                                   hue='Severity Index',
                                   out_name=oj(parentdir, 'results', 'hosp_test.html'))
viz_interactive.viz_index_animated(d, [3], by_size=False,
                                   out_name=oj('results', 'hospital_index_animated_full.html'))

plt.figure(dpi=500)
remap = {'High': 'red', 'Medium': 'blue', 'Low': 'green'}
dr = d  # d[d['Severity Index 1-day']=='Low']
plt.scatter(dr['Predicted Deaths Hospital 1-day'], dr['Surge 1-day'],
            s=(dr['Hospital Employees'] / 500).clip(lower=0.1),
            alpha=0.9,
            c=[remap[x] for x in dr['Severity Index 1-day']])
# plt.plot(d['Predicted Deaths Hospital 1-day'], d['Surge 1-day'], '.')
# plt.yscale('log')
# plt.xscale('log')
plt.xlim((0, 10))
plt.ylim((-1, 3))
plt.xlabel('Predicted Deaths Hospital 1-day')
plt.ylabel('Surge 1-day')
plt.show()
_____no_output_____
MIT
modeling/hospital_quickstart.ipynb
Alicegif/covid19-severity-prediction
**start with county-level death predictions**
s = f'Predicted Deaths {3}-day'  # tot_deaths
# s = 'tot_deaths'
num_days = 1
nonzero = df[s] > 0

plt.figure(dpi=300, figsize=(7, 3))
plt.plot(df_county[s].values, '.', ms=3)
plt.ylabel(s)
plt.xlabel('Counties')
plt.yscale('log')
plt.tight_layout()
plt.show()
_____no_output_____
MIT
modeling/hospital_quickstart.ipynb
Alicegif/covid19-severity-prediction
**look at distribution of predicted deaths at hospitals**
num_days = 1
plt.figure(dpi=300, figsize=(7, 3))
offset = 0
for i in [5, 4, 3, 2, 1]:
    idxs = (df[s_index] == i)
    plt.plot(np.arange(offset, offset + idxs.sum()),
             np.clip(df[idxs][s_hosp].values, a_min=1, a_max=None),
             '.-', label=f'{i}: {severity_index.meanings[i]}')
    offset += idxs.sum()
plt.yscale('log')
plt.ylabel(s_hosp)
plt.xlabel('Hospitals')
plt.legend()
plt.tight_layout()
plt.show()

df.sort_values('Predicted Deaths Hospital 2-day', ascending=False)[
    ['Hospital Name', 'StateName', 'Hospital Employees', 'tot_deaths',
     'Predicted Deaths Hospital 2-day']].head(30)
_____no_output_____
MIT
modeling/hospital_quickstart.ipynb
Alicegif/covid19-severity-prediction
adjustments

**different measures of hospital size are pretty consistent**
plt.figure(dpi=500, figsize=(7, 3), facecolor='w')
R, C = 1, 3

plt.subplot(R, C, 1)
plt.plot(df['Hospital Employees'], df['Total Average Daily Census'], '.',
         alpha=0.2, markeredgewidth=0)
plt.xlabel('Num Hospital Employees')
plt.ylabel('Total Average Daily Census')

plt.subplot(R, C, 2)
plt.plot(df['Hospital Employees'], df['Total Beds'], '.',
         alpha=0.2, markeredgewidth=0)
plt.xlabel('Num Hospital Employees')
plt.ylabel('Total Beds')

plt.subplot(R, C, 3)
plt.plot(df['Hospital Employees'], df['ICU Beds'], '.',
         alpha=0.2, markeredgewidth=0)
plt.xlabel('Num Hospital Employees')
plt.ylabel('ICU Beds')

plt.tight_layout()
plt.show()
_____no_output_____
MIT
modeling/hospital_quickstart.ipynb
Alicegif/covid19-severity-prediction
**other measures are harder to parse...**
ks = ['Predicted Deaths Hospital 2-day', "Hospital Employees", 'ICU Beds']
R, C = 1, len(ks)
plt.figure(dpi=300, figsize=(C * 3, R * 3))
for c in range(C):
    plt.subplot(R, C, c + 1)
    if c == 0:
        plt.ylabel('Total Occupancy Rate')
    plt.plot(df[ks[c]], df['Total Occupancy Rate'], '.', alpha=0.5)
    plt.xlabel(ks[c])
plt.tight_layout()
plt.show()
_____no_output_____
MIT
modeling/hospital_quickstart.ipynb
Alicegif/covid19-severity-prediction
**different hospital types**
plt.figure(dpi=500, figsize=(7, 3))
R, C = 1, 3
a = 0.5
s = s_hosp

plt.subplot(R, C, 1)
idxs = df.IsUrbanHospital == 1
plt.hist(df[idxs][s], label='Urban', alpha=a)
plt.hist(df[~idxs][s], label='Rural', alpha=a)
plt.ylabel('Num Hospitals')
plt.xlabel(s)
plt.yscale('log')
plt.legend()

plt.subplot(R, C, 2)
idxs = df.IsAcuteCareHospital == 1
plt.hist(df[idxs][s], label='Acute Care', alpha=a)
plt.hist(df[~idxs][s], label='Other', alpha=a)
plt.xlabel(s)
plt.yscale('log')
plt.legend()

plt.subplot(R, C, 3)
idxs = df.IsAcademicHospital == 1
plt.hist(df[idxs][s], label='Academic', alpha=a)
plt.hist(df[~idxs][s], label='Other', alpha=a)
plt.xlabel(s)
plt.yscale('log')
plt.legend()

plt.tight_layout()
plt.show()
_____no_output_____
MIT
modeling/hospital_quickstart.ipynb
Alicegif/covid19-severity-prediction
**rural areas have lower occupancy rates**
idxs = df.IsUrbanHospital == 1
plt.hist(df['Total Occupancy Rate'][idxs], label='urban', alpha=0.5)
plt.hist(df['Total Occupancy Rate'][~idxs], label='rural', alpha=0.5)
plt.xlabel('Total Occupancy Rate')
plt.ylabel('Count')
plt.legend()
plt.show()

ks = ['ICU Beds', 'Total Beds',
      'Hospital Employees', 'Registered Nurses',
      'ICU Occupancy Rate', 'Total Occupancy Rate',
      'Mortality national comparison', 'Total Average Daily Census',
      # 'IsAcademicHospital',
      'IsUrbanHospital', 'IsAcuteCareHospital']
# ks += [f'Predicted Deaths {n}-day' for n in NUM_DAYS_LIST]
ks += [f'Predicted Deaths Hospital {n}-day' for n in NUM_DAYS_LIST]
# county-level stuff
# ks += ['unacast_n_grade', 'Hospital Employees in County', 'tot_deaths', 'tot_cases',
#        'PopulationDensityperSqMile2010']
import viz  # assumed: the repo's viz package exposes corrplot at the top level
viz.corrplot(df[ks], SIZE=6)
_____no_output_____
MIT
modeling/hospital_quickstart.ipynb
Alicegif/covid19-severity-prediction
look at top counties/hospitals

**hospitals per county**
d = df
R, C = 1, 2
NUM_COUNTIES = 7
plt.figure(dpi=300, figsize=(7, 3.5))

plt.subplot(R, C, 1)
c = 'County Name'
county_names = d[c].unique()[:NUM_COUNTIES]
num_academic_hospitals = []
# d = df[outcome_keys + hospital_keys]
# d = d.sort_values('New Deaths', ascending=False)
for county in county_names:
    num_academic_hospitals.append(d[d[c] == county].shape[0])
plt.barh(county_names[::-1], num_academic_hospitals[::-1])  # reverse to plot top down
plt.xlabel('Number academic hospitals\n(for hospitals where we have data)')

plt.subplot(R, C, 2)
plt.barh(df_county.CountyName[:NUM_COUNTIES].values[::-1],
         df_county['Hospital Employees in County'][:NUM_COUNTIES][::-1])  # reverse to plot top down
plt.xlabel('# Hospital Employees')
plt.tight_layout()
plt.show()

county_names = d[c].unique()[:NUM_COUNTIES]
R, C = 4, 1
plt.figure(figsize=(C * 3, R * 3), dpi=200)
for i in range(R * C):
    plt.subplot(R, C, i + 1)
    cn = county_names[i]
    dc = d[d[c] == cn]
    plt.barh(dc['Hospital Name'][::-1], dc['Hospital Employees'][::-1])
    plt.title(cn)
    plt.xlabel('# Hospital Employees')
plt.tight_layout()
# plt.subplots_adjust(bottom=1)
plt.show()
_____no_output_____
MIT
modeling/hospital_quickstart.ipynb
Alicegif/covid19-severity-prediction
Hospital severity map
counties_json = json.load(open(oj(parentdir, "data", "geojson-counties-fips.json"), "r"))
viz_map.plot_hospital_severity_slider(
    df, df_county=df_county,
    counties_json=counties_json,
    dark=False,
    filename=oj(parentdir, "results", "severity_map.html"))
_____no_output_____
MIT
modeling/hospital_quickstart.ipynb
Alicegif/covid19-severity-prediction
hospital contact info gsheet
ks_orig = ['countyFIPS', 'CountyName', 'Total Deaths Hospital', 'Hospital Name',
           'CMS Certification Number', 'StateName', 'System Affiliation']
ks_contact = ['Phone Number', 'Hospital Employees', 'Website', 'Number to Call (NTC)',
              'Donation Phone Number', 'Donation Email', 'Notes']

def write_to_gsheets_contact(df, ks_output,
                             sheet_name='Contact Info',
                             service_file='creds.json'):
    d = df[ks_output].fillna('')
    print('writing to gsheets...')
    gc = pygsheets.authorize(service_file=service_file)
    sh = gc.open(sheet_name)  # name of the spreadsheet
    wks = sh[0]  # select a sheet
    wks.update_value('A1', "Last updated Apr 14")
    wks.set_dataframe(d, (3, 1))  # write df into the sheet, starting at row 3, column 1

write_to_gsheets_contact(df, ks_output=ks_orig + ks_contact)
_____no_output_____
MIT
modeling/hospital_quickstart.ipynb
Alicegif/covid19-severity-prediction
import numpy as np

def BSM_characteristic_function(v, x0, T, r, sigma):
    cf_value = np.exp(((x0 / T + r - 0.5 * sigma ** 2) * 1j * v -
                       0.5 * sigma ** 2 * v ** 2) * T)
    return cf_value

def BSM_call_characteristic_function(v, alpha, x0, T, r, sigma):
    res = np.exp(-r * T) / ((alpha + 1j * v) * (alpha + 1j * v + 1)) \
        * BSM_characteristic_function(v - (alpha + 1) * 1j, x0, T, r, sigma)
    return res

def SimpsonW(N, eta):
    delt = np.zeros(N, dtype=float)  # was np.float, which is removed in recent NumPy
    delt[0] = 1
    j = np.arange(1, N + 1, 1)
    SimpsonW = eta * (3 + (-1) ** j - delt) / 3
    return SimpsonW

def Simposon_numerical_integrate(S0, K, T, r, sigma):
    k = np.log(K)
    x0 = np.log(S0)
    N = 1024
    B = 153.6
    eta = B / N
    W = SimpsonW(N, eta)
    alpha = 1.5
    sumx = 0
    for j in range(N):
        v_j = j * eta
        temp = np.exp(-1j * v_j * k) \
            * BSM_call_characteristic_function(v_j, alpha, x0, T, r, sigma) \
            * W[j]
        sumx += temp.real
    return sumx * np.exp(-alpha * k) / np.pi

S0 = 100.0  # index level
K = 108.52520983216910821762196480844  # option strike
T = 1.0  # maturity date
r = 0.0475  # risk-less short rate
sigma = 0.2  # volatility
print('>>>>>>>>>>FT call value is ' +
      str(Simposon_numerical_integrate(S0, K, T, r, sigma)))

%cd ~
!git clone https://github.com/hhk54250/20MA573-HHK.git
pass

%cd 20MA573-HHK/src/
%ls

from bsm import *

'''===============
Test bsm_price
================='''
gbm1 = Gbm(init_state=100.,
           drift_ratio=.0475,
           vol_ratio=.2)
option1 = VanillaOption(otype=1,
                        strike=108.52520983216910821762196480844,
                        maturity=1.)
print('>>>>>>>>>>BSM call value is ' + str(gbm1.bsm_price(option1)))

def fft(FFTFunc):
    N = 2 ** 10
    eta = 0.15
    lambda_ = 2 * np.pi / (N * eta)
    t = np.arange(0, N, 1)
    sumy = np.asarray([np.sum(np.exp(-1j * lambda_ * eta * t * m) * FFTFunc)
                       for m in range(N)])
    return sumy

def BSM_call_value_FFT(S0, K, T, r, sigma):
    k = np.log(K)
    x0 = np.log(S0)
    N = 2 ** 10
    alpha = 1.5
    eta = 0.15
    lambda_ = 2 * np.pi / (N * eta)
    beta = x0 - lambda_ * N / 2
    km = np.asarray([beta + i * lambda_ for i in range(N)])
    W = SimpsonW(N, eta)
    v = np.asarray([i * eta for i in range(N)])
    Psi = np.asarray([BSM_call_characteristic_function(vj, alpha, x0, T, r, sigma)
                      for vj in v])
    FFTFunc = Psi * np.exp(-1j * beta * v) * W
    y = fft(FFTFunc).real
    cT = np.exp(-alpha * km) * y / np.pi
    return cT

S0 = 100.0  # index level
K = 110.0  # option strike
T = 1.0  # maturity date
r = 0.0475  # risk-less short rate
sigma = 0.2  # volatility
print('>>>>>>>>>>FFT call value is ' +
      str(BSM_call_value_FFT(S0, K, T, r, sigma)[514]))

"FFT time test"
S0 = 100.0  # index level
K = 110.0  # option strike
T = 1.0  # maturity date
r = 0.0475  # risk-less short rate
sigma = 0.2  # volatility
%time BSM_call_value_FFT(S0, K, T, r, sigma)

"FT time test"
S0 = 100.0  # index level
T = 1.0  # maturity date
r = 0.0475  # risk-less short rate
sigma = 0.2  # volatility
N = 2 ** 10
eta = 0.15
lambda_ = 2 * np.pi / (N * eta)
x0 = np.log(S0)
beta = x0 - lambda_ * N / 2
k = np.asarray([np.e ** (beta + lambda_ * n) for n in range(N)])
%time np.asarray([Simposon_numerical_integrate(S0, k[n], T, r, sigma) for n in range(N)])

"BSM time test"
gbm1 = Gbm(init_state=100.,
           drift_ratio=.0475,
           vol_ratio=.2)
option1 = VanillaOption(otype=1,
                        strike=k,
                        maturity=1.)
%time gbm1.bsm_price(option1)

def BSM_call_value_NumpyFFT(S0, K, T, r, sigma):
    k = np.log(K)
    x0 = np.log(S0)
    N = 2 ** 10
    alpha = 1.5
    eta = 0.15
    lambda_ = 2 * np.pi / (N * eta)
    beta = x0 - lambda_ * N / 2
    km = np.asarray([beta + i * lambda_ for i in range(N)])
    W = SimpsonW(N, eta)
    v = np.asarray([i * eta for i in range(N)])
    Psi = np.asarray([BSM_call_characteristic_function(vj, alpha, x0, T, r, sigma)
                      for vj in v])
    FFTFunc = Psi * np.exp(-1j * beta * v) * W
    y = np.fft.fft(FFTFunc).real
    cT = np.exp(-alpha * km) * y / np.pi
    return cT  # fix: the original computed cT but never returned it

"FFT time test using Numpy.FFT package"
S0 = 100.0  # index level
K = 110.0  # option strike
T = 1.0  # maturity date
r = 0.0475  # risk-less short rate
sigma = 0.2  # volatility
%time BSM_call_value_NumpyFFT(S0, K, T, r, sigma)
_____no_output_____
MIT
haokai/Speed_Comparison.ipynb
songqsh/Is20f
Raw data for all_snapshots, students at grade 10 and above. Use DataFrame `raw`.
qry = '''
SELECT * from clean.all_snapshots
where grade >= 10;
'''
cur.execute(qry)
rows = cur.fetchall()

# Build dataframe from rows
raw = pd.DataFrame(rows, columns=[name[0] for name in cur.description])

# Make sure student_id is an int
raw['student_lookup'] = raw['student_lookup'].astype('int')
raw.head()

raw = raw.replace([None], np.nan)

all_students = raw[(raw['grade'] == 12)
                   & (~raw['school_year'].isin([2006, 2007, 2008]))] \
    .groupby(['district', 'school_year']).agg({'student_lookup': 'count'})

withdraw_na = raw[(raw['withdraw_reason'].isna())
                  & (raw['grade'] == 12)
                  & (~raw['school_year'].isin([2006, 2007, 2008]))]
withdraw_na

withdraw_na.groupby(['district', 'school_year']).agg({'student_lookup': 'count'})

all_students.join(
    withdraw_na.groupby(['district', 'school_year']).agg({'student_lookup': 'count'}),
    rsuffix='_na')
_____no_output_____
MIT
eda/withdraw_reason.ipynb
Karunya-Manoharan/High-school-drop-out-prediction
Certain districts in certain years seem not to use `withdraw_reason` for grade 12. Zanesville simply lacks data outside of 2015, but other district-years with effectively total missingness on withdraw reason appear to have `graduation_date` listed for most students, which can likely be used for imputation.
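One possible imputation is sketched below (hypothetical: it assumes `raw` carries a `graduation_date` column as described):

```python
# Sketch: where withdraw_reason is missing for grade-12 students but a
# graduation_date is present, treat them as graduates.
mask = (raw['withdraw_reason'].isna()
        & raw['graduation_date'].notna()
        & (raw['grade'] == 12))
raw.loc[mask, 'withdraw_reason'] = 'graduate'
```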
missing_by_district = all_students.join(
    withdraw_na.groupby(['district', 'school_year']).agg({'student_lookup': 'count'}),
    rsuffix='_na')
missing_by_district[missing_by_district['student_lookup_na'].notnull()]
_____no_output_____
MIT
eda/withdraw_reason.ipynb
Karunya-Manoharan/High-school-drop-out-prediction
Raw data for withdraw reason for students in 12th grade; also includes any student with a `graduate` withdraw reason, since it is possible to graduate early. Use DataFrame `grad_df`.
qry = '''
SELECT student_lookup, grade, school_year, withdraw_reason
from clean.all_snapshots
where grade = 12 or withdraw_reason = 'graduate'
order by student_lookup;
'''
cur.execute(qry)
rows = cur.fetchall()

# Build dataframe from rows
grad_df = pd.DataFrame(rows, columns=[name[0] for name in cur.description])

# Make sure student_id is an int
grad_df['student_lookup'] = grad_df['student_lookup'].astype('int')
grad_df.head()

len(grad_df)

grad_df['withdraw_reason'].replace(to_replace=[None], value='Missing', inplace=True)
cnt_withdraw = grad_df.groupby(['school_year', 'withdraw_reason']).agg({'student_lookup': 'count'})

# Withdraw reasons for 12th graders by year
cnt_withdraw.unstack(0).replace(np.nan, 0).astype('int')
_____no_output_____
MIT
eda/withdraw_reason.ipynb
Karunya-Manoharan/High-school-drop-out-prediction
All (student, school_year) entering grade 10
# Gets all students entering grade 10 at school year
qry = '''
SELECT distinct student_lookup, grade, school_year
from clean.all_snapshots
where grade = 10
order by student_lookup;
'''
cur.execute(qry)
rows = cur.fetchall()

# Build dataframe from rows
df = pd.DataFrame(rows, columns=[name[0] for name in cur.description])

# Make sure student_id is an int
df['student_lookup'] = df['student_lookup'].astype('int')
df.head()
_____no_output_____
MIT
eda/withdraw_reason.ipynb
Karunya-Manoharan/High-school-drop-out-prediction
Links the future "withdraw reason" in grade 12 to the student entering 10th grade. Use DataFrame `grd_10`.
# Left join means it keeps all 10th grade students, even if they didn't appear in grad_df
grd_10 = pd.merge(df, grad_df, how='left', on='student_lookup')
grd_10

grd_10.columns = ['student_lookup', 'grade_10', 'yr_grade_10',
                  'grade_12', 'yr_grade_12', 'grade_12_withdraw']
grd_10

grd_10.groupby(['yr_grade_10', 'grade_12_withdraw']).agg({'student_lookup': 'count'}) \
    .unstack(0).replace(np.nan, 0).astype('int')
_____no_output_____
MIT
eda/withdraw_reason.ipynb
Karunya-Manoharan/High-school-drop-out-prediction
Data obtained by the view (`sketch.hs_withdraw_info`) WITHOUT further deduplication (grade 10). Use DataFrame `hs_w`.
qry = '''
SELECT * from sketch.hs_withdraw_info
WHERE grade=10 and entry_year BETWEEN 2007 AND 2013;
'''
cur.execute(qry)
rows = cur.fetchall()

# Build dataframe from rows
hs_w = pd.DataFrame(rows, columns=[name[0] for name in cur.description])

# Make sure student_id is an int
hs_w['student_lookup'] = hs_w['student_lookup'].astype('int')
hs_w[:10]

grd_10[grd_10['student_lookup'] == 47]

# grd_10 students entering 2007-2013 (our cohorts of interest)
len(grd_10[grd_10['yr_grade_10'].isin(list(range(2007, 2014)))])
_____no_output_____
MIT
eda/withdraw_reason.ipynb
Karunya-Manoharan/High-school-drop-out-prediction
Current data retrieval
cur.execute('''
select * from (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY student_lookup, grade
                                 ORDER BY student_lookup) AS rnum
    FROM sketch.hs_withdraw_info hwi) t
where t.rnum = 1
  and t.grade = 10
  and t.entry_year >= 2007 and t.entry_year <= 2013
  and ((t.grad_year is not null or t.dropout_year is not null)
       or (t.transfer_out_year is null))
  and ((t.grad_year is not null or t.dropout_year is not null)
       or (t.in_state_transfer_year is null));
''')
rows = cur.fetchall()

# Build dataframe from rows
existing = pd.DataFrame(rows, columns=[name[0] for name in cur.description])

# Make sure student_id is an int
existing['student_lookup'] = existing['student_lookup'].astype('int')
existing

list(range(2007, 2014))
_____no_output_____
MIT
eda/withdraw_reason.ipynb
Karunya-Manoharan/High-school-drop-out-prediction
Repairing Code Automatically

So far, we have discussed how to track failures and how to locate defects in code. Let us now discuss how to _repair_ defects – that is, to correct the code such that the failure no longer occurs. We will discuss how to _repair code automatically_ – by systematically searching through possible fixes and evolving the most promising candidates.
from bookutils import YouTubeVideo YouTubeVideo("UJTf7cW0idI")
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
**Prerequisites**

* Re-read the [introduction to debugging](Intro_Debugging.ipynb), notably on how to properly fix code.
* We make use of automatic fault localization, as discussed in the [chapter on statistical debugging](StatisticalDebugger.ipynb).
* We make extensive use of code transformations, as discussed in [the chapter on tracing executions](Tracer.ipynb).
* We make use of [delta debugging](DeltaDebugger.ipynb).
import bookutils
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Synopsis

To [use the code provided in this chapter](Importing.ipynb), write

```python
>>> from debuggingbook.Repairer import
```

and then make use of the following features.

This chapter provides tools and techniques for automated repair of program code. The `Repairer()` class takes a `RankingDebugger` debugger as input (such as `OchiaiDebugger` from [the chapter on statistical debugging](StatisticalDebugger.ipynb)). A typical setup looks like this:

```python
from debuggingbook.StatisticalDebugger import OchiaiDebugger

debugger = OchiaiDebugger()
for inputs in TESTCASES:
    with debugger:
        test_foo(inputs)
...

repairer = Repairer(debugger)
```

Here, `test_foo()` is a function that raises an exception if the tested function `foo()` fails. If `foo()` passes, `test_foo()` should not raise an exception.

The `repair()` method of a `Repairer` searches for a repair of the code covered in the debugger (except for methods starting or ending in `test`, such that `foo()`, not `test_foo()` is repaired). `repair()` returns the best fix candidate as a pair `(tree, fitness)` where `tree` is a [Python abstract syntax tree](http://docs.python.org/3/library/ast) (AST) of the fix candidate, and `fitness` is the fitness of the candidate (a value between 0 and 1). A `fitness` of 1.0 means that the candidate passed all tests. A typical usage looks like this:

```python
import astor

tree, fitness = repairer.repair()
print(astor.to_source(tree), fitness)
```

Here is a complete example for the `middle()` program. This is the original source code of `middle()`:

```python
def middle(x, y, z):  # type: ignore
    if y < z:
        if x < y:
            return y
        elif x < z:
            return y
    else:
        if x > y:
            return y
        elif x > z:
            return x
    return z
```

We set up a function `middle_test()` that tests it. The `middle_debugger` collects testcases and outcomes:

```python
>>> middle_debugger = OchiaiDebugger()
>>> for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
>>>     with middle_debugger:
>>>         middle_test(x, y, z)
```

The repairer attempts to repair the invoked function (`middle()`). The returned AST `tree` can be output via `astor.to_source()`:

```python
>>> middle_repairer = Repairer(middle_debugger)
>>> tree, fitness = middle_repairer.repair()
>>> print(astor.to_source(tree), fitness)
def middle(x, y, z):
    if y < z:
        if x < z:
            if x < y:
                return y
            else:
                return x
    elif x > y:
        return y
    elif x > z:
        return x
    return z
 1.0
```

Here are the classes defined in this chapter. A `Repairer` repairs a program, using a `StatementMutator` and a `CrossoverOperator` to evolve a population of candidates.

![](PICS/Repairer-synopsis-1.svg)

Automatic Code Repairs

So far, we have discussed how to locate defects in code, how to track failures back to the defects that caused them, and how to systematically determine failure conditions. Let us now address the last step in debugging – namely, how to _automatically fix code_.

Already in the [introduction to debugging](Intro_Debugging.ipynb), we have discussed how to fix code manually. Notably, we have established that a _diagnosis_ (which induces a fix) should show _causality_ (i.e., how the defect causes the failure) and _incorrectness_ (how the defect is wrong).

Is it possible to obtain such a diagnosis automatically? In this chapter, we introduce a technique of _automatic code repair_ – that is, for a given failure, automatically determine a fix that makes the failure go away. To do so, we randomly (but systematically) _mutate_ the program code – that is, insert, change, and delete fragments – until we find a change that actually causes the failing test to pass.
If this sounds like an audacious idea, that is because it is. But not only is _automated program repair_ one of the hottest topics of software research in the last decade, it is also being increasingly deployed in industry. At Facebook, for instance, every failing test report comes with an automatically generated _repair suggestion_ – a suggestion that already has been validated to work. Programmers can apply the suggestion as is or use it as a basis for their own fixes.

The middle() Function

Let us introduce our ongoing example. In the [chapter on statistical debugging](StatisticalDebugger.ipynb), we have introduced the `middle()` function – a function that returns the "middle" of three numbers `x`, `y`, and `z`:
from StatisticalDebugger import middle  # ignore
from bookutils import print_content  # ignore
import inspect  # ignore

_, first_lineno = inspect.getsourcelines(middle)
middle_source = inspect.getsource(middle)
print_content(middle_source, '.py', start_line_number=first_lineno)
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
In most cases, `middle()` just runs fine:
middle(4, 5, 6)
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
In some other cases, though, it does not work correctly:
middle(2, 1, 3)
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Validated Repairs

Now, if we only want a repair that fixes this one given failure, this would be very easy. All we have to do is to replace the entire body by a single statement:
def middle_sort_of_fixed(x, y, z):  # type: ignore
    return x
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
You will concur that the failure no longer occurs:
middle_sort_of_fixed(2, 1, 3)
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
But this, of course, is not the aim of automatic fixes, nor of fixes in general: We want our fixes not only to make the given failure go away, but we also want the resulting code to be _correct_ (which, of course, is a lot harder). Automatic repair techniques therefore assume the existence of a _test suite_ that can check whether an implementation satisfies its requirements. Better yet, one can use the test suite to gradually check _how close_ one is to perfection: A piece of code that satisfies 99% of all tests is better than one that satisfies ~33% of all tests, as `middle_sort_of_fixed()` would do (assuming the test suite evenly checks the input space).

Genetic Optimization

The master plan for automatic repair follows the principle of _genetic optimization_. Roughly spoken, genetic optimization is a _metaheuristic_ inspired by the process of _natural selection_. The idea is to _evolve_ a selection of _candidate solutions_ towards a maximum _fitness_:

1. Have a selection of _candidates_.
2. Determine the _fitness_ of each candidate.
3. Retain those candidates with the _highest fitness_.
4. Create new candidates from the retained candidates, by applying genetic operations:
    * _Mutation_ mutates some aspect of a candidate.
    * _Crossover_ creates new candidates combining features of two candidates.
5. Repeat until an optimal solution is found.

Applied for automated program repair, this means the following steps:

1. Have a _test suite_ with both failing and passing tests that helps asserting correctness of possible solutions.
2. With the test suite, use [fault localization](StatisticalDebugger.ipynb) to determine potential code locations to be fixed.
3. Systematically _mutate_ the code (by adding, changing, or deleting code) and _cross_ code to create possible fix candidates.
4. Identify the _fittest_ fix candidates – that is, those that satisfy the most tests.
5. _Evolve_ the fittest candidates until a perfect fix is found, or until time resources are depleted.

Let us illustrate these steps in the following sections.

A Test Suite

In automated repair, the larger and the more thorough the test suite, the higher the quality of the resulting fix (if any). Hence, if we want to repair `middle()` automatically, we need a good test suite – with good inputs, but also with good checks. For better repair, we will use the test suites introduced in the [chapter on statistical debugging](StatisticalDebugger.ipynb):
from StatisticalDebugger import MIDDLE_PASSING_TESTCASES, MIDDLE_FAILING_TESTCASES
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
The `middle_test()` function fails whenever `middle()` returns an incorrect result:
def middle_test(x: int, y: int, z: int) -> None:
    m = middle(x, y, z)
    assert m == sorted([x, y, z])[1]

from ExpectError import ExpectError

with ExpectError():
    middle_test(2, 1, 3)
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Locating the Defect

Our next step is to find potential defect locations – that is, those locations in the code our mutations should focus upon. Since we already do have two test suites, we can make use of [statistical debugging](StatisticalDebugger.ipynb) to identify likely faulty locations. Our `OchiaiDebugger` ranks individual code lines by how frequently they are executed in failing runs (and not in passing runs).
from StatisticalDebugger import OchiaiDebugger, RankingDebugger

middle_debugger = OchiaiDebugger()

for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
    with middle_debugger:
        middle_test(x, y, z)
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
We see that the upper half of the `middle()` code is definitely more suspicious:
middle_debugger
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
The most suspicious line is:
# ignore
location = middle_debugger.rank()[0]
(func_name, lineno) = location
lines, first_lineno = inspect.getsourcelines(middle)
print(lineno, end="")
print_content(lines[lineno - first_lineno], '.py')
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
with a suspiciousness of:
# ignore
middle_debugger.suspiciousness(location)
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Random Code Mutations

Our third step in automatic code repair is to _randomly mutate the code_. Specifically, we want to randomly _delete_, _insert_, and _replace_ statements in the program to be repaired. However, simply synthesizing code _from scratch_ is unlikely to yield anything meaningful – the number of combinations is simply far too high. Already for a three-character identifier name, we have more than 200,000 combinations (53 choices for the first character, 63 for each of the other two: 53 × 63 × 63 = 210,357):
import string

string.ascii_letters

len(string.ascii_letters + '_') * \
    len(string.ascii_letters + '_' + string.digits) * \
    len(string.ascii_letters + '_' + string.digits)
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Hence, we do _not_ synthesize code from scratch, but instead _reuse_ elements from the program to be fixed, hypothesizing that "a program that contains an error in one area likely implements the correct behavior elsewhere" \cite{LeGoues2012}. Furthermore, we do not operate on a _textual_ representation of the program, but rather on a _structural_ representation, which by construction allows us to avoid lexical and syntactical errors in the first place.

This structural representation is the _abstract syntax tree_ (AST), which we already have seen in various chapters, such as the [chapter on delta debugging](DeltaDebugger.ipynb), the [chapter on tracing](Tracer.ipynb), and extensively in the [chapter on slicing](Slicer.ipynb). The [official Python `ast` reference](http://docs.python.org/3/library/ast) is complete, but a bit brief; the documentation ["Green Tree Snakes - the missing Python AST docs"](https://greentreesnakes.readthedocs.io/en/latest/) provides an excellent introduction.

Recapitulating, an AST is a tree representation of the program, showing a hierarchical structure of the program's elements. Here is the AST for our `middle()` function.
import ast
import astor
import inspect

from bookutils import print_content, show_ast

def middle_tree() -> ast.AST:
    return ast.parse(inspect.getsource(middle))

show_ast(middle_tree())
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
You see that it consists of one function definition (`FunctionDef`) with three `arguments` and two statements – one `If` and one `Return`. Each `If` subtree has three branches – one for the condition (`test`), one for the body to be executed if the condition is true (`body`), and one for the `else` case (`orelse`). The `body` and `orelse` branches again are lists of statements.

An AST can also be shown as text, which is more compact, yet reveals more information. `ast.dump()` gives not only the class names of elements, but also how they are constructed – actually, the whole expression can be used to construct an AST.
print(ast.dump(middle_tree()))
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
This is the path to the first `return` statement:
ast.dump(middle_tree().body[0].body[0].body[0].body[0]) # type: ignore
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Picking Statements

For our mutation operators, we want to use statements from the program itself. Hence, we need a means to find those very statements. The `StatementVisitor` class iterates through an AST, adding to its `statements` list all statements it finds in function definitions. To do so, it subclasses the Python `ast` `NodeVisitor` class, described in the [official Python `ast` reference](http://docs.python.org/3/library/ast).
from ast import NodeVisitor

# ignore
from typing import Any, Callable, Optional, Type, Tuple
from typing import Dict, Union, Set, List, cast

class StatementVisitor(NodeVisitor):
    """Visit all statements within function defs in an AST"""

    def __init__(self) -> None:
        self.statements: List[Tuple[ast.AST, str]] = []
        self.func_name = ""
        self.statements_seen: Set[Tuple[ast.AST, str]] = set()
        super().__init__()

    def add_statements(self, node: ast.AST, attr: str) -> None:
        elems: List[ast.AST] = getattr(node, attr, [])
        if not isinstance(elems, list):
            elems = [elems]  # type: ignore

        for elem in elems:
            stmt = (elem, self.func_name)
            if stmt in self.statements_seen:
                continue

            self.statements.append(stmt)
            self.statements_seen.add(stmt)

    def visit_node(self, node: ast.AST) -> None:
        # Any node other than the ones listed below
        self.add_statements(node, 'body')
        self.add_statements(node, 'orelse')

    def visit_Module(self, node: ast.Module) -> None:
        # Module children are defs, classes and globals - don't add
        super().generic_visit(node)

    def visit_ClassDef(self, node: ast.ClassDef) -> None:
        # Class children are defs and globals - don't add
        super().generic_visit(node)

    def generic_visit(self, node: ast.AST) -> None:
        self.visit_node(node)
        super().generic_visit(node)

    def visit_FunctionDef(self,
                          node: Union[ast.FunctionDef, ast.AsyncFunctionDef]) -> None:
        if not self.func_name:
            self.func_name = node.name

        self.visit_node(node)
        super().generic_visit(node)
        self.func_name = ""

    def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None:
        return self.visit_FunctionDef(node)
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
The function `all_statements()` returns all statements in the given AST `tree`. If an `ast` class `tp` is given, it only returns instances of that class.
def all_statements_and_functions(tree: ast.AST,
                                 tp: Optional[Type] = None) -> \
        List[Tuple[ast.AST, str]]:
    """
    Return a list of pairs (`statement`, `function`) for all statements in `tree`.
    If `tp` is given, return only statements of that class.
    """

    visitor = StatementVisitor()
    visitor.visit(tree)
    statements = visitor.statements
    if tp is not None:
        statements = [s for s in statements if isinstance(s[0], tp)]

    return statements

def all_statements(tree: ast.AST, tp: Optional[Type] = None) -> List[ast.AST]:
    """
    Return a list of all statements in `tree`.
    If `tp` is given, return only statements of that class.
    """

    return [stmt for stmt, func_name in all_statements_and_functions(tree, tp)]
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Here are all the `return` statements in `middle()`:
all_statements(middle_tree(), ast.Return)

all_statements_and_functions(middle_tree(), ast.If)
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
We can randomly pick an element:
import random

random_node = random.choice(all_statements(middle_tree()))
astor.to_source(random_node)
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Mutating Statements

The main part in mutation, however, is to actually mutate the code of the program under test. To this end, we introduce a `StatementMutator` class – a subclass of `NodeTransformer`, described in the [official Python `ast` reference](http://docs.python.org/3/library/ast). The constructor provides various keyword arguments to configure the mutator.
from ast import NodeTransformer

import copy

class StatementMutator(NodeTransformer):
    """Mutate statements in an AST for automated repair."""

    def __init__(self,
                 suspiciousness_func:
                     Optional[Callable[[Tuple[Callable, int]], float]] = None,
                 source: Optional[List[ast.AST]] = None,
                 log: bool = False) -> None:
        """
        Constructor.
        `suspiciousness_func` is a function that takes a location
        (function, line_number) and returns a suspiciousness value
        between 0 and 1.0. If not given, all locations get the same
        suspiciousness of 1.0.
        `source` is a list of statements to choose from.
        """

        super().__init__()
        self.log = log

        if suspiciousness_func is None:
            def suspiciousness_func(location: Tuple[Callable, int]) -> float:
                return 1.0
        assert suspiciousness_func is not None

        self.suspiciousness_func: Callable = suspiciousness_func

        if source is None:
            source = []
        self.source = source

        if self.log > 1:
            for i, node in enumerate(self.source):
                print(f"Source for repairs #{i}:")
                print_content(astor.to_source(node), '.py')
                print()
                print()

        self.mutations = 0
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Choosing Suspicious Statements to Mutate

We start with deciding which AST nodes to mutate. The method `node_suspiciousness()` returns the suspiciousness for a given node, by invoking the suspiciousness function `suspiciousness_func` given during initialization.
import warnings

class StatementMutator(StatementMutator):
    def node_suspiciousness(self, stmt: ast.AST, func_name: str) -> float:
        if not hasattr(stmt, 'lineno'):
            warnings.warn(f"{self.format_node(stmt)}: Expected line number")
            return 0.0

        suspiciousness = self.suspiciousness_func((func_name, stmt.lineno))
        if suspiciousness is None:  # not executed
            return 0.0

        return suspiciousness

    def format_node(self, node: ast.AST) -> str:
        ...
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
The method `node_to_be_mutated()` picks a node (statement) to be mutated. It determines the suspiciousness of all statements, and invokes `random.choices()`, using the suspiciousness as weight. Unsuspicious statements (with zero weight) will not be chosen.
class StatementMutator(StatementMutator):
    def node_to_be_mutated(self, tree: ast.AST) -> ast.AST:
        statements = all_statements_and_functions(tree)
        assert len(statements) > 0, "No statements"

        weights = [self.node_suspiciousness(stmt, func_name)
                   for stmt, func_name in statements]
        stmts = [stmt for stmt, func_name in statements]

        if self.log > 1:
            print("Weights:")
            for i, stmt in enumerate(statements):
                node, func_name = stmt
                print(f"{weights[i]:.2} {self.format_node(node)}")

        if sum(weights) == 0.0:
            # No suspicious line
            return random.choice(stmts)
        else:
            return random.choices(stmts, weights=weights)[0]
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Choosing a Mutation Method

The method `visit()` is invoked on all nodes. For nodes marked with a `mutate_me` attribute, it randomly chooses a mutation method (`choose_op()`) and then invokes it on the node. According to the rules of `NodeTransformer`, the mutation method can return

* a new node or a list of nodes, replacing the current node;
* `None`, deleting it; or
* the node itself, keeping things as they are.
import re

RE_SPACE = re.compile(r'[ \t\n]+')

class StatementMutator(StatementMutator):
    def choose_op(self) -> Callable:
        return random.choice([self.insert, self.swap, self.delete])

    def visit(self, node: ast.AST) -> ast.AST:
        super().visit(node)  # Visits (and transforms?) children

        if not node.mutate_me:  # type: ignore
            return node

        op = self.choose_op()
        new_node = op(node)
        self.mutations += 1

        if self.log:
            print(f"{node.lineno:4}:{op.__name__ + ':':7} "
                  f"{self.format_node(node)} "
                  f"becomes {self.format_node(new_node)}")

        return new_node
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Swapping Statements

Our first mutator is `swap()`, which replaces the current node NODE by a random node found in `source` (using a newly defined `choose_statement()`).

As a rule of thumb, we try to avoid inserting entire subtrees with all attached statements; and try to respect only the first line of a node. If the new node has the form

```python
if P:
    BODY
```

we thus only insert

```python
if P:
    pass
```

since the statements in BODY have a later chance to get inserted. The same holds for all constructs that have a BODY, i.e. `while`, `for`, `try`, `with`, and more.
class StatementMutator(StatementMutator):
    def choose_statement(self) -> ast.AST:
        return copy.deepcopy(random.choice(self.source))

class StatementMutator(StatementMutator):
    def swap(self, node: ast.AST) -> ast.AST:
        """Replace `node` with a random node from `source`"""
        new_node = self.choose_statement()

        if isinstance(new_node, ast.stmt):
            # The source `if P: X` is added as `if P: pass`
            if hasattr(new_node, 'body'):
                new_node.body = [ast.Pass()]  # type: ignore
            if hasattr(new_node, 'orelse'):
                new_node.orelse = []  # type: ignore
            if hasattr(new_node, 'finalbody'):
                new_node.finalbody = []  # type: ignore

        # ast.copy_location(new_node, node)
        return new_node
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Inserting Statements

Our next mutator is `insert()`, which randomly chooses some node from `source` and inserts it after the current node NODE. (If NODE is a `return` statement, then we insert the new node _before_ NODE.)

If the statement to be inserted has the form

```python
if P:
    BODY
```

we only insert the "header" of the `if`, resulting in

```python
if P:
    NODE
```

Again, this applies to all constructs that have a BODY, i.e. `while`, `for`, `try`, `with`, and more.
class StatementMutator(StatementMutator):
    def insert(self, node: ast.AST) -> Union[ast.AST, List[ast.AST]]:
        """Insert a random node from `source` after `node`"""
        new_node = self.choose_statement()

        if isinstance(new_node, ast.stmt) and hasattr(new_node, 'body'):
            # Inserting `if P: X` as `if P:`
            new_node.body = [node]  # type: ignore
            if hasattr(new_node, 'orelse'):
                new_node.orelse = []  # type: ignore
            if hasattr(new_node, 'finalbody'):
                new_node.finalbody = []  # type: ignore
            # ast.copy_location(new_node, node)
            return new_node

        # Only insert before `return`, not after it
        if isinstance(node, ast.Return):
            if isinstance(new_node, ast.Return):
                return new_node
            else:
                return [new_node, node]

        return [node, new_node]
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Deleting Statements

Our last mutator is `delete()`, which deletes the current node NODE. The standard case is to replace NODE by a `pass` statement.

If the statement to be deleted has the form

```python
if P:
    BODY
```

we only delete the "header" of the `if`, resulting in

```python
BODY
```

Again, this applies to all constructs that have a BODY, i.e. `while`, `for`, `try`, `with`, and more; it also selects a random branch, including `else` branches.
class StatementMutator(StatementMutator):
    def delete(self, node: ast.AST) -> None:
        """Delete `node`."""

        branches = [attr for attr in ['body', 'orelse', 'finalbody']
                    if hasattr(node, attr) and getattr(node, attr)]
        if branches:
            # Replace `if P: S` by `S`
            branch = random.choice(branches)
            new_node = getattr(node, branch)
            return new_node

        if isinstance(node, ast.stmt):
            # Avoid empty bodies; make this a `pass` statement
            new_node = ast.Pass()
            ast.copy_location(new_node, node)
            return new_node

        return None  # Just delete

from bookutils import quiz

quiz("Why are statements replaced by `pass` rather than deleted?",
     [
         "Because `if P: pass` is valid Python, while `if P:` is not",
         "Because in Python, bodies for `if`, `while`, etc. cannot be empty",
         "Because a `pass` node makes a target for future mutations",
         "Because it causes the tests to pass"
     ], '[3 ^ n for n in range(3)]')
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Indeed, Python's `compile()` will fail if any of the bodies is an empty list. Also, it leaves us a statement that can be evolved further.

Helpers

For logging purposes, we introduce a helper function `format_node()` that returns a short string representation of the node.
class StatementMutator(StatementMutator):
    NODE_MAX_LENGTH = 20

    def format_node(self, node: ast.AST) -> str:
        """Return a string representation for `node`."""
        if node is None:
            return "None"

        if isinstance(node, list):
            return "; ".join(self.format_node(elem) for elem in node)

        s = RE_SPACE.sub(' ', astor.to_source(node)).strip()
        if len(s) > self.NODE_MAX_LENGTH - len("..."):
            s = s[:self.NODE_MAX_LENGTH] + "..."
        return repr(s)
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
All Together

Let us now create the main entry point, which is `mutate()`. It picks the node to be mutated and marks it with a `mutate_me` attribute. By calling `visit()`, it then sets off the `NodeTransformer` transformation.
class StatementMutator(StatementMutator):
    def mutate(self, tree: ast.AST) -> ast.AST:
        """Mutate the given AST `tree` in place. Return mutated tree."""

        assert isinstance(tree, ast.AST)

        tree = copy.deepcopy(tree)

        if not self.source:
            self.source = all_statements(tree)

        for node in ast.walk(tree):
            node.mutate_me = False  # type: ignore

        node = self.node_to_be_mutated(tree)
        node.mutate_me = True  # type: ignore

        self.mutations = 0

        tree = self.visit(tree)

        if self.mutations == 0:
            warnings.warn("No mutations found")

        ast.fix_missing_locations(tree)
        return tree
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Here are a number of transformations applied by `StatementMutator`:
mutator = StatementMutator(log=True)
for i in range(10):
    new_tree = mutator.mutate(middle_tree())
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
This is the effect of the last mutator applied on `middle`:
print_content(astor.to_source(new_tree), '.py')
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Fitness

Now that we can apply random mutations to code, let us find out how good these mutations are. Given our test suites for `middle`, we can check for a given code candidate how many of the previously passing test cases it passes, and how many of the failing test cases it passes. The more tests pass, the higher the _fitness_ of the candidate.

Not all passing tests have the same value, though. We want to prevent _regressions_ – that is, having a fix that breaks a previously passing test. The values of `WEIGHT_PASSING` and `WEIGHT_FAILING` set the relative weight (or importance) of passing vs. failing tests; we see that keeping passing tests passing is far more important than fixing failing tests. For example, a candidate that passes all failing tests but only 95% of the passing tests scores 0.99 · 0.95 + 0.01 · 1.0 = 0.9505, still below the unchanged (buggy) code's 0.99.
WEIGHT_PASSING = 0.99
WEIGHT_FAILING = 0.01

def middle_fitness(tree: ast.AST) -> float:
    """Compute fitness of a `middle()` candidate given in `tree`"""
    original_middle = middle

    try:
        code = compile(tree, '<fitness>', 'exec')
    except ValueError:
        return 0  # Compilation error

    exec(code, globals())

    passing_passed = 0
    failing_passed = 0

    # Test how many of the passing runs pass
    for x, y, z in MIDDLE_PASSING_TESTCASES:
        try:
            middle_test(x, y, z)
            passing_passed += 1
        except AssertionError:
            pass

    passing_ratio = passing_passed / len(MIDDLE_PASSING_TESTCASES)

    # Test how many of the failing runs pass
    for x, y, z in MIDDLE_FAILING_TESTCASES:
        try:
            middle_test(x, y, z)
            failing_passed += 1
        except AssertionError:
            pass

    failing_ratio = failing_passed / len(MIDDLE_FAILING_TESTCASES)

    fitness = (WEIGHT_PASSING * passing_ratio +
               WEIGHT_FAILING * failing_ratio)

    globals()['middle'] = original_middle
    return fitness
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Our faulty `middle()` program has a fitness of `WEIGHT_PASSING` (99%), because it passes all the passing tests (but none of the failing ones).
middle_fitness(middle_tree())
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Our "sort of fixed" version of `middle()` gets a much lower fitness:
middle_fitness(ast.parse("def middle(x, y, z): return x"))
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
In the [chapter on statistical debugging](StatisticalDebugger.ipynb), we also defined a fixed version of `middle()`. This gets a fitness of 1.0, passing all tests. (We won't use this fixed version for automated repairs.)
from StatisticalDebugger import middle_fixed

middle_fixed_source = \
    inspect.getsource(middle_fixed).replace('middle_fixed', 'middle').strip()

middle_fitness(ast.parse(middle_fixed_source))
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Population

We now set up a _population_ of fix candidates to evolve over time. A higher population size will yield more candidates to check, but also needs more time to test; a lower population size will yield fewer candidates, but allows for more evolution steps. We choose a population size of 40 (from \cite{LeGoues2012}).
POPULATION_SIZE = 40
middle_mutator = StatementMutator()

MIDDLE_POPULATION = [middle_tree()] + \
    [middle_mutator.mutate(middle_tree()) for i in range(POPULATION_SIZE - 1)]
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
We sort the fix candidates according to their fitness. This actually runs all tests on all candidates.
MIDDLE_POPULATION.sort(key=middle_fitness, reverse=True)
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
The candidate with the highest fitness is still our original (faulty) `middle()` code:
print(astor.to_source(MIDDLE_POPULATION[0]), middle_fitness(MIDDLE_POPULATION[0]))
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
At the other end of the spectrum, the candidate with the lowest fitness has some vital functionality removed:
print(astor.to_source(MIDDLE_POPULATION[-1]), middle_fitness(MIDDLE_POPULATION[-1]))
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Evolution

To evolve our population of candidates, we fill up the population with mutations created from the population, using a `StatementMutator` as described above to create these mutations. Then we reduce the population to its original size, keeping the fittest candidates.
def evolve_middle() -> None:
    global MIDDLE_POPULATION

    source = all_statements(middle_tree())
    mutator = StatementMutator(source=source)

    n = len(MIDDLE_POPULATION)

    offspring: List[ast.AST] = []
    while len(offspring) < n:
        parent = random.choice(MIDDLE_POPULATION)
        offspring.append(mutator.mutate(parent))

    MIDDLE_POPULATION += offspring
    MIDDLE_POPULATION.sort(key=middle_fitness, reverse=True)
    MIDDLE_POPULATION = MIDDLE_POPULATION[:n]
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
This is what happens when evolving our population for the first time; the original source is still our best candidate.
evolve_middle()

tree = MIDDLE_POPULATION[0]
print(astor.to_source(tree), middle_fitness(tree))
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
However, nothing keeps us from evolving for a few generations more...
for i in range(50):
    evolve_middle()
    best_middle_tree = MIDDLE_POPULATION[0]
    fitness = middle_fitness(best_middle_tree)
    print(f"\rIteration {i:2}: fitness = {fitness}   ", end="")
    if fitness >= 1.0:
        break
_____no_output_____
MIT
notebooks/Repairer.ipynb
bjrnmath/debuggingbook
Success! We find a candidate that actually passes all tests, including the failing ones. Here is the candidate:
```python
print_content(astor.to_source(best_middle_tree), '.py', start_line_number=1)
```
... and yes, it passes all tests:
```python
original_middle = middle

code = compile(best_middle_tree, '<string>', 'exec')
exec(code, globals())

for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
    middle_test(x, y, z)

middle = original_middle
```
As the code is already validated by hundreds of test cases, it is very valuable for the programmer. Even if the programmer decides not to use the code as is, the location gives very strong hints on which code to examine and where to apply a fix. However, a closer look at our fix candidate shows that there is some amount of redundancy – that is, superfluous statements.
```python
quiz("Some of the lines in our fix candidate are redundant. Which are these?",
     [
         "Line 3: `if x < y`",
         "Line 4: `if x > z`",
         "Line 5: `return x`",
         "Line 13: `return z`"
     ],
     '[eval(chr(100 - x)) for x in [49, 50]]')
```
### Simplifying

As demonstrated in the chapter on [reducing failure-inducing inputs](DeltaDebugger.ipynb), we can use delta debugging on code to get rid of these superfluous statements. The trick for simplification is to have the test function (`test_middle_lines()`) declare a fitness of 1.0 as a "failure". Delta debugging will then simplify the input as long as the "failure" (and hence the maximum fitness obtained) persists.
```python
from DeltaDebugger import DeltaDebugger

middle_lines = astor.to_source(best_middle_tree).strip().split('\n')

def test_middle_lines(lines: List[str]) -> None:
    source = "\n".join(lines)
    tree = ast.parse(source)
    assert middle_fitness(tree) < 1.0  # "Fail" only while fitness is 1.0

with DeltaDebugger() as dd:
    test_middle_lines(middle_lines)

reduced_lines = dd.min_args()['lines']
# assert len(reduced_lines) < len(middle_lines)

reduced_source = "\n".join(reduced_lines)
repaired_source = astor.to_source(ast.parse(reduced_source))  # normalize
print_content(repaired_source, '.py')
```
Success! Delta Debugging has eliminated the superfluous statements. We can present the difference to the original as a patch:
```python
original_source = astor.to_source(ast.parse(middle_source))  # normalize

from ChangeDebugger import diff, print_patch  # minor dependency

for patch in diff(original_source, repaired_source):
    print_patch(patch)
```
We can present this patch to the programmer, who will then immediately know what to fix in the `middle()` code.

## Crossover

So far, we have only applied one kind of genetic operator – mutation. There is a second one, though, also inspired by natural selection.

The *crossover* operation exchanges parts of two strands of genes, as illustrated in the following picture. We have two parents (red and blue), each as a sequence of genes. To create "crossed" children, we pick a _crossover point_ and exchange the strands at this very point:

![](https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/OnePointCrossover.svg/500px-OnePointCrossover.svg.png)

We implement a `CrossoverOperator` class that applies such an operation on two randomly chosen statement lists of two programs. It is used as

```python
crossover = CrossoverOperator()
crossover.crossover(tree_p1, tree_p2)
```

where `tree_p1` and `tree_p2` are two ASTs that are changed in place.

### Excursion: Implementing Crossover

#### Crossing Statement Lists

Applied on programs, a crossover operation takes two parents and "crosses" a list of statements. As an example, if our "parents" `p1()` and `p2()` are defined as follows:
```python
def p1():  # type: ignore
    a = 1
    b = 2
    c = 3

def p2():  # type: ignore
    x = 1
    y = 2
    z = 3
```
Then a crossover operation would produce one child with a body

```python
a = 1
y = 2
z = 3
```

and another child with a body

```python
x = 1
b = 2
c = 3
```

We can easily implement this in a `CrossoverOperator` class in a method `cross_bodies()`.
```python
class CrossoverOperator:
    """A class for performing statement crossover of Python programs"""

    def __init__(self, log: bool = False):
        """Constructor. If `log` is set, turn on logging."""
        self.log = log

    def cross_bodies(self, body_1: List[ast.AST], body_2: List[ast.AST]) -> \
            Tuple[List[ast.AST], List[ast.AST]]:
        """Crossover the statement lists `body_1` x `body_2`. Return new lists."""
        assert isinstance(body_1, list)
        assert isinstance(body_2, list)

        crossover_point_1 = len(body_1) // 2
        crossover_point_2 = len(body_2) // 2
        return (body_1[:crossover_point_1] + body_2[crossover_point_2:],
                body_2[:crossover_point_2] + body_1[crossover_point_1:])
```
Here's the `CrossoverOperator` applied on `p1` and `p2`:
```python
tree_p1: ast.Module = ast.parse(inspect.getsource(p1))
tree_p2: ast.Module = ast.parse(inspect.getsource(p2))

body_p1 = tree_p1.body[0].body  # type: ignore
body_p2 = tree_p2.body[0].body  # type: ignore
body_p1

crosser = CrossoverOperator()
tree_p1.body[0].body, tree_p2.body[0].body = \
    crosser.cross_bodies(body_p1, body_p2)  # type: ignore

print_content(astor.to_source(tree_p1), '.py')
print_content(astor.to_source(tree_p2), '.py')
```
#### Applying Crossover on Programs

Applying the crossover operation on arbitrary programs is a bit more complex, though. We first have to _find_ lists of statements that we actually can cross over. The `can_cross()` method returns True if we have a list of statements that we can cross. Python modules and classes are excluded, because changing the ordering of definitions will not have much impact on the program.
```python
class CrossoverOperator(CrossoverOperator):
    # In modules and class defs, the ordering of elements does not matter (much)
    SKIP_LIST = {ast.Module, ast.ClassDef}

    def can_cross(self, tree: ast.AST, body_attr: str = 'body') -> bool:
        if any(isinstance(tree, cls) for cls in self.SKIP_LIST):
            return False

        body = getattr(tree, body_attr, [])
        return body and len(body) >= 2
```
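As a quick sanity check (our own, reusing the `tree_p1` parsed above), `can_cross()` accepts a function definition with at least two body statements, but rejects a module:

```python
crosser = CrossoverOperator()
assert crosser.can_cross(tree_p1.body[0])  # function def with >= 2 statements
assert not crosser.can_cross(tree_p1)      # ast.Module is in SKIP_LIST
```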
Here comes our method `crossover_attr()`, which searches for crossover possibilities. It takes two ASTs `t1` and `t2` and an attribute (typically `'body'`) and retrieves the attribute lists $l_1$ (from `t1.<attr>`) and $l_2$ (from `t2.<attr>`).

If $l_1$ and $l_2$ can be crossed, it crosses them, and is done. Otherwise

* If there is a pair of elements $e_1 \in l_1$ and $e_2 \in l_2$ that has the same name – say, functions of the same name –, it applies itself to $e_1$ and $e_2$.
* Otherwise, it creates random pairs of elements $e_1 \in l_1$ and $e_2 \in l_2$ and applies itself on these very pairs.

`crossover_attr()` changes `t1` and `t2` in place and returns True if a crossover was found; it returns False otherwise.
```python
class CrossoverOperator(CrossoverOperator):
    def crossover_attr(self, t1: ast.AST, t2: ast.AST, body_attr: str) -> bool:
        """
        Crossover the bodies `body_attr` of two trees `t1` and `t2`.
        Return True if successful.
        """
        assert isinstance(t1, ast.AST)
        assert isinstance(t2, ast.AST)
        assert isinstance(body_attr, str)

        if not getattr(t1, body_attr, None) or not getattr(t2, body_attr, None):
            return False

        if self.crossover_branches(t1, t2):
            return True

        if self.log > 1:
            print(f"Checking {t1}.{body_attr} x {t2}.{body_attr}")

        body_1 = getattr(t1, body_attr)
        body_2 = getattr(t2, body_attr)

        # If both trees have the attribute, we can cross their bodies
        if self.can_cross(t1, body_attr) and self.can_cross(t2, body_attr):
            if self.log:
                print(f"Crossing {t1}.{body_attr} x {t2}.{body_attr}")

            new_body_1, new_body_2 = self.cross_bodies(body_1, body_2)
            setattr(t1, body_attr, new_body_1)
            setattr(t2, body_attr, new_body_2)
            return True

        # Strategy 1: Find matches in class/function of same name
        for child_1 in body_1:
            if hasattr(child_1, 'name'):
                for child_2 in body_2:
                    if (hasattr(child_2, 'name') and
                            child_1.name == child_2.name):
                        if self.crossover_attr(child_1, child_2, body_attr):
                            return True

        # Strategy 2: Find matches anywhere
        for child_1 in random.sample(body_1, len(body_1)):
            for child_2 in random.sample(body_2, len(body_2)):
                if self.crossover_attr(child_1, child_2, body_attr):
                    return True

        return False
```
We have a special case for `if` nodes, where we can cross their body and `else` branches.
```python
class CrossoverOperator(CrossoverOperator):
    def crossover_branches(self, t1: ast.AST, t2: ast.AST) -> bool:
        """Special case:
        `t1` = `if P: S1 else: S2` x `t2` = `if P': S1' else: S2'`
        becomes
        `t1` = `if P: S2' else: S1'` and `t2` = `if P': S2 else: S1`
        Returns True if successful.
        """
        assert isinstance(t1, ast.AST)
        assert isinstance(t2, ast.AST)

        if (hasattr(t1, 'body') and hasattr(t1, 'orelse') and
                hasattr(t2, 'body') and hasattr(t2, 'orelse')):

            t1 = cast(ast.If, t1)  # keep mypy happy
            t2 = cast(ast.If, t2)

            if self.log:
                print(f"Crossing branches {t1} x {t2}")

            t1.body, t1.orelse, t2.body, t2.orelse = \
                t2.orelse, t2.body, t1.orelse, t1.body
            return True

        return False
```
The method `crossover()` is the main entry point. It checks for the special `if` case as described above; if not, it searches for possible crossover points. It raises `CrossoverError` if not successful.
```python
class CrossoverOperator(CrossoverOperator):
    def crossover(self, t1: ast.AST, t2: ast.AST) -> Tuple[ast.AST, ast.AST]:
        """Do a crossover of ASTs `t1` and `t2`.
        Raises `CrossoverError` if no crossover is found."""
        assert isinstance(t1, ast.AST)
        assert isinstance(t2, ast.AST)

        for body_attr in ['body', 'orelse', 'finalbody']:
            if self.crossover_attr(t1, t2, body_attr):
                return t1, t2

        raise CrossoverError("No crossover found")

class CrossoverError(ValueError):
    pass
```
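Since `crossover()` raises `CrossoverError` when it finds no crossover point, callers probing random candidate pairs may want to catch it. A minimal sketch; the wrapper function is our own, not part of the class:

```python
def try_crossover(crosser: CrossoverOperator,
                  t1: ast.AST, t2: ast.AST) -> Optional[Tuple[ast.AST, ast.AST]]:
    """Attempt a crossover; return the crossed trees, or None on failure."""
    try:
        return crosser.crossover(t1, t2)
    except CrossoverError:
        return None
```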
### End of Excursion

### Crossover in Action

Let us put our `CrossoverOperator` in action. Here is a test case for crossover, involving more deeply nested structures:
```python
def p1():  # type: ignore
    if True:
        print(1)
        print(2)
        print(3)

def p2():  # type: ignore
    if True:
        print(a)
        print(b)
    else:
        print(c)
        print(d)
```
We invoke the `crossover()` method with two ASTs from `p1` and `p2`:
```python
crossover = CrossoverOperator()

tree_p1 = ast.parse(inspect.getsource(p1))
tree_p2 = ast.parse(inspect.getsource(p2))

crossover.crossover(tree_p1, tree_p2);
```
Here is the crossed offspring, mixing statement lists of `p1` and `p2`:
```python
print_content(astor.to_source(tree_p1), '.py')
print_content(astor.to_source(tree_p2), '.py')
```
Here is our special case for `if` nodes in action, crossing our `middle()` tree with `p2`.
```python
middle_t1, middle_t2 = crossover.crossover(middle_tree(),
                                           ast.parse(inspect.getsource(p2)))
```
We see how the resulting offspring encompasses elements of both sources:
```python
print_content(astor.to_source(middle_t1), '.py')
print_content(astor.to_source(middle_t2), '.py')
```
## A Repairer Class

So far, we have applied all our techniques on the `middle()` program only. Let us now create a `Repairer` class that applies automatic program repair on arbitrary Python programs. The idea is that you can apply it on some statistical debugger, for which you have gathered passing and failing test cases, and then invoke its `repair()` method to find a "best" fix candidate:

```python
debugger = OchiaiDebugger()
with debugger:
    <passing test>
with debugger:
    <failing test>
...
repairer = Repairer(debugger)
repairer.repair()
```

### Excursion: Implementing Repairer

The main argument to the `Repairer` constructor is the `debugger` to get information from. On top of that, it also allows customizing the classes used for mutation, crossover, and reduction. Setting `targets` allows defining a set of functions to repair; setting `sources` allows setting a set of sources to take repairs from. The constructor then sets up the environment for running tests and repairing, as described below.
```python
from StackInspector import StackInspector  # minor dependency

class Repairer(StackInspector):
    """A class for automatic repair of Python programs"""

    def __init__(self, debugger: RankingDebugger, *,
                 targets: Optional[List[Any]] = None,
                 sources: Optional[List[Any]] = None,
                 log: Union[bool, int] = False,
                 mutator_class: Type = StatementMutator,
                 crossover_class: Type = CrossoverOperator,
                 reducer_class: Type = DeltaDebugger,
                 globals: Optional[Dict[str, Any]] = None):
        """Constructor.
        `debugger`: a `RankingDebugger` to take tests and coverage from.
        `targets`: a list of functions/modules to be repaired.
            (default: the covered functions in `debugger`, except tests)
        `sources`: a list of functions/modules to take repairs from.
            (default: same as `targets`)
        `globals`: if given, a `globals()` dict for executing targets
            (default: `globals()` of caller)"""

        assert isinstance(debugger, RankingDebugger)
        self.debugger = debugger
        self.log = log

        if targets is None:
            targets = self.default_functions()
        if not targets:
            raise ValueError("No targets to repair")

        if sources is None:
            sources = self.default_functions()
        if not sources:
            raise ValueError("No sources to take repairs from")

        if self.debugger.function() is None:
            raise ValueError("Multiple entry points observed")

        self.target_tree: ast.AST = self.parse(targets)
        self.source_tree: ast.AST = self.parse(sources)

        self.log_tree("Target code to be repaired:", self.target_tree)
        if ast.dump(self.target_tree) != ast.dump(self.source_tree):
            self.log_tree("Source code to take repairs from:",
                          self.source_tree)

        self.fitness_cache: Dict[str, float] = {}

        self.mutator: StatementMutator = \
            mutator_class(
                source=all_statements(self.source_tree),
                suspiciousness_func=self.debugger.suspiciousness,
                log=(self.log >= 3))
        self.crossover: CrossoverOperator = crossover_class(log=(self.log >= 3))
        self.reducer: DeltaDebugger = reducer_class(log=(self.log >= 3))

        if globals is None:
            globals = self.caller_globals()  # see below

        self.globals = globals
```
When we access or execute functions, we do so in the caller's environment, not ours. The `caller_globals()` method from `StackInspector` acts as a replacement for `globals()`.
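As an illustration of the underlying idea (not the actual `StackInspector` code), one can obtain a caller's `globals()` via the standard `inspect` module; the helper name below is ours:

```python
import inspect

def caller_globals_sketch() -> dict:
    """Return the globals of our immediate caller (simplified sketch)."""
    frame = inspect.currentframe()
    try:
        return frame.f_back.f_globals  # one frame up: the caller
    finally:
        del frame  # avoid reference cycles through the frame object
```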
#### Helper Functions

The constructor uses a number of helper functions to create its environment.

```python
class Repairer(Repairer):
    def getsource(self, item: Union[str, Any]) -> str:
        """Get the source for `item`. Can also be a string."""
        if isinstance(item, str):
            item = self.globals[item]
        return inspect.getsource(item)

class Repairer(Repairer):
    def default_functions(self) -> List[Callable]:
        """Return the set of functions to be repaired.
        Functions whose names start or end in `test` are excluded."""

        def is_test(name: str) -> bool:
            return name.startswith('test') or name.endswith('test')

        return [func for func in self.debugger.covered_functions()
                if not is_test(func.__name__)]

class Repairer(Repairer):
    def log_tree(self, description: str, tree: Any) -> None:
        """Print out `tree` as source code prefixed by `description`."""
        if self.log:
            print(description)
            print_content(astor.to_source(tree), '.py')
            print()
            print()

class Repairer(Repairer):
    def parse(self, items: List[Any]) -> ast.AST:
        """Read in a list of items into a single tree"""
        tree = ast.parse("")

        for item in items:
            if isinstance(item, str):
                item = self.globals[item]

            item_lines, item_first_lineno = inspect.getsourcelines(item)

            try:
                item_tree = ast.parse("".join(item_lines))
            except IndentationError:
                # inner function or likewise
                warnings.warn(f"Can't parse {item.__name__}")
                continue

            ast.increment_lineno(item_tree, item_first_lineno - 1)
            tree.body += item_tree.body

        return tree
```
#### Running Tests

Now that we have set the environment for `Repairer`, we can implement one step of automatic repair after the other. The method `run_test_set()` runs the given `test_set` (`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`), returning the number of passed tests. If `validate` is set, it checks whether the outcomes are as expected.
```python
class Repairer(Repairer):
    def run_test_set(self, test_set: str, validate: bool = False) -> int:
        """
        Run given `test_set`
        (`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`).
        If `validate` is set, check expectations.
        Return number of passed tests.
        """
        passed = 0
        collectors = self.debugger.collectors[test_set]
        function = self.debugger.function()
        assert function is not None
        # FIXME: function may have been redefined

        for c in collectors:
            if self.log >= 4:
                print(f"Testing {c.id()}...", end="")

            try:
                function(**c.args())
            except Exception as err:
                if self.log >= 4:
                    print(f"failed ({err.__class__.__name__})")

                if validate and test_set == self.debugger.PASS:
                    raise err.__class__(
                        f"{c.id()} should have passed, but failed")
                continue

            passed += 1
            if self.log >= 4:
                print("passed")

            if validate and test_set == self.debugger.FAIL:
                raise FailureNotReproducedError(
                    f"{c.id()} should have failed, but passed")

        return passed

class FailureNotReproducedError(ValueError):
    pass
```
Here is how we use `run_test_set()`:
```python
repairer = Repairer(middle_debugger)
assert repairer.run_test_set(middle_debugger.PASS) == \
    len(MIDDLE_PASSING_TESTCASES)
assert repairer.run_test_set(middle_debugger.FAIL) == 0
```
The method `run_tests()` runs passing and failing tests, weighting the passed test cases to obtain the overall fitness.
```python
class Repairer(Repairer):
    def weight(self, test_set: str) -> float:
        """
        Return the weight of `test_set`
        (`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`).
        """
        return {
            self.debugger.PASS: WEIGHT_PASSING,
            self.debugger.FAIL: WEIGHT_FAILING
        }[test_set]

    def run_tests(self, validate: bool = False) -> float:
        """Run passing and failing tests, returning weighted fitness."""
        fitness = 0.0

        for test_set in [self.debugger.PASS, self.debugger.FAIL]:
            passed = self.run_test_set(test_set, validate=validate)
            ratio = passed / len(self.debugger.collectors[test_set])
            fitness += self.weight(test_set) * ratio

        return fitness
```
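For intuition, here is the same weighted sum as a standalone calculation. The weight values below are assumptions for illustration only; the actual constants `WEIGHT_PASSING` and `WEIGHT_FAILING` were defined earlier in this chapter:

```python
# Assumed example weights, for illustration only
EXAMPLE_WEIGHT_PASSING = 0.99
EXAMPLE_WEIGHT_FAILING = 0.01

def example_fitness(passed_passing: int, total_passing: int,
                    passed_failing: int, total_failing: int) -> float:
    """Weighted fitness as in `run_tests()` (sketch)."""
    return (EXAMPLE_WEIGHT_PASSING * passed_passing / total_passing +
            EXAMPLE_WEIGHT_FAILING * passed_failing / total_failing)

# A candidate that keeps all passing tests but fixes no failing test:
assert example_fitness(100, 100, 0, 50) == 0.99
```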
The method `validate()` ensures the observed tests can be adequately reproduced.
```python
class Repairer(Repairer):
    def validate(self) -> None:
        fitness = self.run_tests(validate=True)
        assert fitness == self.weight(self.debugger.PASS)

repairer = Repairer(middle_debugger)
repairer.validate()
```
#### (Re)defining Functions

Our `run_tests()` methods above do not yet redefine the function to be repaired. This is done by the `fitness()` function, which compiles and defines the given repair candidate `tree` before testing it. It caches and returns the fitness.
```python
class Repairer(Repairer):
    def fitness(self, tree: ast.AST) -> float:
        """Test `tree`, returning its fitness"""
        key = cast(str, ast.dump(tree))
        if key in self.fitness_cache:
            return self.fitness_cache[key]

        # Save defs
        original_defs: Dict[str, Any] = {}
        for name in self.toplevel_defs(tree):
            if name in self.globals:
                original_defs[name] = self.globals[name]
            else:
                warnings.warn(f"Couldn't find definition of {repr(name)}")

        assert original_defs, f"Couldn't find any definition"

        if self.log >= 3:
            print("Repair candidate:")
            print_content(astor.to_source(tree), '.py')
            print()

        # Create new definition
        try:
            code = compile(tree, '<Repairer>', 'exec')
        except ValueError:  # Compilation error
            code = None

        if code is None:
            if self.log >= 3:
                print(f"Fitness = 0.0 (compilation error)")

            fitness = 0.0
            return fitness

        # Execute new code, defining new functions in `self.globals`
        exec(code, self.globals)

        # Set new definitions in the namespace (`__globals__`)
        # of the function we will be calling.
        function = self.debugger.function()
        assert function is not None
        assert hasattr(function, '__globals__')

        for name in original_defs:
            function.__globals__[name] = self.globals[name]  # type: ignore

        fitness = self.run_tests(validate=False)

        # Restore definitions
        for name in original_defs:
            function.__globals__[name] = original_defs[name]  # type: ignore
            self.globals[name] = original_defs[name]

        if self.log >= 3:
            print(f"Fitness = {fitness}")

        self.fitness_cache[key] = fitness
        return fitness
```
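The core mechanism here is that compiling an AST and `exec`ing the result in a namespace rebinds the defined names in that namespace. A standalone illustration of this trick (all names below are ours):

```python
import ast

namespace: dict = {}
candidate = ast.parse("def greet():\n    return 'repaired'")
exec(compile(candidate, '<candidate>', 'exec'), namespace)
assert namespace['greet']() == 'repaired'
```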