jupyter_english/projects_indiv/Pubg_finish_placement_prediction.ipynb | ###Markdown
PUBG Finish Placement Prediction  1. Feature and data explanation First, a few words about the game. **PlayerUnknown's Battlegrounds (PUBG)** is an online multiplayer battle royale game. Up to 100 players are dropped onto an island empty-handed and must explore, scavenge, loot and eliminate other players until only one is left standing, all while the play zone continues to shrink. Battle royale-style video games have taken the world by storm, and PUBG has become very popular: with over 50 million copies sold, it is the fifth best-selling game of all time and has millions of active monthly players. **The task**: using a player's statistics during a match, predict the final placement of that player, where 0 is last place and 1 is winner winner, chicken dinner. The dataset contains over 65,000 games' worth of anonymized player data, which you can download from the [kaggle](https://www.kaggle.com/c/pubg-finish-placement-prediction/data) website. Each row of the data is one player's stats at the end of a match. The data comes from matches of all types: solos, duos, squads, and custom; there is no guarantee of there being 100 players per match, nor at most 4 players per group. The statistics include things like the player's kills, his/her match, group and personal IDs, distance walked, etc. **winPlacePerc** is the target feature - the percentile winning placement on a scale from 1 (first place) to 0 (last place). A solution to this task can be valuable for PUBG players: it helps them understand which parameters are important and which tactics to choose. Also, using the [PUBG Developer API](https://developer.pubg.com/) we can collect our own data with more features, which makes it feasible to create many different apps that help players - for example, an app with a personal assistant that suggests which skills you should train. Let's look at the data 2-3 Primary data analysis and visual data analysis
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import seaborn as sns
import scipy.stats as sc
import gc
import warnings
plt.rcParams['figure.figsize'] = 15,8
sns.set(rc={'figure.figsize':(15,8)})
pd.options.display.float_format = '{:.2f}'.format
warnings.filterwarnings('ignore')
gc.enable()
train = pd.read_csv('../input/train_V2.csv')
test = pd.read_csv('../input/test_V2.csv')
train.head()
###Output
_____no_output_____
###Markdown
Data fields* **DBNOs** - Number of enemy players knocked.* **assists** - Number of enemy players this player damaged that were killed by teammates.* **boosts** - Number of boost items used.* **damageDealt** - Total damage dealt. Note: Self inflicted damage is subtracted.* **headshotKills** - Number of enemy players killed with headshots.* **heals** - Number of healing items used.* **Id** - Player’s Id* **killPlace** - Ranking in match of number of enemy players killed.* **killPoints** - Kills-based external ranking of player. (Think of this as an Elo ranking where only kills matter.) If there is * a value other than -1 in rankPoints, then any 0 in killPoints should be treated as a “None”.* **killStreaks** - Max number of enemy players killed in a short amount of time.* **kills** - Number of enemy players killed.* **longestKill** - Longest distance between player and player killed at time of death. This may be misleading, as downing a player and driving away may lead to a large longestKill stat.* **matchDuration** - Duration of match in seconds.* **matchId** - ID to identify match. There are no matches that are in both the training and testing set.* **matchType** - String identifying the game mode that the data comes from. The standard modes are “solo”, “duo”, “squad”, “solo-fpp”, “duo-fpp”, and “squad-fpp”; other modes are from events or custom matches.* **rankPoints** - Elo-like ranking of player. This ranking is inconsistent and is being deprecated in the API’s next version, so use with caution. Value of -1 takes place of “None”.* **revives** - Number of times this player revived teammates.* **rideDistance** - Total distance traveled in vehicles measured in meters.* **roadKills** - Number of kills while in a vehicle.* **swimDistance** - Total distance traveled by swimming measured in meters.* **teamKills** - Number of times this player killed a teammate.* **vehicleDestroys** - Number of vehicles destroyed.* **walkDistance** - Total distance traveled on foot measured in meters.* **weaponsAcquired** - Number of weapons picked up.* **winPoints** - Win-based external ranking of player. (Think of this as an Elo ranking where only winning matters.) If there is a value other than -1 in rankPoints, then any 0 in winPoints should be treated as a “None”.* **groupId** - ID to identify a group within a match. If the same group of players plays in different matches, they will have a different groupId each time.* **numGroups** - Number of groups we have data for in the match.* **maxPlace** - Worst placement we have data for in the match. This may not match with numGroups, as sometimes the data skips over placements.* **winPlacePerc** - The target of prediction. This is a percentile winning placement, where 1 corresponds to 1st place, and 0 corresponds to last place in the match. It is calculated off of maxPlace, not numGroups, so it is possible to have missing chunks in a match
###Code
train.info()
###Output
_____no_output_____
###Markdown
We have 4.5 million player stats records! Now let's check the dataset for missing values
###Code
display(train[train.isnull().any(1)])
display(test[test.isnull().any(1)])
###Output
_____no_output_____
###Markdown
There is only one row with a NaN value, so let's drop it
###Code
train.drop(2744604, inplace=True)
###Output
_____no_output_____
###Markdown
General info about each column
###Code
train.describe()
###Output
_____no_output_____
###Markdown
We can already guess that the target feature has a uniform distribution: winPlacePerc is an already scaled feature, and after every match each player gets exactly one place.
###Code
train['winPlacePerc'].hist(bins=25);
###Output
_____no_output_____
###Markdown
We can notice that 0 and 1 occur more often than other values (it's because a first and a last place exist in every match). winPlacePerc obviously has a uniform distribution, but let's still check the target feature for normality and skewness of the distribution (because the task asks for it)
###Code
print(sc.normaltest(train['winPlacePerc']))
print('Skew: ', sc.skew(train['winPlacePerc']))
###Output
_____no_output_____
###Markdown
The p-value is zero, so this distribution is not normal. The skew is close to zero, so the distribution is almost symmetric. Now let's look at the distributions of features with an upper limit (to get rid of outliers) and without zero values (because there are lots of zero values). We also make boxplots to see how the target feature depends on the feature values
###Code
def featStat(featureName, constrain,plotType):
feat = train[featureName][train[featureName]>0]
data = train[[featureName,'winPlacePerc']].copy()
q99 = int(data[featureName].quantile(0.99))
plt.rcParams['figure.figsize'] = 15,5;
if constrain!=None:
feat = feat[feat<constrain]
if plotType == 'hist':
plt.subplot(1,2,1)
feat.hist(bins=50);
plt.title(featureName);
n = 20
cut_range = np.linspace(0,q99,n)
cut_range = np.append(cut_range, data[featureName].max())
data[featureName] = pd.cut(data[featureName],
cut_range,
labels=["{:.0f}-{:.0f}".format(a_, b_) for a_, b_ in zip(cut_range[:n], cut_range[1:])],
include_lowest=True
)
ax = plt.subplot(1,2,2)
sns.boxplot(x="winPlacePerc", y=featureName, data=data, ax=ax, color="#2196F3")
ax.set_xlabel('winPlacePerc', size=14, color="#263238")
ax.set_ylabel(featureName, size=14, color="#263238")
plt.gca().xaxis.grid(True)
plt.tight_layout()
if plotType == 'count':
plt.subplot(1,2,1)
sns.countplot(feat, color="#2196F3");
plt.subplot(1,2,2)
data.loc[data[featureName] > q99, featureName] = q99+1
x_order = data.groupby(featureName).mean().reset_index()[featureName]
x_order.iloc[-1] = str(q99+1)+"+"
data[featureName][data[featureName] == q99+1] = str(q99+1)+"+"
ax = sns.boxplot(x=featureName, y='winPlacePerc', data=data, color="#2196F3", order = x_order);
ax.set_xlabel(featureName, size=14, color="#263238")
ax.set_ylabel('WinPlacePerc', size=14, color="#263238")
plt.tight_layout()
###Output
_____no_output_____
###Markdown
**Kills and damage**
###Code
featStat('kills',15,'count');
plt.show();
featStat('longestKill',400,'hist');
plt.show();
featStat('damageDealt',1000,'hist');
###Output
_____no_output_____
###Markdown
**Heals and boosts**
###Code
featStat('heals',20,'count')
plt.show()
featStat('boosts',12,'count')
###Output
_____no_output_____
###Markdown
**Distance**
###Code
featStat('walkDistance',5000,'hist')
plt.show()
featStat('swimDistance',500,'hist')
plt.show()
featStat('rideDistance',12000,'hist')
###Output
_____no_output_____
###Markdown
**Some other features**
###Code
featStat('weaponsAcquired',15,'count')
plt.show()
featStat('vehicleDestroys',None,'count')
features = ['kills', 'longestKill', 'damageDealt', 'heals', 'boosts', 'walkDistance', 'swimDistance', 'rideDistance', 'weaponsAcquired', 'vehicleDestroys']
zeroPerc = ((train[features] == 0).sum(0) / len(train)*100).sort_values(ascending = False)
sns.barplot(x=zeroPerc.index , y=zeroPerc, color="#2196F3");
plt.title("Percentage of zero values")
plt.tight_layout()
###Output
_____no_output_____
###Markdown
As we can see, as the values of these features increase, the probability of winning also increases, so the features described above correlate well with the target feature. Now plot the remaining features
###Code
df = train.drop(columns=['Id','matchId','groupId','matchType']+features)
df[(df>0) & (df<=df.quantile(0.99))].hist(bins=25,layout=(5,5),figsize=(15,15));
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Feature correlations
###Code
f,ax = plt.subplots(figsize=(15, 13))
sns.heatmap(df.corr(), annot=True, fmt= '.1f',ax=ax,cbar=False)
plt.show()
###Output
_____no_output_____
###Markdown
Take the features that correlate most with the target feature
###Code
f,ax = plt.subplots(figsize=(11, 11))
cols = abs(train.corr()).nlargest(6, 'winPlacePerc')['winPlacePerc'].index
hm = sns.heatmap(np.corrcoef(train[cols].values.T), annot=True, square=True, fmt='.2f', yticklabels=cols.values, xticklabels=cols.values)
print(", ".join(cols[1:]), " most correlate with target feature")
plt.show()
###Output
_____no_output_____
###Markdown
Let's make pairplots. We can clearly see the correlation with winPlacePerc (though for weaponsAcquired it may be difficult to see)
###Code
sns.set(font_scale=2)
sns.pairplot(train, y_vars=["winPlacePerc"], x_vars=cols[1:],height=8);
sns.set(font_scale=1)
###Output
_____no_output_____
###Markdown
Match statistics
###Code
print("Number of match in train dataset:",train['matchId'].nunique())
playersJoined = train.groupby('matchId')['matchId'].transform('count')
sns.countplot(playersJoined[playersJoined>=75])
plt.title('playersJoined');
ngroupsByMatch = train.groupby('matchId')['groupId'].nunique()
ax = sns.countplot(ngroupsByMatch)
plt.title('Number of groups by match');
ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%d'))
ax.xaxis.set_major_locator(ticker.MultipleLocator(base=5)) #Starts from 0 not from 1:(
train.matchDuration.hist(bins=50);
###Output
_____no_output_____
###Markdown
We can see 3 peaks in the second plot and 2 peaks in the match duration plot. Presumably, this depends on the match type. **Some stats by matchType**
###Code
plt.rcParams['figure.figsize'] = 18,7;
types = train.groupby('matchType').size().sort_values(ascending=False)
sns.barplot(x=types.index,y=types.values);
plt.title("Number of players by matchType");
plt.tight_layout()
###Output
_____no_output_____
###Markdown
So, people usually play in squads or pairs (or maybe the data was just collected this way). Finally, some numbers that describe each type of game by the number of players, groups, matches, etc. In this table np.size is the number of players and _num is the number of matches. We can see that maxPlace and numGroups are almost the same.
###Code
def _min(x):
return x.value_counts().values.min()
def _max(x):
return x.value_counts().values.max()
def _avg(x):
return x.value_counts().values.mean()
def _med(x):
return np.median(x.value_counts().values)
def _num(x):
return x.nunique()
infostat = train.groupby('matchType').agg({
"matchId": [np.size, _num, _min,_med,_max], #np.size - number of players, _num - number of matches
"groupId": [_min,_med,_max],
"matchDuration": [min,np.median, max],
"maxPlace": [min,np.median,max],
"numGroups":[min,np.median,max]
}).sort_values(by=('matchId','size'),ascending=False)
display(infostat)
###Output
_____no_output_____
###Markdown
4. Insights and found dependencies We found that walkDistance, killPlace, boosts, weaponsAcquired and damageDealt are the most correlated features. It is easy to guess why. If you finish close to the top, you most likely walked a greater distance, because you have to stay inside the circle (game zone). You are also more likely to find a good weapon and/or kill somebody. If you kill somebody, your enemy can hurt you, so it's better to use a boost afterwards. Near each killed enemy you can find his/her loot, and you will probably acquire some of his/her weapons. We can also see that a lot of people play in squads or duos (i.e., in groups). Players in one team have the same finish placement, and the final result depends on team work, so it's better to look at general statistics by team, not by separate player. *Game zones* 5. Metrics selection This task is a regression problem. The standard regression metrics are `mean absolute error` (MAE), `mean squared error` (MSE; root MSE also exists) and `mean squared log error` (MSLE; root MSLE exists too). Our target has a uniform distribution with range from 0 to 1. MSLE is more appropriate for non-uniform targets, and MSE is usually used when large errors are particularly undesirable. Neither situation applies here, so **MAE** will be convenient for us.  6. Model selection I decided to use **LightGBM**. LightGBM is convenient for working with large datasets (our case), and it is much faster than, for example, XGBoost while giving a comparably good score. There are lots of parameters to tune (its main drawback), and it supports solving regression problems.
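As a quick illustration of this metric choice, here is a minimal sketch comparing the three candidates on made-up predictions (the numbers are hypothetical, not part of the original analysis):
###Code
# Hedged example: compare MAE, MSE and MSLE on toy predictions in the [0, 1] target range.
from sklearn.metrics import mean_absolute_error, mean_squared_error, mean_squared_log_error

y_true = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
y_pred = np.array([0.10, 0.20, 0.55, 0.70, 0.90])

print("MAE :", mean_absolute_error(y_true, y_pred))    # average absolute placement error
print("MSE :", mean_squared_error(y_true, y_pred))     # penalizes large errors more strongly
print("MSLE:", mean_squared_log_error(y_true, y_pred)) # geared towards skewed, non-negative targets
###Output
_____no_output_____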
###Code
import lightgbm as lgb
###Output
_____no_output_____
###Markdown
7. Data preprocessing As I already mentioned, we are going to group player statistics into teams (by groupId).
###Code
# Function which reduces memory usage.
# This function is taken from an existing kernel (https://www.kaggle.com/gemartin/load-data-reduce-memory-usage)
def reduce_mem_usage(df):
""" iterate through all the columns of a dataframe and modify the data type
to reduce memory usage.
"""
start_mem = df.memory_usage().sum() / 1024**2
print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
for col in df.columns:
col_type = df[col].dtype
if col_type != object:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
return df
###Output
_____no_output_____
###Markdown
In the next steps we will create new features, so this step will be repeated later. At this stage we do a simple data preparation: we just group everything by team and then rank teams within each match. > Ranking is a scaling where the lowest value in the initial table is replaced by a value near zero (depending on the distribution; never lower than 0) and the maximum value is replaced by a value near 1 (never higher than 1).
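For intuition, here is a tiny illustrative example of pandas percentile ranking (hypothetical values, not taken from the dataset):
###Code
# Hedged illustration of percentile ranking within a match on a toy column.
toy = pd.DataFrame({'matchId': ['m1'] * 4, 'kills': [0, 1, 1, 5]})
print(toy.groupby('matchId')['kills'].rank(pct=True))
# -> 0.25, 0.625, 0.625, 1.0 : ties share the average rank, the maximum maps to 1.0
###Output
_____no_output_____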
###Code
def initial_preparing(df, Debug):
if Debug:
df = df[df['matchId'].isin(df['matchId'].unique()[:2000])]
# Drop next columns. *Points features don't correlate with target feature, need
# more EDA to understand how they work.
df.drop(columns=['killPoints','rankPoints','winPoints','matchType','maxPlace','Id'],inplace=True)
X = df.groupby(['matchId','groupId']).agg(np.mean)
X = reduce_mem_usage(X)
y = X['winPlacePerc']
X.drop(columns=['winPlacePerc'],inplace=True)
X_ranked = X.groupby('matchId').rank(pct=True)
X = X.reset_index()[['matchId','groupId']].merge(X_ranked, how='left', on=['matchId', 'groupId'] )
X.drop(['matchId','groupId'],axis=1, inplace=True)
X = reduce_mem_usage(X)
return X, y
X_train, y = initial_preparing(train.copy(),False)
###Output
_____no_output_____
###Markdown
Split our train dataset into a part that we are going to train on (X_train; same name) and a hold-out part on which we will check the error.
###Code
from sklearn.model_selection import train_test_split
X_train, X_holdout, y_train, y_holdout = train_test_split(X_train, y, test_size=0.2, random_state=666)
###Output
_____no_output_____
###Markdown
8-9. Cross-validation and adjustment of model hyperparameters. Creation of new features and description of this process
###Code
from sklearn.model_selection import cross_val_score
import sklearn.metrics
from sklearn.model_selection import GridSearchCV
###Output
_____no_output_____
###Markdown
I chose 5 folds for cross-validation. We have a big dataset, so it's not necessary to use more folds: 80% of the data for the training part is enough to train the model. Moreover, if I chose a higher number, it would take a lot of time to compute.
###Code
%%time
lgtrain = lgb.Dataset(X_train, label=y_train.reset_index(drop=True))
res = lgb.cv({'metric': 'mae'},lgtrain, nfold=5,stratified=False,seed=666)
print("Mean score:",res['l1-mean'][-1])
gc.collect()
###Output
_____no_output_____
###Markdown
So, our score is 0.0644. It's not bad: it means our model is off by roughly ±6.4 placements (if there are 100 players on the server). Let's add new features and make the ranking again. When we aggregate the dataset by groupId, we also create "new" features, since we can aggregate in different ways. For example, 'boosts': sum is the total number of boosts used by one team.
###Code
team_features = {
'assists': [sum, np.mean, np.size], #np.size - size of team
'boosts' : [sum, np.var, np.mean],
'heals': [sum, np.var, np.mean],
'damageDealt': [np.var,min,max,np.mean],
'DBNOs': [np.var,max,np.mean],
'headshotKills': [max,np.mean],
'killPlaceScall':[sum, min,max, np.var, np.mean],
'kills': [ sum, max, np.var,np.mean],
'killStreaks': [max,np.var,np.mean],
'longestKill': [max, np.mean, np.var],
'revives': sum,
'rideDistance': [sum, np.mean,np.var],
'swimDistance': [np.var],
'teamKills': sum,
'vehicleDestroys': sum,
'walkDistance': [np.var,np.mean],
'weaponsAcquired': [np.mean],
'damageRate': [np.var,min,max,np.mean],
'headshotRate': [np.var,max,np.mean],
'killStreakRate': [np.var,np.max, np.mean],
'healthItems': [np.var, np.mean],
'healsOnKill': [ np.var, np.mean],
'sniper': [ np.var, np.mean],
'totalDistance': [sum, np.var, np.mean],
'totalDistancePen': [ sum ,np.var, np.mean],
'killsPerRoadDist': [np.mean],
'killsPerWalkDist': [np.mean],
'killsPerDist': [np.mean],
'distance_over_weapons': [np.mean],
'walkDistance_over_heals': [np.mean],
'skill': [np.var,np.mean]
}
###Output
_____no_output_____
###Markdown
**New features** `killPlaceScall` - scaled `killPlace` feature; we just divide `killPlace` by the number of players in the match. `damageRate` - ratio of `kills` to `damageDealt/100`. If `damageRate` > 1, the player killed enemies who were already damaged, so they were easier to kill. If this feature is < 1, the player dealt more damage than the kills he/she got - either the player had a difficult battle or only slightly damaged some players without killing them. `headshotRate` - percentage of headshot kills; shows the skill of the player. `killStreakRate` - percentage of the kill streak among all kills; also shows player skill. `healthItems` - total number of health items (heals + boosts). `healsOnKill` - equal to `healthItems`/`kills`. It shows how well the player fought: if a player doesn't use heals after a kill, it probably means he/she didn't take damage. `sniper` - equal to `longestKill`/100 * `weaponsAcquired`. It tries to capture sniper skill: snipers usually have a good weapon, and to find that weapon a player most likely needs to acquire a lot of other weapons. Yes, it's a strange feature. `totalDistance` - `rideDistance` + `walkDistance` + `swimDistance`. A large distance means the player survived for a long period of time, so he/she will take a good final place. `totalDistancePen` - penalized `totalDistance`, meant to approximate the time of the player's game: vehicle speed is approximately 5 times higher than walking speed, and swimming speed is approximately 10 times lower than walking speed. `killsPerRoadDist` - kills per ride distance. This feature can show skill too, since it's difficult to kill an enemy from a vehicle. `killsPerWalkDist` - represents play style: it shows whether you are a camper or always on the move. `killsPerDist` - just a combination of `killsPerRoadDist` and `killsPerWalkDist`. `distance_over_weapons` - low values can mean the player tries to find loot on his/her own and/or isn't satisfied with his/her equipment; high values can mean the player just takes loot from killed people and/or has good equipment. Of course, it's not always true. `walkDistance_over_heals` - may represent the player's battles per distance. `skill` - equal to `headshotKills` + `roadKills` - `teamKills` (team kills are penalized); just one more indicator of player skill.
###Code
def featuring(df, isTrain, Debug):
y=None
if Debug:
df = df[df['matchId'].isin(df['matchId'].unique()[:2000])]
#Creating new features
#_________________________________________________________________________________________
nplayers = df.groupby('matchId')['matchId'].transform('count')
df['killPlaceScall'] = df['killPlace'] / nplayers
df['damageRate'] = df['kills']/(0.01*df['damageDealt'])
df['headshotRate'] = df['headshotKills']/df['kills']
df['killStreakRate'] = df['killStreaks']/df['kills']
df['healthItems'] = df['heals'] + df['boosts']
df['healsOnKill'] = df['healthItems']/df['kills']
df['sniper'] = df['longestKill']/100*df['weaponsAcquired']
df['totalDistance'] = df['rideDistance'] + df["walkDistance"] + df["swimDistance"]
df['totalDistancePen'] = df['rideDistance']/5 + df["walkDistance"] + df["swimDistance"]*10
df['killsPerRoadDist'] = df['roadKills'] / (df['rideDistance']+1)
df['killsPerWalkDist'] = (df['kills']-df['roadKills']) / (df['walkDistance']+1)
df['killsPerDist'] = df['kills']/(df['totalDistance']+1)
df['distance_over_weapons'] = df['totalDistance'] / df['weaponsAcquired']
df['walkDistance_over_heals'] = df['walkDistance']/100/df['heals']
df["skill"] = df["headshotKills"] + df["roadKills"] - df['teamKills']
df.fillna(0,inplace=True)
df.replace(np.inf, 0, inplace=True)
#_________________________________________________________________________________________
ids = df[['matchId','groupId','Id']]
df.drop(columns=['killPlace','killPoints','rankPoints','winPoints','matchType','maxPlace','Id'],inplace=True)
tfeatures = team_features.copy()
if isTrain:
tfeatures['winPlacePerc'] = max
X = df.groupby(['matchId','groupId']).agg(tfeatures)
X.fillna(0,inplace=True)
X.replace(np.inf, 1000000, inplace=True)
X = reduce_mem_usage(X)
if isTrain:
y = X[('winPlacePerc','max')]
X.drop(columns=[('winPlacePerc','max')],inplace=True)
#Group dataset by matches. To each match apply ranking
X_ranked = X.groupby('matchId').rank(pct=True)
X = X.reset_index()[['matchId','groupId']].merge(X_ranked, suffixes=["", "_rank"], how='left', on=['matchId', 'groupId'] )
ids_after = X[['matchId','groupId']]
ids_after.columns = ['matchId','groupId']
X = X.drop(['matchId','groupId'],axis=1)
X.columns = [a+"_"+b for a,b in X.columns]
X = reduce_mem_usage(X)
return X, y, ids,ids_after
%%time
X_train, y, _,_ = featuring(train,True,False)
X_test, _,ids_init,ids_after = featuring(test,False,False)
###Output
_____no_output_____
###Markdown
Split our train dataset again
###Code
X_train, X_holdout, y_train, y_holdout = train_test_split(X_train, y, test_size=0.2, random_state=666)
%%time
lgtrain = lgb.Dataset(X_train, label=y_train.reset_index(drop=True))
res = lgb.cv({'metric': 'mae'},lgtrain, nfold=5,stratified=False,seed=666)
print("Mean score:",res['l1-mean'][-1])
###Output
_____no_output_____
###Markdown
We get a significant improvement (almost a factor of 2), so the new features really help the model understand the data. Now let's tune LightGBM. To do this, we are going to use GridSearchCV, which helps find the best parameters.
###Code
gridParams = {
'num_leaves': [30,50,100], 'max_depth': [-1,8,15],
'min_data_in_leaf': [100,300,500], 'max_bin': [250,500],
'lambda_l1': [0.01], 'num_iterations': [5],
'nthread': [4], 'seed': [666],
'learning_rate': [0.05], 'metric': ['mae'],
"bagging_fraction" : [0.7], "bagging_seed" : [0], "colsample_bytree" : [0.7]
}
model = lgb.LGBMRegressor()
grid = GridSearchCV(model, gridParams,
verbose=1,
cv=5)
###Output
_____no_output_____
###Markdown
We are going to tune `num_leaves`, `max_depth`, `min_data_in_leaf` and `max_bin`, because these are the main parameters in LightGBM. `num_leaves` - max number of leaves in one tree; it's the main parameter to control the complexity of the tree model. `max_depth` - limit on the max depth of a tree; this is used to deal with over-fitting (-1 means no limit). `min_data_in_leaf` - minimal number of samples in one leaf; this is a very important parameter to prevent over-fitting in a leaf-wise tree, and its optimal value depends on the number of training samples and `num_leaves`. `max_bin` - max number of bins that feature values will be bucketed into; a small number of bins may reduce training accuracy but may increase generalization (deal with over-fitting). Here we take only 500,000 teams out of 1,500,000. As we will see further (on the learning curve), this is enough to find the best params.
###Code
%%time
grid.fit(X_train.iloc[:500000,:], y_train.iloc[:500000])
print("Best params:", grid.best_params_)
print("\nBest score:", grid.best_score_)
params = grid.best_params_
###Output
_____no_output_____
###Markdown
The best score is worse than after cross-validation because only 5 iterations were used here, versus 100 iterations in cross-validation. But it will be fine once we set a higher number of iterations with the parameters we found now. 10. Plotting training and validation curves Now let's plot the learning curve for different training set sizes.
###Code
from sklearn.model_selection import validation_curve
from sklearn.model_selection import learning_curve
model = lgb.LGBMRegressor(learning_rate=0.05,nthread=4)
def plot_with_err(x, data, **kwargs):
mu, std = data.mean(1), data.std(1)
lines = plt.plot(x, mu, '-', **kwargs)
plt.fill_between(x, mu - std, mu + std, edgecolor='none',
facecolor=lines[0].get_color(), alpha=0.2)
def plot_learning_curve():
train_sizes = [1000,5000,10000,50000,100000,500000]
N_train, val_train, val_test = learning_curve(model,
X_train, y_train, train_sizes=train_sizes, cv=5,
scoring='neg_mean_absolute_error')
plot_with_err(N_train, abs(val_train), label='training scores')
plot_with_err(N_train, abs(val_test), label='validation scores')
plt.xlabel('Training Set Size'); plt.ylabel('MAE')
plt.legend()
plot_learning_curve()
###Output
_____no_output_____
###Markdown
As we can see, at small training set sizes there is a big difference between train and validation scores. The reason is overfitting to the training set due to lack of data. But as the size increases, these curves converge; with a training size of 500,000 the difference is very small. That's why I took 500,000 samples instead of the whole training set in GridSearchCV. Now let's look at how the score depends on the number of iterations.
###Code
def iter_vs_score(num_iterations):
val_train, val_test = validation_curve(model, X_train[:500000], y_train[:500000],
'num_iterations', num_iterations, cv=4,scoring='neg_mean_absolute_error', verbose=1)
plot_with_err(num_iterations, abs(val_test), label='validation scores')
plot_with_err(num_iterations, abs(val_train), label='training scores')
plt.xlabel('Number of iterations'); plt.ylabel('MAE')
plt.legend();
plt.show();
num_iterations_small = [5,10,20,30,100,200]
iter_vs_score(num_iterations_small)
num_iterations_big = [500,1000,5000,10000]
iter_vs_score(num_iterations_big)
###Output
_____no_output_____
###Markdown
For a small number of iterations the error falls quickly; for a large number of iterations the error keeps going down, but slowly. We can also notice that validation and training scores are approximately the same for a small number of iterations. For a large number of iterations the training score continues to go down, and the validation score goes down too, but much more slowly. So the curves diverge, yet there is no overfitting, because the validation score keeps decreasing. 11. Prediction for test and hold-out samples Let's train a LightGBM model with the params we found with GridSearchCV. At the same time we will compute the error on the hold-out set every 1000 iterations. The total number of iterations is 5000, which should be enough; with a higher number of iterations we won't get a significant improvement and can even get overfitting.
###Code
%%time
lgtrain = lgb.Dataset(X_train, label=y_train)
lgval = lgb.Dataset(X_holdout, label=y_holdout)
params['num_iterations'] = 5000
model = lgb.train(params, lgtrain, valid_sets=[lgtrain, lgval], early_stopping_rounds=200, verbose_eval=1000)
###Output
_____no_output_____
###Markdown
We get 0.0291 for the hold-out set and 0.0242 for the train set - obviously better than our previous scores. Now let's make a prediction for the test set and write the results to `submission.csv`
###Code
pred_test = model.predict(X_test, num_iteration=model.best_iteration)
ids_after['winPlacePerc'] = pred_test
predict = ids_init.merge(ids_after, how='left', on=['groupId',"matchId"])['winPlacePerc']
df_sub = pd.read_csv("../input/sample_submission_V2.csv")
df_test = pd.read_csv("../input/test_V2.csv")
df_sub['winPlacePerc'] = predict
df_sub[["Id", "winPlacePerc"]].to_csv("submission.csv", index=False)
###Output
_____no_output_____ |
experiments/tl_1/wisig-oracle.run1.framed/trials/2/trial.ipynb | ###Markdown
Transfer Learning Template
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Allowed Parameters. These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean
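For context, a minimal sketch of how such an injection is typically done with papermill (the notebook paths and values below are hypothetical, not the ones used in this experiment):
###Code
# Hedged sketch: programmatic parameter injection with papermill (hypothetical paths/values).
import papermill as pm

pm.execute_notebook(
    "trial.ipynb",        # template notebook containing a cell tagged "parameters"
    "trial_out.ipynb",    # executed copy with the injected values
    parameters=dict(lr=0.001, seed=1337, n_epoch=50),
)
###Output
_____no_output_____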
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_1_wisig-oracle.run1",
"device": "cuda",
"lr": 0.001,
"seed": 1337,
"dataset_seed": 1337,
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_loss",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10",
"1-12",
"1-14",
"1-16",
"1-18",
"1-19",
"1-8",
"10-11",
"10-17",
"10-4",
"10-7",
"11-1",
"11-10",
"11-19",
"11-20",
"11-4",
"11-7",
"12-19",
"12-20",
"12-7",
"13-14",
"13-18",
"13-19",
"13-20",
"13-3",
"13-7",
"14-10",
"14-11",
"14-12",
"14-13",
"14-14",
"14-19",
"14-20",
"14-7",
"14-8",
"14-9",
"15-1",
"15-19",
"15-6",
"16-1",
"16-16",
"16-19",
"16-20",
"17-10",
"17-11",
"18-1",
"18-10",
"18-11",
"18-12",
"18-13",
"18-14",
"18-15",
"18-16",
"18-17",
"18-19",
"18-2",
"18-20",
"18-4",
"18-5",
"18-7",
"18-8",
"18-9",
"19-1",
"19-10",
"19-11",
"19-12",
"19-13",
"19-14",
"19-15",
"19-19",
"19-2",
"19-20",
"19-3",
"19-4",
"19-6",
"19-7",
"19-8",
"19-9",
"2-1",
"2-13",
"2-15",
"2-3",
"2-4",
"2-5",
"2-6",
"2-7",
"2-8",
"20-1",
"20-12",
"20-14",
"20-15",
"20-16",
"20-18",
"20-19",
"20-20",
"20-3",
"20-4",
"20-5",
"20-7",
"20-8",
"3-1",
"3-13",
"3-18",
"3-2",
"3-8",
"4-1",
"4-10",
"4-11",
"5-1",
"5-5",
"6-1",
"6-15",
"6-6",
"7-10",
"7-11",
"7-12",
"7-13",
"7-14",
"7-7",
"7-8",
"7-9",
"8-1",
"8-13",
"8-14",
"8-18",
"8-20",
"8-3",
"8-8",
"9-1",
"9-7",
],
"domains": [1, 2, 3, 4],
"num_examples_per_domain_per_label": 100,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/wisig.node3-19.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "Wisig_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1",
},
],
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assesment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
.ipynb_checkpoints/unbalanced_GW-checkpoint.ipynb | ###Markdown
Unbalanced Gromov-Wasserstein for SCOT. SCOT using the balanced case of Gromov-Wasserstein is sensitive to mass imbalance. By making some changes, we can get SCOT to run using **Unbalanced Gromov-Wasserstein** instead [Sejourne 2020]. This fork of SCOT depends on Thibault Sejourne's PyTorch implementation of the entropic unbalanced GW solver [here](https://github.com/thibsej/unbalanced_gromov_wasserstein). We demonstrate this in this notebook by considering the problem of aligning two datasets generated by sampling from a wishbone-shaped structure with 3 branches. Each dataset is sampled from the structure with a different density profile, such that some branches are underrepresented/overrepresented in each dataset. References [Sejourne 2020] Séjourné, T., Vialard, F.X. and Peyré, G., 2020. The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation. arXiv preprint arXiv:2009.04266.
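As a rough guide to the `rho` parameter used below (an informal summary, not the exact formulation of [Sejourne 2020]): balanced GW forces the coupling $\pi$ to have fixed marginals $\pi_1 = \mu$, $\pi_2 = \nu$, whereas unbalanced GW replaces these hard constraints with soft divergence penalties, $$ \mathrm{UGW}_{\rho}(\mu, \nu) \;\approx\; \inf_{\pi \ge 0} \iint \big| d_X(x, x') - d_Y(y, y') \big|^2 \, d\pi(x, y)\, d\pi(x', y') \;+\; \rho\, D(\pi_1 \,\|\, \mu) \;+\; \rho\, D(\pi_2 \,\|\, \nu), $$ where $D$ is a KL-type divergence (the actual objective in [Sejourne 2020] uses quadratic/tensorized KL terms plus an additional entropic term $\varepsilon$). A small $\rho$ lets the solver create or destroy mass cheaply, so overrepresented branches need not be matched in full; a large $\rho$ recovers behaviour close to balanced GW.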
###Code
import numpy as np
import matplotlib as mp
import matplotlib.pyplot as plt
# generate example data
def f(t, p = 0.5, t0 = 0.3):
if t < t0:
x = np.array([t, 0, 0])
x[0:2] += np.random.randn(2)*0.05
else:
if np.random.rand() < p:
x= np.array([t, np.tanh(t-t0), 1])
x[0:2] += np.random.randn(2)*0.05
else:
x= np.array([t, -np.tanh(t-t0), 2])
x[0:2] += np.random.randn(2)*0.025
return x
def h1(x):
return np.array([x[0]*np.cos(3*x[0]), x[1], x[0]*np.sin(3*x[0])])
def h2(x):
return np.array([x[0]*np.sin(2*x[0]), x[1], x[0]*np.cos(2*x[0]), x[0]**2 + x[1]**2])
###Output
_____no_output_____
###Markdown
Below, we see that $X_1$ and $X_2$ are samplings from the same underlying structure, but in $X_1$ both the green (branch 1) and yellow (branch 2) branches have equal densities, while in $X_2$ the green and yellow branches are sampled in a ratio of roughly 1:3. Additionally, for $X_1$ we sample uniform $t \in [0, 1]$, but for $X_2$ we take $t = \mathcal{N}(0, 0.3^2) \bmod 1$.
###Code
N = 1000
t_range_1 = np.random.uniform(0, 1, N)
X1 = np.stack([f(t) for t in t_range_1])
X1_branch = X1[:, 2]
X1 = X1[:, 0:2]
t_range_2 = np.random.normal(loc = 0, scale = 0.3, size = N) % 1
X2 = np.stack([f(t, p = 0.25) for t in t_range_2])
X2_branch = X2[:, 2]
X2 = X2[:, 0:2]
plt.figure(figsize = (10, 5))
plt.subplot(1, 2, 1)
plt.scatter(X1[:, 0], X1[:, 1], alpha = 1, c = X1_branch, s = 8)
plt.title("X_1")
plt.subplot(1, 2, 2)
plt.scatter(X2[:, 0], X2[:, 1], alpha = 1, c = X2_branch, s = 8)
plt.title("X_2")
print("X_1: prop1/prop2 = %f" % ((X1_branch == 1).mean()/(X1_branch == 2).mean()))
print("X_2: prop1/prop2 = %f" % ((X2_branch == 1).mean()/(X2_branch == 2).mean()))
###Output
X_1: prop1/prop2 = 1.000000
X_2: prop1/prop2 = 0.332657
###Markdown
We also apply nonlinear maps $h_1 : \mathbb{R}^2 \to \mathbb{R}^3$ and $h_2 : \mathbb{R}^2 \to \mathbb{R}^4$ to get the data into different domains
###Code
Y1 = h1(X1.T).T
Y2 = h2(X2.T).T
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize = (15, 7.5))
ax = fig.add_subplot(121, projection='3d')
ax.scatter3D(Y1[:, 0], Y1[:, 1], Y1[:, 2], c = X1_branch)
plt.title("Projection 1 ($h_1$)")
ax = fig.add_subplot(122, projection='3d')
ax.scatter3D(Y2[:, 0], Y2[:, 1], Y2[:, 2], c = X2_branch)
plt.title("Projection 2 ($h_2$)")
import sys
sys.path.insert(0, "./src/")
import utils as ut
import evals as evals
import scot
import importlib
importlib.reload(scot)
importlib.reload(ut)
from scot import *
###Output
_____no_output_____
###Markdown
Now we apply the $z$-score standardization as per SCOT and run SCOT alignment with both balanced and unbalanced Gromov-Wasserstein
###Code
X1_scaled = ut.zscore_standardize(Y1)
X2_scaled = ut.zscore_standardize(Y2)
gamma_bal, _ = scot(X1_scaled, X2_scaled, k=25, e=1e-3, mode="connectivity", metric="minkowski", returnCoupling = True, balanced = True)
gamma_unbal, _ = scot(X1_scaled, X2_scaled, k=25, e=1e-3, rho = 0.01, mode="connectivity", metric="minkowski", returnCoupling = True, balanced = False)
gamma_unbal = gamma_unbal/gamma_unbal.sum()
###Output
_____no_output_____
###Markdown
Now using the obtained couplings, we project the dataset $X_1$ onto the domain of dataset $X_2$ via barycentric projection. Observe how in $X_1$, the green branch (branch 1) is overrepresented compared to its corresponding branch in $X_2$. Using balanced GW, we find that some of the excess green mass gets mapped onto the yellow branch (branch 2). Unbalanced GW avoids this.
###Code
X1_new_bal = ut.transport_data(X1_scaled,X2_scaled,gamma_bal,transposeCoupling=False)
X2_new_bal = X2_scaled
X1_new_unbal = ut.transport_data(X1_scaled,X2_scaled,gamma_unbal,transposeCoupling=False)
X2_new_unbal = X2_scaled
P = (gamma_unbal.T/gamma_unbal.sum(1)).T
X1_new_unbal = P @ X2_scaled
fig = plt.figure(figsize = (15, 5))
ax = fig.add_subplot(131, projection='3d')
ax.scatter3D(X2_new_bal[:, 0], X2_new_bal[:, 1], c = X2_branch, marker = "o")
plt.title("True")
ax = fig.add_subplot(132, projection='3d')
ax.scatter3D(X1_new_bal[:, 0], X1_new_bal[:, 1], c = X1_branch, marker = "^")
plt.title("Balanced GW")
ax = fig.add_subplot(133, projection='3d')
ax.scatter3D(X1_new_unbal[:, 0], X1_new_unbal[:, 1], c = X1_branch, marker = "^")
plt.title("Unbalanced GW")
###Output
_____no_output_____
###Markdown
Here, we let $\pi_0$ be uniform on points that are branch 1 in $X_1$, and we compute $\pi_0 P$ where $P$ is the transition matrix that we get by row-normalising the GW coupling matrix. This shows us where all the mass from branch 1 goes when we project to $X_2$ coordinates...
###Code
plt.figure(figsize = (15, 5))
plt.subplot(1, 3, 1)
pi1 = np.ones(X1_branch.shape)
pi1[X1_branch != 1] = 0
plt.scatter(X1[:, 0], X1[:, 1], c = pi1)
plt.title("Source branch")
plt.subplot(1, 3, 3)
P = (gamma_unbal.T/gamma_unbal.sum(1)).T
pi2 = pi1 @ P
mean_err = (P*(X1_branch[:, None] != X2_branch[None, :])).sum(1).mean()
plt.scatter(X2[:, 0], X2[:, 1], c = pi2, vmin = 0, vmax = np.quantile(pi2, 0.95))
plt.title("Unbalanced GW, err = %0.3f" % mean_err)
plt.subplot(1, 3, 2)
P = (gamma_bal.T/gamma_bal.sum(1)).T
pi1 = np.ones(X1_branch.shape)
pi1[X1_branch != 1] = 0
pi2 = pi1 @ P
mean_err = (P*(X1_branch[:, None] != X2_branch[None, :])).sum(1).mean()
plt.scatter(X2[:, 0], X2[:, 1], c = pi2, vmin = 0, vmax = np.quantile(pi2, 0.95))
plt.title("Balanced GW, err = %0.3f" % mean_err)
###Output
_____no_output_____ |
Tutorials/Advanced_NN/8_Optimization_for_Training_Deep_Models/8_7_Optimization_Strategies_and_Meta_Algorithms.ipynb | ###Markdown
8.7 Optimization Strategies and Meta-Algorithms --------------------------------------------------------------------- You can find me on GitHub: > [ GitHub](https://github.com/lev1khachatryan) Many optimization techniques are not exactly algorithms, but rather general templates that can be specialized to yield algorithms, or subroutines that can be incorporated into many different algorithms. 8.7.1 Batch Normalization --------------------------------------------------------------------- Batch normalization (Ioffe and Szegedy, 2015) is one of the most exciting recent innovations in optimizing deep neural networks, and it is actually not an optimization algorithm at all. Instead, it is a method of adaptive reparametrization, motivated by the difficulty of training very deep models. Very deep models involve the composition of several functions or layers. The gradient tells how to update each parameter, under the assumption that the other layers do not change. In practice, we update all of the layers simultaneously. When we make the update, unexpected results can happen because many functions composed together are changed simultaneously, using updates that were computed under the assumption that the other functions remain constant. As a simple example, suppose we have a deep neural network that has only one unit per layer and does not use an activation function at each hidden layer: $\hat{y} = x w_{1} w_{2} w_{3} \cdots w_{l}$. Here, $w_i$ provides the weight used by layer $i$. The output of layer $i$ is $h_{i} = h_{i-1} w_{i}$. The output $\hat{y}$ is a linear function of the input $x$, but a nonlinear function of the weights $w_{i}$. Suppose our cost function has put a gradient of 1 on $\hat{y}$, so we wish to decrease $\hat{y}$ slightly. The back-propagation algorithm can then compute a gradient $g = \nabla_{w} \hat{y}$. Consider what happens when we make an update $w \leftarrow w - \epsilon g$. The first-order Taylor series approximation of $\hat{y}$ predicts that the value of $\hat{y}$ will decrease by $\epsilon g^{T} g$. If we wanted to decrease $\hat{y}$ by 0.1, this first-order information available in the gradient suggests we could set the learning rate $\epsilon$ to $0.1 / (g^{T} g)$. However, the actual update will include second-order and third-order effects, on up to effects of order $l$. The new value of $\hat{y}$ is given by $x (w_1 - \epsilon g_1)(w_2 - \epsilon g_2) \cdots (w_l - \epsilon g_l)$. An example of one second-order term arising from this update is $\epsilon^{2} g_{1} g_{2} \prod_{i=3}^l w_i$. This term might be negligible if $\prod_{i=3}^l w_i$ is small, or might be exponentially large if the weights on layers 3 through $l$ are greater than 1. This makes it very hard to choose an appropriate learning rate, because the effects of an update to the parameters for one layer depend so strongly on all of the other layers. Second-order optimization algorithms address this issue by computing an update that takes these second-order interactions into account, but we can see that in very deep networks, even higher-order interactions can be significant. Even second-order optimization algorithms are expensive and usually require numerous approximations that prevent them from truly accounting for all significant second-order interactions. Building an n-th order optimization algorithm for n > 2 thus seems hopeless. What can we do instead? Batch normalization provides an elegant way of reparametrizing almost any deep network. The reparametrization significantly reduces the problem of coordinating updates across many layers. Batch normalization can be applied to any input or hidden layer in a network.
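Before moving on, here is a small numerical sketch of the learning-rate difficulty described above (my own illustration, not from the original text): with $l$ layers of scalar weights, the actual change in $\hat{y}$ after one gradient step can differ wildly from the first-order prediction $-\epsilon g^{T} g$, which is exactly why choosing $\epsilon$ is hard here.
###Code
# Hedged illustration: first-order prediction vs. actual change of y_hat = x * w1 * ... * wl
# after the update w <- w - eps * g. Values are arbitrary.
import numpy as np

x, l, eps = 1.0, 10, 0.01
w = np.full(l, 1.5)                      # all weights > 1, so products of weights blow up

y = x * np.prod(w)
g = np.array([x * np.prod(w) / w[i] for i in range(l)])   # dy/dw_i for this linear chain

predicted_change = -eps * g @ g          # first-order Taylor estimate
actual_change = x * np.prod(w - eps * g) - y

print("predicted:", predicted_change)
print("actual   :", actual_change)       # higher-order terms make these very different
###Output
_____no_output_____
###Markdown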
Let H be a minibatch of activations of the layerto normalize, arranged as a design matrix, with the activations for each exampleappearing in a row of the matrix. To normalize H, we replace it with where µ is a vector containing the mean of each unit and σ is a vector containingthe standard deviation of each unit. The arithmetic here is based on broadcastingthe vector µ and the vector σ to be applied to every row of the matrix H . Withineach row, the arithmetic is element-wise, so $H_{i,j}$ is normalized by subtracting $µ_{j}$ and dividing by $σ_j$. The rest of the network then operates on $H^{'}$ in exactly thesame way that the original network operated on H. At training time, and where δ is a small positive value such as $10^{−8}$ imposed to avoid encounteringthe undefined gradient of √z at z = 0. Crucially, we back-propagate throughthese operations for computing the mean and the standard deviation, and forapplying them to normalize H. This means that the gradient will never proposean operation that acts simply to increase the standard deviation or mean of$h_i$; the normalization operations remove the effect of such an action and zeroout its component in the gradient. This was a major innovation of the batchnormalization approach. Previous approaches had involved adding penalties tothe cost function to encourage units to have normalized activation statistics orinvolved intervening to renormalize unit statistics after each gradient descent step.The former approach usually resulted in imperfect normalization and the latterusually resulted in significant wasted time as the learning algorithm repeatedlyproposed changing the mean and variance and the normalization step repeatedlyundid this change. Batch normalization reparametrizes the model to make someunits always be standardized by definition, deftly sidestepping both problems. At test time, µ and σ may be replaced by running averages that were collectedduring training time. This allows the model to be evaluated on a single example,without needing to use definitions of µ and σ that depend on an entire minibatch. Revisiting the yˆ = $xw_{1}w_{2} . . . w_{l}$ example, we see that we can mostly resolve thedifficulties in learning this model by normalizing $h_{l−1}$. Suppose that x is drawnfrom a unit Gaussian. Then $h_{l−1}$ will also come from a Gaussian, because thetransformation from x to $h_{l}$ is linear. However, $h_{l−1}$ will no longer have zero meanand unit variance. After applying batch normalization, we obtain the normalized$\hat{h}_{l−1}$ that restores the zero mean and unit variance properties. For almost anyupdate to the lower layers, $\hat{h}_{l−1}$ will remain a unit Gaussian. The output yˆ maythen be learned as a simple linear function yˆ = $w_{l} \hat{h}_{l−1}$. Learning in this model isnow very simple because the parameters at the lower layers simply do not have aneffect in most cases; their output is always renormalized to a unit Gaussian. Insome corner cases, the lower layers can have an effect. Changing one of the lowerlayer weights to 0 can make the output become degenerate, and changing the sign of one of the lower weights can flip the relationship between $\hat{h}_{l−1}$ and y. Thesesituations are very rare. Without normalization, nearly every update would havean extreme effect on the statistics of $h_{l−1}$. Batch normalization has thus madethis model significantly easier to learn. In this example, the ease of learning ofcourse came at the cost of making the lower layers useless. 
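As an added illustration (a minimal NumPy sketch, not from the original text), the normalization just described can be written directly from the definitions above: each column of a minibatch design matrix $H$ is shifted by its mean $\mu_{j}$ and divided by its standard deviation $\sigma_{j}$, with the small constant $\delta = 10^{-8}$ keeping the square root well-behaved.
###Code
import numpy as np

def batch_normalize(H, delta=1e-8):
    # Sketch of the normalization described above; H holds one example per row, one unit per column
    mu = H.mean(axis=0)                                     # per-unit mean over the minibatch
    sigma = np.sqrt(delta + ((H - mu) ** 2).mean(axis=0))   # per-unit std; delta avoids sqrt(0)
    return (H - mu) / sigma                                 # broadcasts mu and sigma over the rows

H = 5.0 + 3.0 * np.random.randn(64, 8)        # a fake minibatch of activations
H_prime = batch_normalize(H)
print(H_prime.mean(axis=0).round(6), H_prime.std(axis=0).round(3))  # ~0 mean and ~1 std per unit
###Output
_____no_output_____
###Markdown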
In our linear example,the lower layers no longer have any harmful effect, but they also no longer haveany beneficial effect. This is because we have normalized out the first and secondorder statistics, which is all that a linear network can influence. In a deep neuralnetwork with nonlinear activation functions, the lower layers can perform nonlineartransformations of the data, so they remain useful. Batch normalization acts tostandardize only the mean and variance of each unit in order to stabilize learning,but allows the relationships between units and the nonlinear statistics of a singleunit to change. Because the final layer of the network is able to learn a linear transformation,we may actually wish to remove all linear relationships between units within alayer. Indeed, this is the approach taken by Desjardins et al. (2015), who providedthe inspiration for batch normalization. Unfortunately, eliminating all linearinteractions is much more expensive than standardizing the mean and standarddeviation of each individual unit, and so far batch normalization remains the mostpractical approach Normalizing the mean and standard deviation of a unit can reduce the expressivepower of the neural network containing that unit. In order to maintain theexpressive power of the network, it is common to replace the batch of hidden unitactivations H with γH' +β rather than simply the normalized H'. The variablesγ and β are learned parameters that allow the new variable to have any meanand standard deviation. At first glance, this may seem useless—why did we setthe mean to 0, and then introduce a parameter that allows it to be set back toany arbitrary value β? The answer is that the new parametrization can representthe same family of functions of the input as the old parametrization, but the newparametrization has different learning dynamics. In the old parametrization, themean of H was determined by a complicated interaction between the parametersin the layers below H. In the new parametrization, the mean of γH' + β isdetermined solely by β. The new parametrization is much easier to learn withgradient descent. Most neural network layers take the form of φ(XW + b) where φ is somefixed nonlinear activation function such as the rectified linear transformation. Itis natural to wonder whether we should apply batch normalization to the inputX , or to the transformed value XW + b. Ioffe and Szegedy (2015) recommend the latter. More specifically, XW + b should be replaced by a normalized versionof XW. The bias term should be omitted because it becomes redundant withthe β parameter applied by the batch normalization reparametrization. The inputto a layer is usually the output of a nonlinear activation function such as therectified linear function in a previous layer. The statistics of the input are thusmore non-Gaussian and less amenable to standardization by linear operations. In convolutional networks, it is important to apply thesame normalizing µ and σ at every spatial location within a feature map, so thatthe statistics of the feature map remain the same regardless of spatial location. 8.7.1.1 Batch Normalization Layers--------------------------------------------------------------------- The batch normalization methods for fully-connected layers and convolutional layers are slightly different. This is due to the dimensionality of the data generated by convolutional layers. We discuss both cases below. 
Note that one of the key differences between BN and other layers is that BN operates on a a full minibatch at a time (otherwise it cannot compute the mean and variance parameters per batch). 8.7.1.1.1 Fully-Connected Layers--------------------------------------------------------------------- Usually we apply the batch normalization layer between the affine transformation and the activation function in a fully-connected layer. In the following, we denote by u the input and by x=Wu+b the output of the linear transform. This yields the following variant of BN: Recall that mean and variance are computed on the same minibatch B on which the transformation is applied. Also recall that the scaling coefficient γ and the offset β are parameters that need to be learned. They ensure that the effect of batch normalization can be neutralized as needed. 8.7.1.1.2 Convolutional Layers--------------------------------------------------------------------- For convolutional layers, batch normalization occurs after the convolution computation and before the application of the activation function. If the convolution computation outputs multiple channels, we need to carry out batch normalization for each of the outputs of these channels, and each channel has an independent scale parameter and shift parameter, both of which are scalars. Assume that there are m examples in the mini-batch. On a single channel, we assume that the height and width of the convolution computation output are p and q , respectively. We need to carry out batch normalization for m×p×q elements in this channel simultaneously. While carrying out the standardization computation for these elements, we use the same mean and variance. In other words, we use the means and variances of the m×p×q elements in this channel rather than one per pixel. 8.7.1.1.3 Batch Normalization During Prediction--------------------------------------------------------------------- At prediction time, we might not have the luxury of computing offsets per batch—we might be required to make one prediction at a time. Secondly, the uncertainty in μ and σ , as arising from a minibatch are undesirable once we’ve trained the model. One way to mitigate this is to compute more stable estimates on a larger set for once (e.g. via a moving average) and then fix them at prediction time. Consequently, BN behaves differently during training and at test time (recall that dropout also behaves differently at train and test times). 8.7.1.2 Implementation from Scratch---------------------------------------------------------------------
###Code
# import d2l
# from mxnet import autograd, gluon, nd, init
# from mxnet.gluon import nn
# def batch_norm(X, gamma, beta, moving_mean, moving_var, eps, momentum):
# # Use autograd to determine whether the current mode is training mode or
# # prediction mode
# if not autograd.is_training():
# # If it is the prediction mode, directly use the mean and variance
# # obtained from the incoming moving average
# X_hat = (X - moving_mean) / nd.sqrt(moving_var + eps)
# else:
# assert len(X.shape) in (2, 4)
# if len(X.shape) == 2:
# # When using a fully connected layer, calculate the mean and
# # variance on the feature dimension
# mean = X.mean(axis=0)
# var = ((X - mean) ** 2).mean(axis=0)
# else:
# # When using a two-dimensional convolutional layer, calculate the
# # mean and variance on the channel dimension (axis=1). Here we
# # need to maintain the shape of X, so that the broadcast operation
# # can be carried out later
# mean = X.mean(axis=(0, 2, 3), keepdims=True)
# var = ((X - mean) ** 2).mean(axis=(0, 2, 3), keepdims=True)
# # In training mode, the current mean and variance are used for the
# # standardization
# X_hat = (X - mean) / nd.sqrt(var + eps)
# # Update the mean and variance of the moving average
# moving_mean = momentum * moving_mean + (1.0 - momentum) * mean
# moving_var = momentum * moving_var + (1.0 - momentum) * var
# Y = gamma * X_hat + beta # Scale and shift
# return Y, moving_mean, moving_var
###Output
_____no_output_____
###Markdown
Now, we can customize a BatchNorm layer. This retains the scale parameter gamma and the shift parameter beta involved in gradient finding and iteration, and it also maintains the mean and variance obtained from the moving average, so that they can be used during model prediction. The num_features parameter required by the BatchNorm instance is the number of outputs for a fully-connected layer and the number of output channels for a convolutional layer. The num_dims parameter also required by this instance is 2 for a fully-connected layer and 4 for a convolutional layer.Besides the algorithm per se, also note the design pattern in implementing layers. Typically one defines the math in a separate function, say batch_norm. This is then integrated into a custom layer that mostly focuses on bookkeeping, such as moving data to the right device context, ensuring that variables are properly initialized, keeping track of the running averages for mean and variance, etc. That way we achieve a clean separation of math and boilerplate code. Also note that for the sake of convenience we did not add automagic size inference here, hence we will need to specify the number of features throughout (the Gluon version will take care of this for us).
###Code
# class BatchNorm(nn.Block):
# def __init__(self, num_features, num_dims, **kwargs):
# super(BatchNorm, self).__init__(**kwargs)
# if num_dims == 2:
# shape = (1, num_features)
# else:
# shape = (1, num_features, 1, 1)
# # The scale parameter and the shift parameter involved in gradient
# # finding and iteration are initialized to 0 and 1 respectively
# self.gamma = self.params.get('gamma', shape=shape, init=init.One())
# self.beta = self.params.get('beta', shape=shape, init=init.Zero())
# # All the variables not involved in gradient finding and iteration are
# # initialized to 0 on the CPU
# self.moving_mean = nd.zeros(shape)
# self.moving_var = nd.zeros(shape)
# def forward(self, X):
# # If X is not on the CPU, copy moving_mean and moving_var to the
# # device where X is located
# if self.moving_mean.context != X.context:
# self.moving_mean = self.moving_mean.copyto(X.context)
# self.moving_var = self.moving_var.copyto(X.context)
# # Save the updated moving_mean and moving_var
# Y, self.moving_mean, self.moving_var = batch_norm(
# X, self.gamma.data(), self.beta.data(), self.moving_mean,
# self.moving_var, eps=1e-5, momentum=0.9)
# return Y
###Output
_____no_output_____ |
notebooks/4-GRFN-demo.ipynb | ###Markdown
Pangeo demo: Getting Ready for NiSAR (GRFN)This notebook demonstrates advanced analysis of GRFN InSAR data using Pangeo cloud-based software.The images total 25Gb: over 200 unwrapped phase interferograms with 30x30m postingIn particular, we'll explore data exploration and analysis with Python tools. **The computation is running on the Google Cloud next to the data** To run each code cell, use 'shift+enter'**Warning!** you can modify this notebook, upload files, and save files listed on the left (right-click and you will see a download option). BUT!... it is an ephemeral demo. Work will be lost if you leave this idle for a bit. Everything shuts down automatically.
###Code
# Import python packages
import rasterio
from rasterio.mask import mask
import xarray as xr
import numpy as np
import hvplot.xarray
import hvplot.pandas
import holoviews as hv
import gcsfs
import intake
import pandas as pd
import geopandas as gpd
import os.path
from dask_kubernetes import KubeCluster
from dask.distributed import Client
%matplotlib inline
###Output
_____no_output_____
###Markdown
Launch a Kubernetes ClusterWe can use a Kubernetes cluster to increase our computational resources. 10 workers are selected by default (each w/ customizable CPUs and RAM). When parallelizable computations are requested, you'll see the cluster activity on the right-hand dashboards
###Code
cluster = KubeCluster(n_workers=10)
cluster
client = Client(cluster)
###Output
_____no_output_____
###Markdown
List files on Cloud Storage
###Code
# We've converted GRFN interferograms to cloud-optimized geotiffs
# And made them available in a public cloud bucket
bucket = 'grfn-hawaii-124-cog'
# This creates a virtual local file listing
fs = gcsfs.GCSFileSystem(project='pangeo-181919')
images = fs.ls(f'pangeo-data/{bucket}')
print('Number of images:', len(images))
print('First image:', images[0])
# Each of these images has an associated public URL:
# We'll use pandas to make a sorted dataframe of all the images
def parse_name(gsPath, key='date1'):
''' grab project, bucket, date1, date2, format from file name, return dictionary'''
pattern = '{project}/{bucket}/{date1:%Y%m%d}-{date2:%Y%m%d}-{format}'
parsed = intake.source.utils.reverse_format(pattern, gsPath)
val = parsed[key]
return val
def make_dataframe(images):
''' organize pandas dataframe by parsing filename'''
df = pd.DataFrame(dict(gs=images))
df = df.sort_values('gs').reset_index(drop=True)
df['url'] = 'http://storage.googleapis.com/' + df.gs.str[:]
df['date1'] = df.gs.apply(parse_name, args=('date1',))
df['date2'] = df.gs.apply(parse_name, args=('date2',))
df['dt'] = df.date1 - df.date2
return df
df = make_dataframe(images)
print('Total images:', len(df))
print('First date:', df.date2.iloc[0])
print('Last date:', df.date1.iloc[-1])
df.head()
###Output
_____no_output_____
###Markdown
Read Cloud-optimized geotiffs (COGs)
###Code
# Rasterio uses the gdal vsicurl system to access files
# on a cloud server
env = rasterio.Env(GDAL_DISABLE_READDIR_ON_OPEN='EMPTY_DIR',
CPL_VSIL_CURL_USE_HEAD=False,
CPL_VSIL_CURL_ALLOWED_EXTENSIONS='TIF',
)
# Read the first file in the set of images into xarray DataArray (w/ dask)
# note this is very fast b/c only metadata is downloaded to local memory
# chunks are based on cloud-optimized geotiff internal tiling
xchunk = 512
ychunk = 512
with env:
da = xr.open_rasterio(df.url[0], parse_coordinates=True, chunks={'band': 1, 'x': xchunk, 'y': ychunk})
###Output
_____no_output_____
###Markdown
Create an xarray DataSet
###Code
# Since all these images are pre-aligned ('analysis ready')
# we get best performance loading w/o metadata & coordinate checking
def create_dataset(df, chunks={'band': 1, 'x': 5120, 'y': 512}):
# Note: this takes a minute b/c coordinate alignment is checked
from ipywidgets import IntProgress
from IPython.display import display
probar = IntProgress(value=0, min=0, max=len(df), step=1,
description='Loading:')
display(probar)
#print(rasterio.env.getenv())
datasets = []
# Create dataset to fill based on first image
da = xr.open_rasterio(df.url[0],
parse_coordinates=True,
chunks=chunks)
probar.value += 1
datasets.append(da.to_dataset(name='unw'))
# Loop over remaining images to fill array
for i,row in df[1:].iterrows():
probar.value += 1
url = row.url
da = xr.open_rasterio(url, parse_coordinates=False, chunks=chunks)
datasets.append(da.to_dataset(name='unw'))
ds = xr.concat(datasets, dim='band')
ds.coords['band'] = np.arange(len(df))
return ds
with env:
DS = create_dataset(df)
print('Dataset size (Gb): ', DS.nbytes/1e9)
###Output
_____no_output_____
###Markdown
Add a coastline water mask
###Code
# Add a coastline mask to the dataset
# Land water mask (WGS84latlon epsg:4326)
gf = gpd.read_file("hawaii-gshhs.geojson")
gf.geometry.iloc[0]
# NOTE shapefle rasterization and projection from WGS84 to UTM
with rasterio.open(df.url.iloc[0]) as src:
projected = gf.to_crs(src.crs)
out_image, out_transform = mask(src, projected.geometry.values, indexes=1)
water = (out_image == 0)
DS.coords['mask'] = (('y', 'x'), water)
DSmasked = DS.where(DS.mask == False).chunk(chunks={'band': 1, 'x': 5120, 'y': 512})
DSmasked
###Output
_____no_output_____
###Markdown
Interactive visualization with holoviews**NOTE:** you may need to resize this pane to see all the buttons (drag grey separator bar to the right)* Once in an xarray DataSet, hvplot can easily display images interactively:* Note column of buttons on upper right side of figure.* In addition to buttons, there is a time slider for band selection * click slider button and use arrow keys for fine control* Box zoom button updates displayed resolution on the fly* Moving cursor over image gives coordinates and unwrapped phase value
###Code
img = DSmasked.hvplot('x', 'y', groupby='band', dynamic=True, rasterize=True,
width=700, height=500, cmap='magma')
limits = hv.streams.RangeXY(source=img)
img
###Output
_____no_output_____
###Markdown
Save current view / subsetWe can save a local copy of the current image with a function.* select band=1 in interactive image browser above * zoom into volcano deformation zone in south (bright area) * run 2 cells below to save the local image subset * a geotiff will appear in the file browser on the left * right click the file and select 'download to get it on your laptop'
###Code
def get_src(img):
''' get current image displayed '''
image_no = img.callback.args
image_url = df.url.iloc[image_no]
return image_url
def get_window(img,src):
''' get current rasterio window from holoviews plot '''
limits = img.streams[1]
    if limits.x_range is None:
bounds = src.bounds
else:
bounds = (limits.x_range[0], limits.y_range[0], limits.x_range[1], limits.y_range[1])
uly,ulx = src.index(bounds[0], bounds[3])
lry,lrx = src.index(bounds[2], bounds[1])
width = lrx - ulx
height = lry - uly
return rasterio.windows.Window(ulx, uly, width, height)
def save_current_view(img, name='local-image.tif'):
from ipywidgets import IntProgress
from IPython.display import display
probar = IntProgress(value=0, min=0, max=4, step=1,
description='Saving:')
display(probar)
with env:
image_url = get_src(img)
print(f'Saving {image_url}...')
with rasterio.open(image_url) as src:
probar.value +=1
profile = src.profile.copy()
window = get_window(img, src)
print(window)
win_transform = src.window_transform(window)
probar.value +=1
data = src.read(1, window=window)
profile.update({
'dtype': 'float32',
'height': data.shape[0],
'width': data.shape[1],
'blockxsize': 256,
'blockysize': 256,
'transform': win_transform})
probar.value += 1
localname = 'subset-' + os.path.basename(src.name)
with rasterio.open(localname, 'w', **profile) as dst:
dst.write_band(1, data)
probar.value +=1
return localname
localname = save_current_view(img)
# Load and plot the saved subset to verify it's the same
# Since this is only a single file, we won't load with dask
print(localname)
with env:
with rasterio.open(localname) as src:
print(src.profile)
da = xr.open_rasterio(src.name)
da.hvplot('x', 'y', groupby='band', dynamic=True, rasterize=True,
width=700, height=500, cmap='magma')
###Output
_____no_output_____
###Markdown
Parallel computationsWith xarray DataSets, we can do parallel computations on the KubeCluster, using dask behind the scenes. Here is a simple example getting the mean phase value for each interferogram
###Code
def get_xarray_selection(img, band=False):
''' get selection dictionary from hvplot'''
selection = {}
selection['x'] = slice(*limits.x_range)
selection['y'] = slice(*limits.y_range[::-1])
if band:
selection['band'] = [img.callback.args[0],]
return selection
# Reset chunks after selection for better performance
ds = DSmasked.sel(get_xarray_selection(img))
ds = ds.chunk(dict(band=213,x=512,y=512))
ds
# Confirm we've got the same region
#ds.hvplot('x', 'y', groupby='band',dynamic=True, rasterize=True,
# width=700, height=500, cmap='magma')
# Basic Stack
# NOTE: haven't normalized to common reference point, this is just for illustration purposes
stack = ds.where(ds.mask == False).mean(dim='band')
stack
# keep in distributed cluster memory
ds_stack = stack.persist()
ds_stack.unw.plot.imshow(center=False, cmap='magma')
# Get all values of pixel at a specfic easting, northing
# compute pulls from distributed memory to local RAM
xcen = 260000
ycen = 2145000
ts = ds.sel(x=xcen, y=ycen, method='nearest').compute()
ts
s = ts.unw.to_series()
# Plot this
# Holoviews is also great for interative 2D plots
#line = s.hvplot(width=700, height=300, legend=False)
points = s.hvplot.scatter(width=700, height=300, legend=False)
label = f'Unwrapped LOS Phase [rad]: easting={xcen:g} , northing={ycen:g}'
#(line * points).relabel(label)
points.relabel(label)
# Data from plot can easily be saved to a CSV
#points.data.to_csv()
#or
s.to_csv('time-series.csv')
###Output
_____no_output_____ |
Untitled_checkpoint.ipynb | ###Markdown
###Code
# Import the required libraries
import pandas as pd
import os
import glob
import matplotlib.pyplot as plt
from matplotlib import font_manager, rc
import seaborn as sns
import csv
from collections import Counter
# Load the data and check its basic info
#data = pd.read_csv('black_ice.csv', thousands = ',', encoding='cp949')
#data = pd.read_csv('black_ice_processed.csv', thousands = ',', encoding='cp949')
#data = pd.read_csv('중앙선_죽령터널(노면온도).csv', thousands = ',', encoding='cp949')
data = pd.read_csv('중앙선_죽령터널(노면온도)_rm_NAN.csv', thousands = ',', encoding='cp949')
data.info()
print(len(data.index))
window_size = 5
new_data = pd.DataFrame(index = range(0, len(data.index) - window_size + 1),
columns = ['노선', '위치', '수집일시', '노면온도1', '노면온도2', '노면온도3', '노면온도4', '노면온도5',
'대기온도1', '대기온도2', '대기온도3', '대기온도4', '대기온도5',
'습도1', '습도2', '습도3', '습도4', '습도5', '기압', '풍속', '시간강수량', '6시간누적강수량',
'5시간평균노면온도', '5시간평균대기온도', '노면대기온도차', '5시간평균노면대기온도차', '노면상태'])
new_index = 0
for index in range(len(data.index)):
    if index < window_size:
        # Skip the first window_size rows: they do not yet have a full 5-row history
        continue
    new_data['노선'][new_index] = data['노선'][index]
    new_data['위치'][new_index] = data['위치'][index]
    new_data['수집일시'][new_index] = data['수집일시'][index]
    # Road-surface temperatures of the previous window_size rows (e.g. index=5 -> rows 0..4)
    new_data['노면온도1'][new_index] = data['노면온도'][index - window_size]      # 5 - 5 = 0
    new_data['노면온도2'][new_index] = data['노면온도'][index - window_size + 1]  # 5 - 5 + 1 = 1
    new_data['노면온도3'][new_index] = data['노면온도'][index - window_size + 2]  # 5 - 5 + 2 = 2
    new_data['노면온도4'][new_index] = data['노면온도'][index - window_size + 3]  # 5 - 5 + 3 = 3
    new_data['노면온도5'][new_index] = data['노면온도'][index - window_size + 4]  # 5 - 5 + 4 = 4
    new_data['5시간평균노면온도'][new_index] = (new_data['노면온도5'][new_index] +
                                         new_data['노면온도4'][new_index] +
                                         new_data['노면온도3'][new_index] +
                                         new_data['노면온도2'][new_index] +
                                         new_data['노면온도1'][new_index]) / 5
    # Air temperatures of the previous window_size rows
    new_data['대기온도1'][new_index] = data['대기온도'][index - window_size]
    new_data['대기온도2'][new_index] = data['대기온도'][index - window_size + 1]
    new_data['대기온도3'][new_index] = data['대기온도'][index - window_size + 2]
    new_data['대기온도4'][new_index] = data['대기온도'][index - window_size + 3]
    new_data['대기온도5'][new_index] = data['대기온도'][index - window_size + 4]
    new_data['5시간평균대기온도'][new_index] = (new_data['대기온도5'][new_index] +
                                         new_data['대기온도4'][new_index] +
                                         new_data['대기온도3'][new_index] +
                                         new_data['대기온도2'][new_index] +
                                         new_data['대기온도1'][new_index]) / 5
    # Humidity of the previous window_size rows
    new_data['습도1'][new_index] = data['습도'][index - window_size]
    new_data['습도2'][new_index] = data['습도'][index - window_size + 1]
    new_data['습도3'][new_index] = data['습도'][index - window_size + 2]
    new_data['습도4'][new_index] = data['습도'][index - window_size + 3]
    new_data['습도5'][new_index] = data['습도'][index - window_size + 4]
    # Road-surface / air temperature differences
    new_data['노면대기온도차'][new_index] = new_data['노면온도5'][new_index] - new_data['5시간평균대기온도'][new_index]
    new_data['5시간평균노면대기온도차'][new_index] = new_data['5시간평균노면온도'][new_index] - new_data['5시간평균대기온도'][new_index]
    new_data['6시간누적강수량'][new_index] = data['6시간누적강수량'][index]
    new_data['노면상태'][new_index] = data['노면상태'][index]
    new_index = new_index + 1
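# Added note (not part of the original notebook): the explicit loop above could also be
# expressed with pandas rolling windows, which is usually much faster on large frames.
# A hypothetical equivalent for the 5-row mean road-surface temperature would be:
#   data['5시간평균노면온도'] = data['노면온도'].rolling(window_size).mean().shift(1)
# where shift(1) makes the window cover the previous window_size rows, matching the loop above.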
###Output
[output condensed: this cell printed its loop index once per processed row, the integers 0 through 3968 — 3,969 lines of output omitted here]
|
ml/cc/exercises/estimators/synthetic_features_and_outliers.ipynb | ###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Synthetic Features and Outliers **Learning Objectives:** * Create a synthetic feature that is the ratio of two other features * Use this new feature as an input to a linear regression model * Improve the effectiveness of the model by identifying and clipping (removing) outliers out of the input data Let's revisit our model from the previous First Steps with TensorFlow exercise. First, we'll import the California housing data into a *pandas* `DataFrame`: Setup
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.metrics as metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
california_housing_dataframe["median_house_value"] /= 1000.0
california_housing_dataframe
###Output
_____no_output_____
###Markdown
Next, we'll set up our input function, and define the function for model training:
###Code
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a linear regression model of one feature.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(buffer_size=10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_model(learning_rate, steps, batch_size, input_feature):
"""Trains a linear regression model.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
input_feature: A `string` specifying a column from `california_housing_dataframe`
to use as input feature.
Returns:
A Pandas `DataFrame` containing targets and the corresponding predictions done
after training the model.
"""
periods = 10
steps_per_period = steps / periods
my_feature = input_feature
my_feature_data = california_housing_dataframe[[my_feature]].astype('float32')
my_label = "median_house_value"
targets = california_housing_dataframe[my_label].astype('float32')
# Create input functions.
training_input_fn = lambda: my_input_fn(my_feature_data, targets, batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(my_feature_data, targets, num_epochs=1, shuffle=False)
# Create feature columns.
feature_columns = [tf.feature_column.numeric_column(my_feature)]
# Create a linear regressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=feature_columns,
optimizer=my_optimizer
)
# Set up to plot the state of our model's line each period.
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.title("Learned Line by Period")
plt.ylabel(my_label)
plt.xlabel(my_feature)
sample = california_housing_dataframe.sample(n=300)
plt.scatter(sample[my_feature], sample[my_label])
colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, periods)]
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
root_mean_squared_errors = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period,
)
# Take a break and compute predictions.
predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
predictions = np.array([item['predictions'][0] for item in predictions])
# Compute loss.
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(predictions, targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, root_mean_squared_error))
# Add the loss metrics from this period to our list.
root_mean_squared_errors.append(root_mean_squared_error)
# Finally, track the weights and biases over time.
# Apply some math to ensure that the data and line are plotted neatly.
y_extents = np.array([0, sample[my_label].max()])
weight = linear_regressor.get_variable_value('linear/linear_model/%s/weights' % input_feature)[0]
bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')
x_extents = (y_extents - bias) / weight
x_extents = np.maximum(np.minimum(x_extents,
sample[my_feature].max()),
sample[my_feature].min())
y_extents = weight * x_extents + bias
plt.plot(x_extents, y_extents, color=colors[period])
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.subplot(1, 2, 2)
plt.ylabel('RMSE')
plt.xlabel('Periods')
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(root_mean_squared_errors)
# Create a table with calibration data.
calibration_data = pd.DataFrame()
calibration_data["predictions"] = pd.Series(predictions)
calibration_data["targets"] = pd.Series(targets)
display.display(calibration_data.describe())
print("Final RMSE (on training data): %0.2f" % root_mean_squared_error)
return calibration_data
###Output
_____no_output_____
###Markdown
Task 1: Try a Synthetic FeatureBoth the `total_rooms` and `population` features count totals for a given city block.But what if one city block were more densely populated than another? We can explore how block density relates to median house value by creating a synthetic feature that's a ratio of `total_rooms` and `population`.In the cell below, create a feature called `rooms_per_person`, and use that as the `input_feature` to `train_model()`.What's the best performance you can get with this single feature by tweaking the learning rate? (The better the performance, the better your regression line should fit the data, and the lowerthe final RMSE should be.) **NOTE**: You may find it helpful to add a few code cells below so you can try out several different learning rates and compare the results. To add a new code cell, hover your cursor directly below the center of this cell, and click **CODE**.
###Code
#
# YOUR CODE HERE
#
california_housing_dataframe["rooms_per_person"] =
calibration_data = train_model(
learning_rate=0.00005,
steps=500,
batch_size=5,
input_feature="rooms_per_person"
)
###Output
_____no_output_____
###Markdown
SolutionClick below for a solution.
###Code
california_housing_dataframe["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] / california_housing_dataframe["population"])
calibration_data = train_model(
learning_rate=0.05,
steps=500,
batch_size=5,
input_feature="rooms_per_person")
###Output
_____no_output_____
###Markdown
Task 2: Identify OutliersWe can visualize the performance of our model by creating a scatter plot of predictions vs. target values. Ideally, these would lie on a perfectly correlated diagonal line.Use Pyplot's [`scatter()`](https://matplotlib.org/gallery/shapes_and_collections/scatter.html) to create a scatter plot of predictions vs. targets, using the rooms-per-person model you trained in Task 1.Do you see any oddities? Trace these back to the source data by looking at the distribution of values in `rooms_per_person`.
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below for the solution.
###Code
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.scatter(calibration_data["predictions"], calibration_data["targets"])
###Output
_____no_output_____
###Markdown
The calibration data shows most scatter points aligned to a line. The line is almost vertical, but we'll come back to that later. Right now let's focus on the ones that deviate from the line. We notice that they are relatively few in number.If we plot a histogram of `rooms_per_person`, we find that we have a few outliers in our input data:
###Code
plt.subplot(1, 2, 2)
_ = california_housing_dataframe["rooms_per_person"].hist()
###Output
_____no_output_____
###Markdown
Task 3: Clip OutliersSee if you can further improve the model fit by setting the outlier values of `rooms_per_person` to some reasonable minimum or maximum.For reference, here's a quick example of how to apply a function to a Pandas `Series`: clipped_feature = my_dataframe["my_feature_name"].apply(lambda x: max(x, 0))The above `clipped_feature` will have no values less than `0`.
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below for the solution. The histogram we created in Task 2 shows that the majority of values are less than `5`. Let's clip `rooms_per_person` to 5, and plot a histogram to double-check the results.
###Code
california_housing_dataframe["rooms_per_person"] = (
california_housing_dataframe["rooms_per_person"]).apply(lambda x: min(x, 5))
_ = california_housing_dataframe["rooms_per_person"].hist()
###Output
_____no_output_____
###Markdown
To verify that clipping worked, let's train again and print the calibration data once more:
###Code
calibration_data = train_model(
learning_rate=0.05,
steps=500,
batch_size=5,
input_feature="rooms_per_person")
_ = plt.scatter(calibration_data["predictions"], calibration_data["targets"])
###Output
_____no_output_____ |
ep02_linreg_analytic.ipynb | ###Markdown
###Code
name = "Daniel Silva Lopes da Costa" # write YOUR NAME
honorPledge = "I affirm that I have not given or received any unauthorized " \
"help on this assignment, and that this work is my own.\n"
print("\nName: ", name)
print("\nHonor pledge: ", honorPledge)
###Output
Name: Daniel Silva Lopes da Costa
Honor pledge: I affirm that I have not given or received any unauthorized help on this assignment, and that this work is my own.
###Markdown
MAC0460 / MAC5832 (2021) EP2: Linear regression - analytic solution Objectives:- to implement and test the analytic solution for the linear regression task (see, for instance, Slides of Lecture 03 and Lecture 03 of *Learning from Data*)- to understand the core idea (*optimization of a loss or cost function*) for parameter adjustment in machine learning Linear regressionGiven a dataset $\{(\mathbf{x}^{(1)}, y^{(1)}), \dots ,(\mathbf{x}^{(N)}, y^{(N)})\}$ with $\mathbf{x}^{(i)} \in \mathbb{R}^{d}$ and $y^{(i)} \in \mathbb{R}$, we would like to approximate the unknown function $f:\mathbb{R}^{d} \rightarrow \mathbb{R}$ (recall that $y^{(i)} =f(\mathbf{x}^{(i)})$) by means of a linear model $h$:$$h(\mathbf{x}^{(i)}; \mathbf{w}, b) = \mathbf{w}^\top \mathbf{x}^{(i)} + b$$Note that $h(\mathbf{x}^{(i)}; \mathbf{w}, b)$ is, in fact, an [affine transformation](https://en.wikipedia.org/wiki/Affine_transformation) of $\mathbf{x}^{(i)}$. As commonly done, we will use the term "linear" to refer to an affine transformation.The output of $h$ is a linear transformation of $\mathbf{x}^{(i)}$. We use the notation $h(\mathbf{x}^{(i)}; \mathbf{w}, b)$ to make clear that $h$ is a parametric model, i.e., the transformation $h$ is defined by the parameters $\mathbf{w}$ and $b$. We can view vector $\mathbf{w}$ as a *weight* vector that controls the effect of each *feature* in the prediction.By adding one component with value equal to 1 to the observations $\mathbf{x}$ (an artificial coordinate), we have:$$\tilde{\mathbf{x}} = (1, x_1, \ldots, x_d) \in \mathbb{R}^{1+d}$$and then we can simplify the notation:$$h(\mathbf{x}^{(i)}; \mathbf{w}) = \hat{y}^{(i)} = \mathbf{w}^\top \tilde{\mathbf{x}}^{(i)}$$We would like to determine the optimal parameters $\mathbf{w}$ such that prediction $\hat{y}^{(i)}$ is as closest as possible to $y^{(i)}$ according to some error metric. Adopting the *mean square error* as such metric we have the following cost function:\begin{equation}J(\mathbf{w}) = \frac{1}{N}\sum_{i=1}^{N}\big(\hat{y}^{(i)} - y^{(i)}\big)^{2}\end{equation}Thus, the task of determining a function $h$ that is closest to $f$ is reduced to the task of finding the values $\mathbf{w}$ that minimize $J(\mathbf{w})$.**Now we will explore this model, starting with a simple dataset.** Auxiliary functions
###Code
# some imports
import numpy as np
import time
import matplotlib.pyplot as plt
%matplotlib inline
# An auxiliary function
def get_housing_prices_data(N, verbose=True):
"""
Generates artificial linear data,
where x = square meter, y = house price
:param N: data set size
:type N: int
:param verbose: param to control print
:type verbose: bool
:return: design matrix, regression targets
:rtype: np.array, np.array
"""
cond = False
while not cond:
x = np.linspace(90, 1200, N)
gamma = np.random.normal(30, 10, x.size)
y = 50 * x + gamma * 400
x = x.astype("float32")
x = x.reshape((x.shape[0], 1))
y = y.astype("float32")
y = y.reshape((y.shape[0], 1))
cond = min(y) > 0
xmean, xsdt, xmax, xmin = np.mean(x), np.std(x), np.max(x), np.min(x)
ymean, ysdt, ymax, ymin = np.mean(y), np.std(y), np.max(y), np.min(y)
if verbose:
print("\nX shape = {}".format(x.shape))
print("y shape = {}\n".format(y.shape))
print("X: mean {}, sdt {:.2f}, max {:.2f}, min {:.2f}".format(xmean,
xsdt,
xmax,
xmin))
print("y: mean {:.2f}, sdt {:.2f}, max {:.2f}, min {:.2f}".format(ymean,
ysdt,
ymax,
ymin))
return x, y
# Another auxiliary function
def plot_points_regression(x,
y,
title,
xlabel,
ylabel,
prediction=None,
legend=False,
r_squared=None,
position=(90, 100)):
"""
Plots the data points and the prediction,
if there is one.
:param x: design matrix
:type x: np.array
:param y: regression targets
:type y: np.array
:param title: plot's title
:type title: str
:param xlabel: x axis label
:type xlabel: str
:param ylabel: y axis label
:type ylabel: str
:param prediction: model's prediction
:type prediction: np.array
:param legend: param to control print legends
:type legend: bool
:param r_squared: r^2 value
:type r_squared: float
:param position: text position
:type position: tuple
"""
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
line1, = ax.plot(x, y, 'bo', label='Real data')
if prediction is not None:
line2, = ax.plot(x, prediction, 'r', label='Predicted data')
if legend:
plt.legend(handles=[line1, line2], loc=2)
ax.set_title(title,
fontsize=20,
fontweight='bold')
if r_squared is not None:
bbox_props = dict(boxstyle="square,pad=0.3",
fc="white", ec="black", lw=0.2)
t = ax.text(position[0], position[1], "$R^2 ={:.4f}$".format(r_squared),
size=15, bbox=bbox_props)
ax.set_xlabel(xlabel, fontsize=20)
ax.set_ylabel(ylabel, fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
The dataset The first dataset we will use is a toy dataset. We will generate $N=100$ observations with only one *feature* and a real value associated with each of them. We can view these observations as being pairs *(area of a property in square meters, price of the property)*. Our task is to construct a model that is able to predict the price of a property, given its area.
###Code
X, y = get_housing_prices_data(N=100)
###Output
X shape = (100, 1)
y shape = (100, 1)
X: mean 645.0, sdt 323.65, max 1200.00, min 90.00
y: mean 44003.19, sdt 17195.94, max 79998.91, min 5441.34
###Markdown
Plotting the data
###Code
plot_points_regression(X,
y,
title='Real estate prices prediction',
xlabel="m\u00b2",
ylabel='$')
###Output
_____no_output_____
###Markdown
The solutionGiven $f:\mathbb{R}^{N\times M} \rightarrow \mathbb{R}$ and $\mathbf{A} \in \mathbb{R}^{N\times M}$, we define the gradient of $f$ with respect to $\mathbf{A}$ as:\begin{equation*}\nabla_{\mathbf{A}}f = \frac{\partial f}{\partial \mathbf{A}} = \begin{bmatrix}\frac{\partial f}{\partial \mathbf{A}_{1,1}} & \dots & \frac{\partial f}{\partial \mathbf{A}_{1,m}} \\\vdots & \ddots & \vdots \\\frac{\partial f}{\partial \mathbf{A}_{n,1}} & \dots & \frac{\partial f}{\partial \mathbf{A}_{n,m}}\end{bmatrix}\end{equation*}Let $\mathbf{X} \in \mathbb{R}^{N\times d}$ be a matrix (sometimes also called the *design matrix*) whose rows are the observations of the dataset and let $\mathbf{y} \in \mathbb{R}^{N}$ be the vector consisting of all values $y^{(i)}$ (i.e., $\mathbf{X}^{(i,:)} = \mathbf{x}^{(i)}$ and $\mathbf{y}^{(i)} = y^{(i)}$). It can be verified that: \begin{equation}J(\mathbf{w}) = \frac{1}{N}(\mathbf{X}\mathbf{w} - \mathbf{y})^{T}(\mathbf{X}\mathbf{w} - \mathbf{y})\end{equation}Using basic matrix derivative concepts we can compute the gradient of $J(\mathbf{w})$ with respect to $\mathbf{w}$:\begin{equation}\nabla_{\mathbf{w}}J(\mathbf{w}) = \frac{2}{N} (\mathbf{X}^{T}\mathbf{X}\mathbf{w} -\mathbf{X}^{T}\mathbf{y}) \end{equation}Thus, when $\nabla_{\mathbf{w}}J(\mathbf{w}) = 0$ we have \begin{equation}\mathbf{X}^{T}\mathbf{X}\mathbf{w} = \mathbf{X}^{T}\mathbf{y}\end{equation}Hence,\begin{equation}\mathbf{w} = (\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{y}\end{equation}Note that this solution has a high computational cost. As the number of variables (*features*) increases, the cost for matrix inversion becomes prohibitive. See [this text](https://sgfin.github.io/files/notes/CS229_Lecture_Notes.pdf) for more details. Exercise 1Using only **NumPy** (a quick introduction to this library can be found [here](http://cs231n.github.io/python-numpy-tutorial/)), complete the two functions below. Recall that $\mathbf{X} \in \mathbb{R}^{N\times d}$; thus you will need to add a component of value 1 to each of the observations in $\mathbf{X}$ before performing the computation described above.NOTE: Although the dataset above has data of dimension $d=1$, your code must be generic (it should work for $d\geq1$) 1.1. Weight computation function
###Code
def normal_equation_weights(X, y):
"""
Calculates the weights of a linear function using the normal equation method.
You should add into X a new column with 1s.
:param X: design matrix
:type X: np.ndarray(shape=(N, d))
:param y: regression targets
:type y: np.ndarray(shape=(N, 1))
:return: weight vector
:rtype: np.ndarray(shape=(d+1, 1))
"""
    # START OF YOUR CODE:
    X = np.hstack((np.ones((X.shape[0], 1)), X))  # prepend the column of 1s
    XtX_inv = np.linalg.inv(np.dot(X.T, X))       # (X^T X)^{-1}
    w = np.dot(np.dot(XtX_inv, X.T), y)           # w = (X^T X)^{-1} X^T y
    return w
    # END OF YOUR CODE
# test of function normal_equation_weights()
w = 0 # this is not necessary
w = normal_equation_weights(X, y)
print("Estimated w =\n", w)
###Output
Estimated w =
[[10804.41058626]
[ 51.47097573]]
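###Markdown
As noted above, explicitly inverting $\mathbf{X}^{T}\mathbf{X}$ is the costly step, and it can also be numerically fragile. A minimal sketch (not part of the required solution) of the same fit using `np.linalg.lstsq`, which solves the least-squares problem directly, assuming the `X` and `y` generated above:
###Code
# Sketch: solve the least-squares problem without forming an explicit inverse.
# Assumes the X and y from the cells above; w_lstsq should match the estimate printed above.
X_tilde = np.hstack((np.ones((X.shape[0], 1)), X))  # add the column of 1s
w_lstsq, residuals, rank, sv = np.linalg.lstsq(X_tilde, y, rcond=None)
print("lstsq estimate:\n", w_lstsq)
###Output
_____no_output_____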
###Markdown
1.2. Prediction function
###Code
def normal_equation_prediction(X, w):
"""
Calculates the prediction over a set of observations X using the linear function
characterized by the weight vector w.
You should add into X a new column with 1s.
:param X: design matrix
:type X: np.ndarray(shape=(N, d))
:param w: weight vector
:type w: np.ndarray(shape=(d+1, 1))
    :return: regression prediction
    :rtype: np.ndarray(shape=(N, 1))
"""
    # START OF YOUR CODE:
    X = np.hstack((np.ones((X.shape[0], 1)), X))  # prepend the column of 1s
    Y = np.dot(X, w)                              # y_hat = X_tilde w
    return Y
    # END OF YOUR CODE
###Output
_____no_output_____
###Markdown
1.3. Coefficient of determination We can use the [$R^2$](https://pt.wikipedia.org/wiki/R%C2%B2) metric (Coefficient of determination) to evaluate how well the linear model fits the data. **Which $R^2$ value would you expect to observe?**
###Code
from sklearn.metrics import r2_score
# test of function normal_equation_prediction()
prediction = normal_equation_prediction(X, w)
# compute the R2 score using the r2_score function from sklearn
# Replace 0 with an appropriate call of the function
# START OF YOUR CODE:
r_2 = r2_score(y, prediction)
# END OF YOUR CODE
plot_points_regression(X,
y,
title='Real estate prices prediction',
xlabel="m\u00b2",
ylabel='$',
prediction=prediction,
legend=True,
r_squared=r_2)
###Output
_____no_output_____
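###Markdown
As a quick sanity check (a sketch, assuming the `y` and `prediction` arrays from the cell above), $R^2$ can also be computed directly from its definition, $R^2 = 1 - SS_{res} / SS_{tot}$, and should match the value returned by `r2_score`:
###Code
# Sketch: R^2 from its definition; should agree with sklearn's r2_score used above.
ss_res = np.sum((y - prediction) ** 2)       # residual sum of squares
ss_tot = np.sum((y - np.mean(y)) ** 2)       # total sum of squares
print("R^2 =", 1 - ss_res / ss_tot)
###Output
_____no_output_____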
###Markdown
Additional testsLet us compute a prediction for $x=650$
###Code
# Let us use the prediction function
x = np.asarray([650]).reshape(1,1)
prediction = normal_equation_prediction(x, w)
print("Area = %.2f Predicted price = %.4f" %(x[0], prediction))
###Output
Area = 650.00 Predicted price = 44260.5448
###Markdown
1.4. Processing time Experiment with different numbers of samples $N$ and observe how the processing time varies. Be careful not to use a value that is too large; it may make Jupyter freeze ...
###Code
# Add other values for N
# START OF YOUR CODE:
N = [1600]
# END OF YOUR CODE
for i in N:
X, y = get_housing_prices_data(N=i)
init = time.time()
w = normal_equation_weights(X, y)
prediction = normal_equation_prediction(X,w)
init = time.time() - init
print("\nExecution time = {:.8f}(s)\n".format(init))
# Execution times from some runs:
# N      time (s)
# 100 0.00376582
# 200 0.00273514
# 400 0.00603080
# 800 0.00363159
# 1600 0.00614405
###Output
X shape = (1600, 1)
y shape = (1600, 1)
X: mean 645.0, sdt 320.63, max 1200.00, min 90.00
y: mean 44313.57, sdt 16460.92, max 81999.68, min 7103.05
Execution time = 0.00114775(s)
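###Markdown
Timings of a single run this short are dominated by measurement noise. A sketch of a slightly more robust measurement (the list of sizes below is just an example), repeating each fit a few times and keeping the best time:
###Code
# Sketch: repeat each measurement and keep the minimum, which is less sensitive to noise.
import timeit
for n in [100, 1000, 10000, 100000]:
    X_n, y_n = get_housing_prices_data(N=n, verbose=False)
    best = min(timeit.repeat(lambda: normal_equation_weights(X_n, y_n), number=1, repeat=5))
    print("N = {:7d}  best fit time = {:.6f} s".format(n, best))
###Output
_____no_output_____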
###Markdown
Exercise 2Let us test the code with $𝑑>1$. We will use the data we have collected in our first class. The [file](https://edisciplinas.usp.br/pluginfile.php/5982803/course/section/6115454/QT1data.csv) can be found on e-disciplinas. Let us try to predict the weight based on one or more features.
###Code
import pandas as pd
# load the dataset
df = pd.read_csv('QT1data.csv')
df.head()
df.describe()
# Our target variable is the weight
y = df.pop('Weight').values
y
###Output
_____no_output_____
###Markdown
2.1. One feature ($d=1$)We will use 'Height' as the input feature and predict the weight
###Code
feature_cols = ['Height']
X = df.loc[:, feature_cols]
X.shape
###Output
_____no_output_____
###Markdown
Write the code for computing the following- compute the regression weights using $\mathbf{X}$ and $\mathbf{y}$- compute the prediction- compute the $R^2$ value- plot the regression graph (use appropriate values for the parameters of function plot_points_regression())
###Code
# START OF YOUR CODE:
w = normal_equation_weights(X, y)
prediction = normal_equation_prediction(X, w)
r_2 = r2_score(y, prediction)
print("Erro quadrático médio:", r_2)
plot_points_regression(X,
y,
title='Predição de peso pela altura',
xlabel="altura(cm)",
ylabel='peso(kg)',
prediction=prediction,
legend=True,
r_squared=r_2)
# END OF YOUR CODE
###Output
R^2 score: 0.421924439081967
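###Markdown
A quick usage example (a sketch; 175 cm is just a hypothetical input and the exact output depends on the class dataset): once `w` has been fitted on 'Height' alone, the weight predicted for a 175 cm tall person is obtained by calling the prediction function on a 1x1 design matrix.
###Code
# Sketch: predict the weight for a single, hypothetical height of 175 cm.
x_new = np.asarray([[175.0]])
print("Predicted weight for 175 cm:", normal_equation_prediction(x_new, w))
###Output
_____no_output_____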
###Markdown
2.2 - Two input features ($d=2$) Now repeat the exercise using the features 'Height' and 'Shoe number' as input:- compute the regression weights using $\mathbf{X}$ and $\mathbf{y}$- compute the prediction- compute and print the $R^2$ value Note that our plotting function cannot be used. There is no need to do plotting here.
###Code
# START OF YOUR CODE:
feature_cols = ['Height', 'Shoe number']
X = df.loc[:, feature_cols]
w = normal_equation_weights(X, y)
prediction = normal_equation_prediction(X, w)
r_2 = r2_score(y, prediction)
print("Erro quadrático médio:", r_2)
# END OF YOUR CODE
###Output
R^2 score: 0.45381183096658595
###Markdown
2.3 - Three input features ($d=3$)Now try with three features. There is no need to do plotting here.- compute the regression weights using $\mathbf{X}$ and $\mathbf{y}$- compute the prediction- compute and print the $R^2$ value
###Code
# START OF YOUR CODE:
feature_cols = ['Height', 'Shoe number', 'Age' ]
X = df.loc[:, feature_cols]
w = normal_equation_weights(X, y)
prediction = normal_equation_prediction(X, w)
r_2 = r2_score(y, prediction)
print("Erro quadrático médio:", r_2)
# END OF YOUR CODE
###Output
R^2 score: 0.4776499498669615
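###Markdown
As a cross-check (a sketch, not required by the exercise), the same three-feature fit can be reproduced with scikit-learn's `LinearRegression`, which should report the same $R^2$ as the normal-equation solution above, assuming the `X` and `y` from the previous cell:
###Code
# Sketch: the closed-form solution should agree with sklearn's LinearRegression.
from sklearn.linear_model import LinearRegression
reg = LinearRegression().fit(X, y)
print("sklearn R^2:", reg.score(X, y))
print("sklearn intercept:", reg.intercept_)
print("sklearn coefficients:", reg.coef_)
###Output
_____no_output_____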
###Markdown
MAC0460 / MAC5832 (2021) EP2: Linear regression - analytic solution Objectives:- to implement and test the analytic solution for the linear regression task (see, for instance, Slides of Lecture 03 and Lecture 03 of *Learning from Data*)- to understand the core idea (*optimization of a loss or cost function*) for parameter adjustment in machine learning Linear regression Given a dataset $\{(\mathbf{x}^{(1)}, y^{(1)}), \dots ,(\mathbf{x}^{(N)}, y^{(N)})\}$ with $\mathbf{x}^{(i)} \in \mathbb{R}^{d}$ and $y^{(i)} \in \mathbb{R}$, we would like to approximate the unknown function $f:\mathbb{R}^{d} \rightarrow \mathbb{R}$ (recall that $y^{(i)} =f(\mathbf{x}^{(i)})$) by means of a linear model $h$:$$h(\mathbf{x}^{(i)}; \mathbf{w}, b) = \mathbf{w}^\top \mathbf{x}^{(i)} + b$$Note that $h(\mathbf{x}^{(i)}; \mathbf{w}, b)$ is, in fact, an [affine transformation](https://en.wikipedia.org/wiki/Affine_transformation) of $\mathbf{x}^{(i)}$. As commonly done, we will use the term "linear" to refer to an affine transformation. The output of $h$ is a linear transformation of $\mathbf{x}^{(i)}$. We use the notation $h(\mathbf{x}^{(i)}; \mathbf{w}, b)$ to make clear that $h$ is a parametric model, i.e., the transformation $h$ is defined by the parameters $\mathbf{w}$ and $b$. We can view vector $\mathbf{w}$ as a *weight* vector that controls the effect of each *feature* in the prediction. By adding one component with value equal to 1 to the observations $\mathbf{x}$ (an artificial coordinate), we have:$$\tilde{\mathbf{x}} = (1, x_1, \ldots, x_d) \in \mathbb{R}^{1+d}$$and then we can simplify the notation:$$h(\mathbf{x}^{(i)}; \mathbf{w}) = \hat{y}^{(i)} = \mathbf{w}^\top \tilde{\mathbf{x}}^{(i)}$$We would like to determine the optimal parameters $\mathbf{w}$ such that the prediction $\hat{y}^{(i)}$ is as close as possible to $y^{(i)}$ according to some error metric. Adopting the *mean square error* as this metric, we have the following cost function:\begin{equation}J(\mathbf{w}) = \frac{1}{N}\sum_{i=1}^{N}\big(\hat{y}^{(i)} - y^{(i)}\big)^{2}\end{equation}Thus, the task of determining a function $h$ that is closest to $f$ is reduced to the task of finding the values $\mathbf{w}$ that minimize $J(\mathbf{w})$. **Now we will explore this model, starting with a simple dataset.** Auxiliary functions
###Code
# some imports
import numpy as np
import time
import matplotlib.pyplot as plt
%matplotlib inline
# An auxiliary function
def get_housing_prices_data(N, verbose=True):
"""
Generates artificial linear data,
where x = square meter, y = house price
:param N: data set size
:type N: int
:param verbose: param to control print
:type verbose: bool
:return: design matrix, regression targets
:rtype: np.array, np.array
"""
cond = False
while not cond:
x = np.linspace(90, 1200, N)
gamma = np.random.normal(30, 10, x.size)
y = 50 * x + gamma * 400
x = x.astype("float32")
x = x.reshape((x.shape[0], 1))
y = y.astype("float32")
y = y.reshape((y.shape[0], 1))
cond = min(y) > 0
xmean, xsdt, xmax, xmin = np.mean(x), np.std(x), np.max(x), np.min(x)
ymean, ysdt, ymax, ymin = np.mean(y), np.std(y), np.max(y), np.min(y)
if verbose:
print("\nX shape = {}".format(x.shape))
print("y shape = {}\n".format(y.shape))
print("X: mean {}, sdt {:.2f}, max {:.2f}, min {:.2f}".format(xmean,
xsdt,
xmax,
xmin))
print("y: mean {:.2f}, sdt {:.2f}, max {:.2f}, min {:.2f}".format(ymean,
ysdt,
ymax,
ymin))
return x, y
# Another auxiliary function
def plot_points_regression(x,
y,
title,
xlabel,
ylabel,
prediction=None,
legend=False,
r_squared=None,
position=(90, 100)):
"""
Plots the data points and the prediction,
if there is one.
:param x: design matrix
:type x: np.array
:param y: regression targets
:type y: np.array
:param title: plot's title
:type title: str
:param xlabel: x axis label
:type xlabel: str
:param ylabel: y axis label
:type ylabel: str
:param prediction: model's prediction
:type prediction: np.array
:param legend: param to control print legends
:type legend: bool
:param r_squared: r^2 value
:type r_squared: float
:param position: text position
:type position: tuple
"""
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
line1, = ax.plot(x, y, 'bo', label='Real data')
if prediction is not None:
line2, = ax.plot(x, prediction, 'r', label='Predicted data')
if legend:
plt.legend(handles=[line1, line2], loc=2)
ax.set_title(title,
fontsize=20,
fontweight='bold')
if r_squared is not None:
bbox_props = dict(boxstyle="square,pad=0.3",
fc="white", ec="black", lw=0.2)
t = ax.text(position[0], position[1], "$R^2 ={:.4f}$".format(r_squared),
size=15, bbox=bbox_props)
ax.set_xlabel(xlabel, fontsize=20)
ax.set_ylabel(ylabel, fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
The dataset The first dataset we will use is a toy dataset. We will generate $N=100$ observations with only one *feature* and a real value associated with each of them. We can view these observations as being pairs *(area of a property in square meters, price of the property)*. Our task is to construct a model that is able to predict the price of a property, given its area.
###Code
X, y = get_housing_prices_data(N=100)
###Output
X shape = (100, 1)
y shape = (100, 1)
X: mean 645.0, sdt 323.65, max 1200.00, min 90.00
y: mean 44221.50, sdt 16674.25, max 79182.43, min 12196.20
###Markdown
Plotting the data
###Code
plot_points_regression(X,
y,
title='Real estate prices prediction',
xlabel="m\u00b2",
ylabel='$')
###Output
_____no_output_____
###Markdown
The solutionGiven $f:\mathbb{R}^{N\times M} \rightarrow \mathbb{R}$ and $\mathbf{A} \in \mathbb{R}^{N\times M}$, we define the gradient of $f$ with respect to $\mathbf{A}$ as:\begin{equation*}\nabla_{\mathbf{A}}f = \frac{\partial f}{\partial \mathbf{A}} = \begin{bmatrix}\frac{\partial f}{\partial \mathbf{A}_{1,1}} & \dots & \frac{\partial f}{\partial \mathbf{A}_{1,m}} \\\vdots & \ddots & \vdots \\\frac{\partial f}{\partial \mathbf{A}_{n,1}} & \dots & \frac{\partial f}{\partial \mathbf{A}_{n,m}}\end{bmatrix}\end{equation*}Let $\mathbf{X} \in \mathbb{R}^{N\times d}$ be a matrix (sometimes also called the *design matrix*) whose rows are the observations of the dataset and let $\mathbf{y} \in \mathbb{R}^{N}$ be the vector consisting of all values $y^{(i)}$ (i.e., $\mathbf{X}^{(i,:)} = \mathbf{x}^{(i)}$ and $\mathbf{y}^{(i)} = y^{(i)}$). It can be verified that: \begin{equation}J(\mathbf{w}) = \frac{1}{N}(\mathbf{X}\mathbf{w} - \mathbf{y})^{T}(\mathbf{X}\mathbf{w} - \mathbf{y})\end{equation}Using basic matrix derivative concepts we can compute the gradient of $J(\mathbf{w})$ with respect to $\mathbf{w}$:\begin{equation}\nabla_{\mathbf{w}}J(\mathbf{w}) = \frac{2}{N} (\mathbf{X}^{T}\mathbf{X}\mathbf{w} -\mathbf{X}^{T}\mathbf{y}) \end{equation}Thus, when $\nabla_{\mathbf{w}}J(\mathbf{w}) = 0$ we have \begin{equation}\mathbf{X}^{T}\mathbf{X}\mathbf{w} = \mathbf{X}^{T}\mathbf{y}\end{equation}Hence,\begin{equation}\mathbf{w} = (\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{y}\end{equation}Note that this solution has a high computational cost. As the number of variables (*features*) increases, the cost for matrix inversion becomes prohibitive. See [this text](https://sgfin.github.io/files/notes/CS229_Lecture_Notes.pdf) for more details. Exercise 1Using only **NumPy** (a quick introduction to this library can be found [here](http://cs231n.github.io/python-numpy-tutorial/)), complete the two functions below. Recall that $\mathbf{X} \in \mathbb{R}^{N\times d}$; thus you will need to add a component of value 1 to each of the observations in $\mathbf{X}$ before performing the computation described above.NOTE: Although the dataset above has data of dimension $d=1$, your code must be generic (it should work for $d\geq1$) 1.1. Weight computation function
###Code
def normal_equation_weights(X, y):
"""
Calculates the weights of a linear function using the normal equation method.
You should add into X a new column with 1s.
:param X: design matrix
:type X: np.ndarray(shape=(N, d))
:param y: regression targets
:type y: np.ndarray(shape=(N, 1))
:return: weight vector
:rtype: np.ndarray(shape=(d+1, 1))
"""
# START OF YOUR CODE:
N = X.shape[0]
X_tilde = np.column_stack((np.ones((N, 1)), X))
X_cross = np.dot(np.linalg.inv(np.dot(X_tilde.T, X_tilde)), X_tilde.T)
w = np.dot(X_cross, y)
return w
# END OF YOUR CODE
# test of function normal_equation_weights()
w = 0 # this is not necessary
w = normal_equation_weights(X, y)
print("Estimated w =\n", w)
###Output
Estimated w =
[[12043.47818971]
[ 49.88841379]]
###Markdown
1.2. Prediction function
###Code
def normal_equation_prediction(X, w):
"""
Calculates the prediction over a set of observations X using the linear function
characterized by the weight vector w.
You should add into X a new column with 1s.
:param X: design matrix
:type X: np.ndarray(shape=(N, d))
:param w: weight vector
:type w: np.ndarray(shape=(d+1, 1))
    :return: regression prediction
    :rtype: np.ndarray(shape=(N, 1))
"""
# START OF YOUR CODE:
N = X.shape[0]
X_tilde = np.column_stack((np.ones((N, 1)), X))
y = np.dot(X_tilde, w)
return y
# END OF YOUR CODE
###Output
_____no_output_____
###Markdown
1.3. Coefficient of determination We can use the [$R^2$](https://pt.wikipedia.org/wiki/R%C2%B2) metric (Coefficient of determination) to evaluate how well the linear model fits the data. **Which $R^2$ value would you expect to observe?**
###Code
from sklearn.metrics import r2_score
# test of function normal_equation_prediction()
prediction = normal_equation_prediction(X, w)
# compute the R2 score using the r2_score function from sklearn
# Replace 0 with an appropriate call of the function
# START OF YOUR CODE:
r_2 = r2_score(y, prediction)
# END OF YOUR CODE
plot_points_regression(X,
y,
title='Real estate prices prediction',
xlabel="m\u00b2",
ylabel='$',
prediction=prediction,
legend=True,
r_squared=r_2)
###Output
_____no_output_____
###Markdown
Additional testsLet us compute a prediction for $x=650$
###Code
# Let us use the prediction function
x = np.asarray([650]).reshape(1,1)
prediction = normal_equation_prediction(x, w)
print("Area = %.2f Predicted price = %.4f" %(x[0], prediction))
###Output
Area = 650.00 Predicted price = 44470.9472
###Markdown
1.4. Processing time Experiment with different numbers of samples $N$ and observe how the processing time varies. Be careful not to use a value that is too large; it may make Jupyter freeze ...
###Code
# Add other values for N
# START OF YOUR CODE:
N = [100, 200, 500, 1000, 5000, 10000, 100000, 10000000]
# END OF YOUR CODE
for i in N:
X, y = get_housing_prices_data(N=i)
init = time.time()
w = normal_equation_weights(X, y)
prediction = normal_equation_prediction(X,w)
init = time.time() - init
print("\nExecution time = {:.8f}(s)\n".format(init))
###Output
X shape = (100, 1)
y shape = (100, 1)
X: mean 645.0, sdt 323.65, max 1200.00, min 90.00
y: mean 44131.31, sdt 16462.71, max 77546.66, min 10220.15
Execution time = 0.00058222(s)
X shape = (200, 1)
y shape = (200, 1)
X: mean 645.0, sdt 322.04, max 1200.00, min 90.00
y: mean 44643.07, sdt 16741.32, max 77416.36, min 11328.75
Execution time = 0.00007701(s)
X shape = (500, 1)
y shape = (500, 1)
X: mean 645.0, sdt 321.07, max 1200.00, min 90.00
y: mean 44582.73, sdt 16791.18, max 83074.05, min 9172.27
Execution time = 0.00033569(s)
X shape = (1000, 1)
y shape = (1000, 1)
X: mean 645.0, sdt 320.75, max 1200.00, min 90.00
y: mean 44441.20, sdt 16539.07, max 79329.80, min 10420.06
Execution time = 0.00034523(s)
X shape = (5000, 1)
y shape = (5000, 1)
X: mean 645.0, sdt 320.49, max 1200.00, min 90.00
y: mean 44266.45, sdt 16549.26, max 80368.79, min 5516.63
Execution time = 0.00048804(s)
X shape = (10000, 1)
y shape = (10000, 1)
X: mean 645.0000610351562, sdt 320.46, max 1200.00, min 90.00
y: mean 44271.82, sdt 16555.38, max 85085.10, min 3211.80
Execution time = 0.00039077(s)
X shape = (100000, 1)
y shape = (100000, 1)
X: mean 645.0000610351562, sdt 320.43, max 1200.00, min 90.00
y: mean 44233.35, sdt 16525.27, max 85116.39, min 2534.62
Execution time = 0.00301242(s)
X shape = (10000000, 1)
y shape = (10000000, 1)
X: mean 644.9998779296875, sdt 320.43, max 1200.00, min 90.00
y: mean 44250.33, sdt 16514.00, max 89948.86, min 199.12
Execution time = 0.29421997(s)
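###Markdown
A sketch of how one might visualize the measurements above (the values below are the ones printed in the previous output and are machine dependent). For $d=1$ the cost of building and solving the normal equations grows roughly linearly with $N$, which only becomes visible for the largest sizes:
###Code
# Sketch: log-log plot of the execution times printed above (machine-dependent values).
Ns = [100, 200, 500, 1000, 5000, 10000, 100000, 10000000]
times = [0.00058222, 0.00007701, 0.00033569, 0.00034523,
         0.00048804, 0.00039077, 0.00301242, 0.29421997]
plt.figure(figsize=(8, 5))
plt.loglog(Ns, times, "o-")
plt.xlabel("N")
plt.ylabel("execution time (s)")
plt.title("Normal equation fit time vs. dataset size")
plt.show()
###Output
_____no_output_____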
###Markdown
Exercise 2Let us test the code with $𝑑>1$. We will use the data we have collected in our first class. The [file](https://edisciplinas.usp.br/pluginfile.php/5982803/course/section/6115454/QT1data.csv) can be found on e-disciplinas. Let us try to predict the weight based on one or more features.
###Code
import pandas as pd
# load the dataset
df = pd.read_csv('QT1data.csv')
df.head()
df.describe()
# Our target variable is the weight
y = df.pop('Weight').values
y
###Output
_____no_output_____
###Markdown
2.1. One feature ($d=1$)We will use 'Height' as the input feature and predict the weight
###Code
feature_cols = ['Height']
X = df.loc[:, feature_cols]
X.shape
###Output
_____no_output_____
###Markdown
Write the code for computing the following- compute the regression weights using $\mathbf{X}$ and $\mathbf{y}$- compute the prediction- compute the $R^2$ value- plot the regression graph (use appropriate values for the parameters of function plot_points_regression())
###Code
# START OF YOUR CODE:
w = normal_equation_weights(X, y)
prediction = normal_equation_prediction(X, w)
r_2 = r2_score(y, prediction)
plot_points_regression(X,
y,
title='Weight based on height prediction',
xlabel="Height",
ylabel='Weight',
prediction=prediction,
legend=True,
r_squared=r_2)
# END OF YOUR CODE
###Output
_____no_output_____
###Markdown
2.2 - Two input features ($d=2$) Now repeat the exercise using the features 'Height' and 'Shoe number' as input:- compute the regression weights using $\mathbf{X}$ and $\mathbf{y}$- compute the prediction- compute and print the $R^2$ value Note that our plotting function cannot be used. There is no need to do plotting here.
###Code
# START OF YOUR CODE:
feature_cols = ['Height', 'Shoe number']
X = df.loc[:, feature_cols]
w = normal_equation_weights(X, y)
prediction = normal_equation_prediction(X, w)
r_2 = r2_score(y, prediction)
print(r_2)
# END OF YOUR CODE
###Output
0.45381183096658584
###Markdown
2.3 - Three input features ($d=3$)Now try with three features. There is no need to do plotting here.- compute the regression weights using $\mathbf{X}$ and $\mathbf{y}$- compute the prediction- compute and print the $R^2$ value
###Code
# START OF YOUR CODE:
feature_cols = ['Height', 'Shoe number', 'Age']
X = df.loc[:, feature_cols]
w = normal_equation_weights(X, y)
prediction = normal_equation_prediction(X, w)
r_2 = r2_score(y, prediction)
print(r_2)
# END OF YOUR CODE
###Output
0.4776499498669615
|
docs/tutorials/line.ipynb | ###Markdown
(line)= Fitting a model to data If you're reading this right now then you're probably interested in using emcee to fit a model to some noisy data. On this page, I'll demonstrate how you might do this in the simplest non-trivial model that I could think of: fitting a line to data when you don't believe the error bars on your data. The interested reader should check out [Hogg, Bovy & Lang (2010)](https://arxiv.org/abs/1008.4686) for a much more complete discussion of how to fit a line to data in The Real World™ and why MCMC might come in handy.
###Code
%config InlineBackend.figure_format = "retina"
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
rcParams["font.size"] = 20
###Output
_____no_output_____
###Markdown
The generative probabilistic model When you approach a new problem, the first step is generally to write down the *likelihood function* (the probability of a dataset given the model parameters). This is equivalent to describing the generative procedure for the data. In this case, we're going to consider a linear model where the quoted uncertainties are underestimated by a constant fractional amount. You can generate a synthetic dataset from this model:
###Code
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(123)
# Choose the "true" parameters.
m_true = -0.9594
b_true = 4.294
f_true = 0.534
# Generate some synthetic data from the model.
N = 50
x = np.sort(10 * np.random.rand(N))
yerr = 0.1 + 0.5 * np.random.rand(N)
y = m_true * x + b_true
y += np.abs(f_true * y) * np.random.randn(N)
y += yerr * np.random.randn(N)
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
x0 = np.linspace(0, 10, 500)
plt.plot(x0, m_true * x0 + b_true, "k", alpha=0.3, lw=3)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
###Output
_____no_output_____
###Markdown
The true model is shown as the thick grey line and the effect of the underestimated uncertainties is obvious when you look at this figure. The standard way to fit a line to these data (assuming independent Gaussian error bars) is linear least squares. Linear least squares is appealing because solving for the parameters—and their associated uncertainties—is simply a linear algebraic operation. Following the notation in [Hogg, Bovy & Lang (2010)](https://arxiv.org/abs/1008.4686), the linear least squares solution to these data is
###Code
A = np.vander(x, 2)
C = np.diag(yerr * yerr)
ATA = np.dot(A.T, A / (yerr ** 2)[:, None])
cov = np.linalg.inv(ATA)
w = np.linalg.solve(ATA, np.dot(A.T, y / yerr ** 2))
print("Least-squares estimates:")
print("m = {0:.3f} ± {1:.3f}".format(w[0], np.sqrt(cov[0, 0])))
print("b = {0:.3f} ± {1:.3f}".format(w[1], np.sqrt(cov[1, 1])))
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x0, m_true * x0 + b_true, "k", alpha=0.3, lw=3, label="truth")
plt.plot(x0, np.dot(np.vander(x0, 2), w), "--k", label="LS")
plt.legend(fontsize=14)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
###Output
Least-squares estimates:
m = -1.104 ± 0.016
b = 5.441 ± 0.091
###Markdown
This figure shows the least-squares estimate of the line parameters as a dashed line. This isn't an unreasonable result but the uncertainties on the slope and intercept seem a little small (because of the small error bars on most of the data points). Maximum likelihood estimation The least squares solution found in the previous section is the maximum likelihood result for a model where the error bars are assumed correct, Gaussian and independent. We know, of course, that this isn't the right model. Unfortunately, there isn't a generalization of least squares that supports a model like the one that we know to be true. Instead, we need to write down the likelihood function and numerically optimize it. In mathematical notation, the correct likelihood function is:$$ \ln\,p(y\,|\,x,\sigma,m,b,f) = -\frac{1}{2} \sum_n \left[ \frac{(y_n-m\,x_n-b)^2}{s_n^2} + \ln \left ( 2\pi\,s_n^2 \right ) \right]$$where$$ s_n^2 = \sigma_n^2+f^2\,(m\,x_n+b)^2 \quad .$$This likelihood function is simply a Gaussian where the variance is underestimated by some fractional amount: $f$. In Python, you would code this up as:
###Code
def log_likelihood(theta, x, y, yerr):
m, b, log_f = theta
model = m * x + b
sigma2 = yerr ** 2 + model ** 2 * np.exp(2 * log_f)
return -0.5 * np.sum((y - model) ** 2 / sigma2 + np.log(sigma2))
###Output
_____no_output_____
###Markdown
In this code snippet, you'll notice that we're using the logarithm of $f$ instead of $f$ itself for reasons that will become clear in the next section. For now, it should at least be clear that this isn't a bad idea because it will force $f$ to be always positive. A good way of finding this numerical optimum of this likelihood function is to use the [scipy.optimize](https://docs.scipy.org/doc/scipy/reference/optimize.html) module:
###Code
from scipy.optimize import minimize
np.random.seed(42)
nll = lambda *args: -log_likelihood(*args)
initial = np.array([m_true, b_true, np.log(f_true)]) + 0.1 * np.random.randn(3)
soln = minimize(nll, initial, args=(x, y, yerr))
m_ml, b_ml, log_f_ml = soln.x
print("Maximum likelihood estimates:")
print("m = {0:.3f}".format(m_ml))
print("b = {0:.3f}".format(b_ml))
print("f = {0:.3f}".format(np.exp(log_f_ml)))
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x0, m_true * x0 + b_true, "k", alpha=0.3, lw=3, label="truth")
plt.plot(x0, np.dot(np.vander(x0, 2), w), "--k", label="LS")
plt.plot(x0, np.dot(np.vander(x0, 2), [m_ml, b_ml]), ":k", label="ML")
plt.legend(fontsize=14)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
###Output
Maximum likelihood estimates:
m = -1.003
b = 4.528
f = 0.454
###Markdown
It's worth noting that the optimize module *minimizes* functions whereas we would like to maximize the likelihood. This goal is equivalent to minimizing the *negative* likelihood (or in this case, the negative *log* likelihood). In this figure, the maximum likelihood (ML) result is plotted as a dotted black line—compared to the true model (grey line) and linear least-squares (LS; dashed line). That looks better! The problem now: how do we estimate the uncertainties on *m* and *b*? What's more, we probably don't really care too much about the value of *f* but it seems worthwhile to propagate any uncertainties about its value to our final estimates of *m* and *b*. This is where MCMC comes in. Marginalization & uncertainty estimation This isn't the place to get into the details of why you might want to use MCMC in your research but it is worth commenting that a common reason is that you would like to marginalize over some "nuisance parameters" and find an estimate of the posterior probability function (the distribution of parameters that is consistent with your dataset) for others. MCMC lets you do both of these things in one fell swoop! You need to start by writing down the posterior probability function (up to a constant):$$ p (m,b,f\,|\,x,y,\sigma) \propto p(m,b,f)\,p(y\,|\,x,\sigma,m,b,f) \quad .$$We have already, in the previous section, written down the likelihood function$$p(y\,|\,x,\sigma,m,b,f)$$so the missing component is the "prior" function$$p(m,b,f) \quad .$$This function encodes any previous knowledge that we have about the parameters: results from other experiments, physically acceptable ranges, etc. It is necessary that you write down priors if you're going to use MCMC because all that MCMC does is draw samples from a probability distribution and you want that to be a probability distribution for your parameters. This is important: **you cannot draw parameter samples from your likelihood function**. This is because a likelihood function is a probability distribution **over datasets** so, conditioned on model parameters, you can draw representative datasets (as demonstrated at the beginning of this exercise) but you cannot draw parameter samples. In this example, we'll use uniform (so-called "uninformative") priors on $m$, $b$ and the logarithm of $f$. For example, we'll use the following conservative prior on $m$:$$p(m) = \left \{\begin{array}{ll} 1 / 5.5 \,, & \mbox{if}\,-5 < m < 1/2 \\ 0 \,, & \mbox{otherwise} \end{array} \right .$$In code, the log-prior is (up to a constant):
###Code
def log_prior(theta):
m, b, log_f = theta
if -5.0 < m < 0.5 and 0.0 < b < 10.0 and -10.0 < log_f < 1.0:
return 0.0
return -np.inf
###Output
_____no_output_____
###Markdown
Then, combining this with the definition of ``log_likelihood`` from above, the full log-probability function is:
###Code
def log_probability(theta, x, y, yerr):
lp = log_prior(theta)
if not np.isfinite(lp):
return -np.inf
return lp + log_likelihood(theta, x, y, yerr)
###Output
_____no_output_____
###Markdown
After all this setup, it's easy to sample this distribution using emcee. We'll start by initializing the walkers in a tiny Gaussian ball around the maximum likelihood result (I've found that this tends to be a pretty good initialization in most cases) and then run 5,000 steps of MCMC.
###Code
import emcee
pos = soln.x + 1e-4 * np.random.randn(32, 3)
nwalkers, ndim = pos.shape
sampler = emcee.EnsembleSampler(
nwalkers, ndim, log_probability, args=(x, y, yerr)
)
sampler.run_mcmc(pos, 5000, progress=True);
###Output
100%|██████████| 5000/5000 [00:07<00:00, 712.03it/s]
###Markdown
Let's take a look at what the sampler has done. A good first step is to look at the time series of the parameters in the chain. The samples can be accessed using the {func}`EnsembleSampler.get_chain` method. This will return an array with the shape `(5000, 32, 3)` giving the parameter values for each walker at each step in the chain. The figure below shows the positions of each walker as a function of the number of steps in the chain:
###Code
fig, axes = plt.subplots(3, figsize=(10, 7), sharex=True)
samples = sampler.get_chain()
labels = ["m", "b", "log(f)"]
for i in range(ndim):
ax = axes[i]
ax.plot(samples[:, :, i], "k", alpha=0.3)
ax.set_xlim(0, len(samples))
ax.set_ylabel(labels[i])
ax.yaxis.set_label_coords(-0.1, 0.5)
axes[-1].set_xlabel("step number");
###Output
_____no_output_____
###Markdown
As mentioned above, the walkers start in small distributions around the maximum likelihood values and then they quickly wander and start exploring the full posterior distribution. In fact, after fewer than 50 steps, the samples seem pretty well "burnt-in". That is a hard statement to make quantitatively, but we can look at an estimate of the integrated autocorrelation time (see the {ref}`autocorr` tutorial for more details):
###Code
tau = sampler.get_autocorr_time()
print(tau)
###Output
[39.16329084 39.96660169 35.8864348 ]
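###Markdown
Another quick diagnostic (a sketch, assuming the `sampler` object from above) is the mean acceptance fraction of the ensemble; as a rule of thumb, values roughly between 0.2 and 0.5 are usually considered healthy:
###Code
# Sketch: fraction of proposed steps that were accepted, averaged over all walkers.
print("Mean acceptance fraction: {0:.3f}".format(np.mean(sampler.acceptance_fraction)))
###Output
_____no_output_____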
###Markdown
This suggests that only about 40 steps are needed for the chain to "forget" where it started. It's not unreasonable to throw away a few times this number of steps as "burn-in". Let's discard the initial 100 steps, thin by about half the autocorrelation time (15 steps), and flatten the chain so that we have a flat list of samples:
###Code
flat_samples = sampler.get_chain(discard=100, thin=15, flat=True)
print(flat_samples.shape)
###Output
(10432, 3)
###Markdown
Results Now that we have this list of samples, let's make one of the most useful plots you can make with your MCMC results: *a corner plot*. You'll need the [corner.py module](http://corner.readthedocs.io) but once you have it, generating a corner plot is as simple as:
###Code
import corner
fig = corner.corner(
flat_samples, labels=labels, truths=[m_true, b_true, np.log(f_true)]
);
###Output
_____no_output_____
###Markdown
The corner plot shows all the one and two dimensional projections of the posterior probability distributions of your parameters. This is useful because it quickly demonstrates all of the covariances between parameters. Also, the way that you find the marginalized distribution for a parameter or set of parameters using the results of the MCMC chain is to project the samples into that plane and then make an N-dimensional histogram. That means that the corner plot shows the marginalized distribution for each parameter independently in the histograms along the diagonal and then the marginalized two dimensional distributions in the other panels. Another diagnostic plot is the projection of your results into the space of the observed data. To do this, you can choose a few (say 100 in this case) samples from the chain and plot them on top of the data points:
###Code
inds = np.random.randint(len(flat_samples), size=100)
for ind in inds:
sample = flat_samples[ind]
plt.plot(x0, np.dot(np.vander(x0, 2), sample[:2]), "C1", alpha=0.1)
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x0, m_true * x0 + b_true, "k", label="truth")
plt.legend(fontsize=14)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
###Output
_____no_output_____
###Markdown
This leaves us with one question: which numbers should go in the abstract? There are a few different options for this but my favorite is to quote the uncertainties based on the 16th, 50th, and 84th percentiles of the samples in the marginalized distributions. To compute these numbers for this example, you would run:
###Code
from IPython.display import display, Math
for i in range(ndim):
mcmc = np.percentile(flat_samples[:, i], [16, 50, 84])
q = np.diff(mcmc)
txt = "\mathrm{{{3}}} = {0:.3f}_{{-{1:.3f}}}^{{{2:.3f}}}"
txt = txt.format(mcmc[1], q[0], q[1], labels[i])
display(Math(txt))
###Output
_____no_output_____
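###Markdown
If you just need plain numbers to reuse elsewhere (a sketch, assuming the `flat_samples` and `labels` arrays from above), the medians and the 16th to 84th percentile ranges can be collected into ordinary variables:
###Code
# Sketch: medians and percentile-based uncertainties as plain arrays.
medians = np.median(flat_samples, axis=0)
lower, upper = np.percentile(flat_samples, [16, 84], axis=0)
for name, med, lo_q, hi_q in zip(labels, medians, lower, upper):
    print("{0}: {1:.3f} (+{2:.3f} / -{3:.3f})".format(name, med, hi_q - med, med - lo_q))
###Output
_____no_output_____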
###Markdown
(line)= Fitting a model to dataIf you're reading this right now then you're probably interested in usingemcee to fit a model to some noisy data.On this page, I'll demonstrate how you might do this in the simplestnon-trivial model that I could think of: fitting a line to data when youdon't believe the error bars on your data.The interested reader should check out [Hogg, Bovy & Lang (2010)](https://arxiv.org/abs/1008.4686) for a much more complete discussion of howto fit a line to data in The Real World™ and why MCMC might come in handy.
###Code
%config InlineBackend.figure_format = "retina"
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
rcParams["font.size"] = 20
###Output
_____no_output_____
###Markdown
The generative probabilistic modelWhen you approach a new problem, the first step is generally to write down the*likelihood function* (the probability of a dataset given the modelparameters).This is equivalent to describing the generative procedure for the data.In this case, we're going to consider a linear model where the quoteduncertainties are underestimated by a constant fractional amount.You can generate a synthetic dataset from this model:
###Code
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(123)
# Choose the "true" parameters.
m_true = -0.9594
b_true = 4.294
f_true = 0.534
# Generate some synthetic data from the model.
N = 50
x = np.sort(10 * np.random.rand(N))
yerr = 0.1 + 0.5 * np.random.rand(N)
y = m_true * x + b_true
y += np.abs(f_true * y) * np.random.randn(N)
y += yerr * np.random.randn(N)
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
x0 = np.linspace(0, 10, 500)
plt.plot(x0, m_true * x0 + b_true, "k", alpha=0.3, lw=3)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
###Output
_____no_output_____
###Markdown
The true model is shown as the thick grey line and the effect of theunderestimated uncertainties is obvious when you look at this figure.The standard way to fit a line to these data (assuming independent Gaussianerror bars) is linear least squares.Linear least squares is appealing because solving for the parameters—andtheir associated uncertainties—is simply a linear algebraic operation.Following the notation in [Hogg, Bovy & Lang (2010)](https://arxiv.org/abs/1008.4686), the linear least squares solution to thesedata is
###Code
A = np.vander(x, 2)
C = np.diag(yerr * yerr)
ATA = np.dot(A.T, A / (yerr ** 2)[:, None])
cov = np.linalg.inv(ATA)
w = np.linalg.solve(ATA, np.dot(A.T, y / yerr ** 2))
print("Least-squares estimates:")
print("m = {0:.3f} ± {1:.3f}".format(w[0], np.sqrt(cov[0, 0])))
print("b = {0:.3f} ± {1:.3f}".format(w[1], np.sqrt(cov[1, 1])))
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x0, m_true * x0 + b_true, "k", alpha=0.3, lw=3, label="truth")
plt.plot(x0, np.dot(np.vander(x0, 2), w), "--k", label="LS")
plt.legend(fontsize=14)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
###Output
Least-squares estimates:
m = -1.104 ± 0.016
b = 5.441 ± 0.091
###Markdown
This figure shows the least-squares estimate of the line parameters as a dashed line.This isn't an unreasonable result but the uncertainties on the slope andintercept seem a little small (because of the small error bars on most of thedata points). Maximum likelihood estimationThe least squares solution found in the previous section is the maximumlikelihood result for a model where the error bars are assumed correct,Gaussian and independent.We know, of course, that this isn't the right model.Unfortunately, there isn't a generalization of least squares that supports amodel like the one that we know to be true.Instead, we need to write down the likelihood function and numericallyoptimize it.In mathematical notation, the correct likelihood function is:$$ \ln\,p(y\,|\,x,\sigma,m,b,f) = -\frac{1}{2} \sum_n \left[ \frac{(y_n-m\,x_n-b)^2}{s_n^2} + \ln \left ( 2\pi\,s_n^2 \right ) \right]$$where$$ s_n^2 = \sigma_n^2+f^2\,(m\,x_n+b)^2 \quad .$$This likelihood function is simply a Gaussian where the variance isunderestimated by some fractional amount: $f$.In Python, you would code this up as:
###Code
def log_likelihood(theta, x, y, yerr):
m, b, log_f = theta
model = m * x + b
sigma2 = yerr ** 2 + model ** 2 * np.exp(2 * log_f)
return -0.5 * np.sum((y - model) ** 2 / sigma2 + np.log(sigma2))
###Output
_____no_output_____
###Markdown
In this code snippet, you'll notice that we're using the logarithm of $f$instead of $f$ itself for reasons that will become clear in the next section.For now, it should at least be clear that this isn't a bad idea because itwill force $f$ to be always positive.A good way of finding this numerical optimum of this likelihood function is touse the [scipy.optimize](https://docs.scipy.org/doc/scipy/reference/optimize.html) module:
###Code
from scipy.optimize import minimize
np.random.seed(42)
nll = lambda *args: -log_likelihood(*args)
initial = np.array([m_true, b_true, np.log(f_true)]) + 0.1 * np.random.randn(3)
soln = minimize(nll, initial, args=(x, y, yerr))
m_ml, b_ml, log_f_ml = soln.x
print("Maximum likelihood estimates:")
print("m = {0:.3f}".format(m_ml))
print("b = {0:.3f}".format(b_ml))
print("f = {0:.3f}".format(np.exp(log_f_ml)))
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x0, m_true * x0 + b_true, "k", alpha=0.3, lw=3, label="truth")
plt.plot(x0, np.dot(np.vander(x0, 2), w), "--k", label="LS")
plt.plot(x0, np.dot(np.vander(x0, 2), [m_ml, b_ml]), ":k", label="ML")
plt.legend(fontsize=14)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
###Output
Maximum likelihood estimates:
m = -1.003
b = 4.528
f = 0.454
###Markdown
It's worth noting that the optimize module *minimizes* functions whereas wewould like to maximize the likelihood.This goal is equivalent to minimizing the *negative* likelihood (or in thiscase, the negative *log* likelihood).In this figure, the maximum likelihood (ML) result is plotted as a dotted black line—compared tothe true model (grey line) and linear least-squares (LS; dashed line).That looks better!The problem now: how do we estimate the uncertainties on *m* and *b*?What's more, we probably don't really care too much about the value of *f* butit seems worthwhile to propagate any uncertainties about its value to ourfinal estimates of *m* and *b*.This is where MCMC comes in. Marginalization & uncertainty estimationThis isn't the place to get into the details of why you might want to use MCMCin your research but it is worth commenting that a common reason is that youwould like to marginalize over some "nuisance parameters" and find an estimateof the posterior probability function (the distribution of parameters that isconsistent with your dataset) for others.MCMC lets you do both of these things in one fell swoop!You need to start by writing down the posterior probability function (up to aconstant):$$ p (m,b,f\,|\,x,y,\sigma) \propto p(m,b,f)\,p(y\,|\,x,\sigma,m,b,f) \quad .$$We have already, in the previous section, written down the likelihood function$$p(y\,|\,x,\sigma,m,b,f)$$so the missing component is the "prior" function$$p(m,b,f) \quad .$$This function encodes any previous knowledge that we have about theparameters: results from other experiments, physically acceptable ranges, etc.It is necessary that you write down priors if you're going to use MCMC becauseall that MCMC does is draw samples from a probability distribution and youwant that to be a probability distribution for your parameters.This is important: **you cannot draw parameter samples from your likelihoodfunction**.This is because a likelihood function is a probability distribution **overdatasets** so, conditioned on model parameters, you can draw representativedatasets (as demonstrated at the beginning of this exercise) but you cannotdraw parameter samples.In this example, we'll use uniform (so-called "uninformative") priors on $m$,$b$ and the logarithm of $f$.For example, we'll use the following conservative prior on $m$:$$p(m) = \left \{\begin{array}{ll} 1 / 5.5 \,, & \mbox{if}\,-5 < m < 1/2 \\ 0 \,, & \mbox{otherwise} \end{array} \right .$$In code, the log-prior is (up to a constant):
###Code
def log_prior(theta):
m, b, log_f = theta
if -5.0 < m < 0.5 and 0.0 < b < 10.0 and -10.0 < log_f < 1.0:
return 0.0
return -np.inf
###Output
_____no_output_____
###Markdown
Then, combining this with the definition of ``log_likelihood`` from above, the fulllog-probability function is:
###Code
def log_probability(theta, x, y, yerr):
lp = log_prior(theta)
if not np.isfinite(lp):
return -np.inf
return lp + log_likelihood(theta, x, y, yerr)
###Output
_____no_output_____
###Markdown
After all this setup, it's easy to sample this distribution using emcee.We'll start by initializing the walkers in a tiny Gaussian ball around themaximum likelihood result (I've found that this tends to be a pretty goodinitialization in most cases) and then run 5,000 steps of MCMC.
###Code
import emcee
pos = soln.x + 1e-4 * np.random.randn(32, 3)
nwalkers, ndim = pos.shape
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability, args=(x, y, yerr))
sampler.run_mcmc(pos, 5000, progress=True);
###Output
100%|██████████| 5000/5000 [00:07<00:00, 712.03it/s]
###Markdown
Let's take a look at what the sampler has done.A good first step is to look at the time series of the parameters inthe chain.The samples can be accessed using the {func}`EnsembleSampler.get_chain` method.This will return an arraywith the shape `(5000, 32, 3)` giving the parameter values for each walkerat each step in the chain.The figure below shows the positions of each walker as a function of thenumber of steps in the chain:
###Code
fig, axes = plt.subplots(3, figsize=(10, 7), sharex=True)
samples = sampler.get_chain()
labels = ["m", "b", "log(f)"]
for i in range(ndim):
ax = axes[i]
ax.plot(samples[:, :, i], "k", alpha=0.3)
ax.set_xlim(0, len(samples))
ax.set_ylabel(labels[i])
ax.yaxis.set_label_coords(-0.1, 0.5)
axes[-1].set_xlabel("step number");
###Output
_____no_output_____
###Markdown
As mentioned above, the walkers start in small distributions around themaximum likelihood values and then they quickly wander and start exploring thefull posterior distribution.In fact, after fewer than 50 steps, the samples seem pretty well "burnt-in".That is a hard statement to make quantitatively, but we can look at an estimateof the integrated autocorrelation time (see the {ref}`autocorr` tutorial for more details):
###Code
tau = sampler.get_autocorr_time()
print(tau)
###Output
[39.16329084 39.96660169 35.8864348 ]
###Markdown
This suggests that only about 40 steps are needed for the chain to "forget" where it started.It's not unreasonable to throw away a few times this number of steps as "burn-in".Let's discard the initial 100 steps, thin by about half the autocorrelation time (15 steps), and flatten the chain so that we have a flat list of samples:
###Code
flat_samples = sampler.get_chain(discard=100, thin=15, flat=True)
print(flat_samples.shape)
###Output
(10432, 3)
###Markdown
ResultsNow that we have this list of samples, let's make one of the most useful plotsyou can make with your MCMC results: *a corner plot*.You'll need the [corner.py module](http://corner.readthedocs.io) butonce you have it, generating a corner plot is as simple as:
###Code
import corner
fig = corner.corner(
flat_samples, labels=labels, truths=[m_true, b_true, np.log(f_true)]
);
###Output
_____no_output_____
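###Markdown
`corner` can also annotate the figure directly (a sketch, assuming the `flat_samples` and `labels` arrays from above): passing `quantiles` and `show_titles` draws the 16th/50th/84th percentiles on the diagonal panels and prints them as titles, matching the way the numbers are summarized at the end of this tutorial.
###Code
# Sketch: corner plot with quantile lines and summary titles on the diagonal panels.
fig = corner.corner(
    flat_samples,
    labels=labels,
    truths=[m_true, b_true, np.log(f_true)],
    quantiles=[0.16, 0.5, 0.84],
    show_titles=True,
)
###Output
_____no_output_____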
###Markdown
The corner plot shows all the one and two dimensional projections of theposterior probability distributions of your parameters.This is useful because it quickly demonstrates all of the covariances betweenparameters.Also, the way that you find the marginalized distribution for a parameter orset of parameters using the results of the MCMC chain is to project thesamples into that plane and then make an N-dimensional histogram.That means that the corner plot shows the marginalized distribution for eachparameter independently in the histograms along the diagonal and then themarginalized two dimensional distributions in the other panels.Another diagnostic plot is the projection of your results into the space ofthe observed data.To do this, you can choose a few (say 100 in this case) samples from the chainand plot them on top of the data points:
###Code
inds = np.random.randint(len(flat_samples), size=100)
for ind in inds:
sample = flat_samples[ind]
plt.plot(x0, np.dot(np.vander(x0, 2), sample[:2]), "C1", alpha=0.1)
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x0, m_true * x0 + b_true, "k", label="truth")
plt.legend(fontsize=14)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
###Output
_____no_output_____
###Markdown
This leaves us with one question: which numbers should go in the abstract?There are a few different options for this but my favorite is to quote theuncertainties based on the 16th, 50th, and 84th percentiles of the samples inthe marginalized distributions.To compute these numbers for this example, you would run:
###Code
from IPython.display import display, Math
for i in range(ndim):
mcmc = np.percentile(flat_samples[:, i], [16, 50, 84])
q = np.diff(mcmc)
txt = "\mathrm{{{3}}} = {0:.3f}_{{-{1:.3f}}}^{{{2:.3f}}}"
txt = txt.format(mcmc[1], q[0], q[1], labels[i])
display(Math(txt))
###Output
_____no_output_____
###Markdown
(line)= Fitting a model to dataIf you're reading this right now then you're probably interested in usingemcee to fit a model to some noisy data.On this page, I'll demonstrate how you might do this in the simplestnon-trivial model that I could think of: fitting a line to data when youdon't believe the error bars on your data.The interested reader should check out [Hogg, Bovy & Lang (2010)](https://arxiv.org/abs/1008.4686) for a much more complete discussion of howto fit a line to data in The Real World™ and why MCMC might come in handy.
###Code
%config InlineBackend.figure_format = "retina"
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
rcParams["font.size"] = 20
###Output
_____no_output_____
###Markdown
The generative probabilistic modelWhen you approach a new problem, the first step is generally to write down the*likelihood function* (the probability of a dataset given the modelparameters).This is equivalent to describing the generative procedure for the data.In this case, we're going to consider a linear model where the quoteduncertainties are underestimated by a constant fractional amount.You can generate a synthetic dataset from this model:
###Code
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(123)
# Choose the "true" parameters.
m_true = -0.9594
b_true = 4.294
f_true = 0.534
# Generate some synthetic data from the model.
N = 50
x = np.sort(10 * np.random.rand(N))
yerr = 0.1 + 0.5 * np.random.rand(N)
y = m_true * x + b_true
y += np.abs(f_true * y) * np.random.randn(N)
y += yerr * np.random.randn(N)
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
x0 = np.linspace(0, 10, 500)
plt.plot(x0, m_true * x0 + b_true, "k", alpha=0.3, lw=3)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
###Output
_____no_output_____
###Markdown
The true model is shown as the thick grey line and the effect of theunderestimated uncertainties is obvious when you look at this figure.The standard way to fit a line to these data (assuming independent Gaussianerror bars) is linear least squares.Linear least squares is appealing because solving for the parameters—andtheir associated uncertainties—is simply a linear algebraic operation.Following the notation in [Hogg, Bovy & Lang (2010)](https://arxiv.org/abs/1008.4686), the linear least squares solution to thesedata is
###Code
A = np.vander(x, 2)
C = np.diag(yerr * yerr)
ATA = np.dot(A.T, A / (yerr**2)[:, None])
cov = np.linalg.inv(ATA)
w = np.linalg.solve(ATA, np.dot(A.T, y / yerr**2))
print("Least-squares estimates:")
print("m = {0:.3f} ± {1:.3f}".format(w[0], np.sqrt(cov[0, 0])))
print("b = {0:.3f} ± {1:.3f}".format(w[1], np.sqrt(cov[1, 1])))
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x0, m_true * x0 + b_true, "k", alpha=0.3, lw=3, label="truth")
plt.plot(x0, np.dot(np.vander(x0, 2), w), "--k", label="LS")
plt.legend(fontsize=14)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
###Output
Least-squares estimates:
m = -1.104 ± 0.016
b = 5.441 ± 0.091
###Markdown
This figure shows the least-squares estimate of the line parameters as a dashed line.This isn't an unreasonable result but the uncertainties on the slope andintercept seem a little small (because of the small error bars on most of thedata points). Maximum likelihood estimationThe least squares solution found in the previous section is the maximumlikelihood result for a model where the error bars are assumed correct,Gaussian and independent.We know, of course, that this isn't the right model.Unfortunately, there isn't a generalization of least squares that supports amodel like the one that we know to be true.Instead, we need to write down the likelihood function and numericallyoptimize it.In mathematical notation, the correct likelihood function is:$$ \ln\,p(y\,|\,x,\sigma,m,b,f) = -\frac{1}{2} \sum_n \left[ \frac{(y_n-m\,x_n-b)^2}{s_n^2} + \ln \left ( 2\pi\,s_n^2 \right ) \right]$$where$$ s_n^2 = \sigma_n^2+f^2\,(m\,x_n+b)^2 \quad .$$This likelihood function is simply a Gaussian where the variance isunderestimated by some fractional amount: $f$.In Python, you would code this up as:
###Code
def log_likelihood(theta, x, y, yerr):
m, b, log_f = theta
model = m * x + b
sigma2 = yerr**2 + model**2 * np.exp(2 * log_f)
return -0.5 * np.sum((y - model) ** 2 / sigma2 + np.log(sigma2))
###Output
_____no_output_____
###Markdown
In this code snippet, you'll notice that we're using the logarithm of $f$ instead of $f$ itself for reasons that will become clear in the next section. For now, it should at least be clear that this isn't a bad idea because it will force $f$ to be always positive. A good way of finding the numerical optimum of this likelihood function is to use the [scipy.optimize](https://docs.scipy.org/doc/scipy/reference/optimize.html) module:
###Code
from scipy.optimize import minimize
np.random.seed(42)
nll = lambda *args: -log_likelihood(*args)
initial = np.array([m_true, b_true, np.log(f_true)]) + 0.1 * np.random.randn(3)
soln = minimize(nll, initial, args=(x, y, yerr))
m_ml, b_ml, log_f_ml = soln.x
print("Maximum likelihood estimates:")
print("m = {0:.3f}".format(m_ml))
print("b = {0:.3f}".format(b_ml))
print("f = {0:.3f}".format(np.exp(log_f_ml)))
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x0, m_true * x0 + b_true, "k", alpha=0.3, lw=3, label="truth")
plt.plot(x0, np.dot(np.vander(x0, 2), w), "--k", label="LS")
plt.plot(x0, np.dot(np.vander(x0, 2), [m_ml, b_ml]), ":k", label="ML")
plt.legend(fontsize=14)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
###Output
Maximum likelihood estimates:
m = -1.003
b = 4.528
f = 0.454
###Markdown
It's worth noting that the optimize module *minimizes* functions whereas we would like to maximize the likelihood. This goal is equivalent to minimizing the *negative* likelihood (or in this case, the negative *log* likelihood). In this figure, the maximum likelihood (ML) result is plotted as a dotted black line—compared to the true model (grey line) and linear least-squares (LS; dashed line). That looks better! The problem now: how do we estimate the uncertainties on *m* and *b*? What's more, we probably don't really care too much about the value of *f* but it seems worthwhile to propagate any uncertainties about its value to our final estimates of *m* and *b*. This is where MCMC comes in.

Marginalization & uncertainty estimation

This isn't the place to get into the details of why you might want to use MCMC in your research but it is worth commenting that a common reason is that you would like to marginalize over some "nuisance parameters" and find an estimate of the posterior probability function (the distribution of parameters that is consistent with your dataset) for others. MCMC lets you do both of these things in one fell swoop! You need to start by writing down the posterior probability function (up to a constant):

$$ p(m,b,f\,|\,x,y,\sigma) \propto p(m,b,f)\,p(y\,|\,x,\sigma,m,b,f) \quad . $$

We have already, in the previous section, written down the likelihood function

$$ p(y\,|\,x,\sigma,m,b,f) $$

so the missing component is the "prior" function

$$ p(m,b,f) \quad . $$

This function encodes any previous knowledge that we have about the parameters: results from other experiments, physically acceptable ranges, etc. It is necessary that you write down priors if you're going to use MCMC because all that MCMC does is draw samples from a probability distribution and you want that to be a probability distribution for your parameters. This is important: **you cannot draw parameter samples from your likelihood function**. This is because a likelihood function is a probability distribution **over datasets** so, conditioned on model parameters, you can draw representative datasets (as demonstrated at the beginning of this exercise) but you cannot draw parameter samples. In this example, we'll use uniform (so-called "uninformative") priors on $m$, $b$ and the logarithm of $f$. For example, we'll use the following conservative prior on $m$:

$$ p(m) = \left\{\begin{array}{ll} 1/5.5 \,, & \mbox{if}\,-5 < m < 1/2 \\ 0 \,, & \mbox{otherwise} \end{array}\right. $$

In code, the log-prior is (up to a constant):
###Code
def log_prior(theta):
m, b, log_f = theta
if -5.0 < m < 0.5 and 0.0 < b < 10.0 and -10.0 < log_f < 1.0:
return 0.0
return -np.inf
###Output
_____no_output_____
###Markdown
Then, combining this with the definition of ``log_likelihood`` from above, the fulllog-probability function is:
###Code
def log_probability(theta, x, y, yerr):
lp = log_prior(theta)
if not np.isfinite(lp):
return -np.inf
return lp + log_likelihood(theta, x, y, yerr)
###Output
_____no_output_____
###Markdown
After all this setup, it's easy to sample this distribution using emcee. We'll start by initializing the walkers in a tiny Gaussian ball around the maximum likelihood result (I've found that this tends to be a pretty good initialization in most cases) and then run 5,000 steps of MCMC.
###Code
import emcee
pos = soln.x + 1e-4 * np.random.randn(32, 3)
nwalkers, ndim = pos.shape
sampler = emcee.EnsembleSampler(
nwalkers, ndim, log_probability, args=(x, y, yerr)
)
sampler.run_mcmc(pos, 5000, progress=True);
###Output
100%|██████████| 5000/5000 [00:07<00:00, 712.03it/s]
###Markdown
Let's take a look at what the sampler has done. A good first step is to look at the time series of the parameters in the chain. The samples can be accessed using the {func}`EnsembleSampler.get_chain` method. This will return an array with the shape `(5000, 32, 3)` giving the parameter values for each walker at each step in the chain. The figure below shows the positions of each walker as a function of the number of steps in the chain:
###Code
fig, axes = plt.subplots(3, figsize=(10, 7), sharex=True)
samples = sampler.get_chain()
labels = ["m", "b", "log(f)"]
for i in range(ndim):
ax = axes[i]
ax.plot(samples[:, :, i], "k", alpha=0.3)
ax.set_xlim(0, len(samples))
ax.set_ylabel(labels[i])
ax.yaxis.set_label_coords(-0.1, 0.5)
axes[-1].set_xlabel("step number");
###Output
_____no_output_____
###Markdown
As mentioned above, the walkers start in small distributions around the maximum likelihood values and then they quickly wander and start exploring the full posterior distribution. In fact, after fewer than 50 steps, the samples seem pretty well "burnt-in". That is a hard statement to make quantitatively, but we can look at an estimate of the integrated autocorrelation time (see the {ref}`autocorr` tutorial for more details):
###Code
tau = sampler.get_autocorr_time()
print(tau)
###Output
[39.16329084 39.96660169 35.8864348 ]
###Markdown
This suggests that only about 40 steps are needed for the chain to "forget" where it started. It's not unreasonable to throw away a few times this number of steps as "burn-in". Let's discard the initial 100 steps, thin by about half the autocorrelation time (15 steps), and flatten the chain so that we have a flat list of samples:
###Code
flat_samples = sampler.get_chain(discard=100, thin=15, flat=True)
print(flat_samples.shape)
###Output
(10432, 3)
###Markdown
Results

Now that we have this list of samples, let's make one of the most useful plots you can make with your MCMC results: *a corner plot*. You'll need the [corner.py module](http://corner.readthedocs.io) but once you have it, generating a corner plot is as simple as:
###Code
import corner
fig = corner.corner(
flat_samples, labels=labels, truths=[m_true, b_true, np.log(f_true)]
);
###Output
_____no_output_____
###Markdown
The corner plot shows all the one and two dimensional projections of the posterior probability distributions of your parameters. This is useful because it quickly demonstrates all of the covariances between parameters. Also, the way that you find the marginalized distribution for a parameter or set of parameters using the results of the MCMC chain is to project the samples into that plane and then make an N-dimensional histogram. That means that the corner plot shows the marginalized distribution for each parameter independently in the histograms along the diagonal and then the marginalized two dimensional distributions in the other panels.

Another diagnostic plot is the projection of your results into the space of the observed data. To do this, you can choose a few (say 100 in this case) samples from the chain and plot them on top of the data points:
###Code
inds = np.random.randint(len(flat_samples), size=100)
for ind in inds:
sample = flat_samples[ind]
plt.plot(x0, np.dot(np.vander(x0, 2), sample[:2]), "C1", alpha=0.1)
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x0, m_true * x0 + b_true, "k", label="truth")
plt.legend(fontsize=14)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
###Output
_____no_output_____
###Markdown
This leaves us with one question: which numbers should go in the abstract? There are a few different options for this but my favorite is to quote the uncertainties based on the 16th, 50th, and 84th percentiles of the samples in the marginalized distributions. To compute these numbers for this example, you would run:
###Code
from IPython.display import display, Math
for i in range(ndim):
mcmc = np.percentile(flat_samples[:, i], [16, 50, 84])
q = np.diff(mcmc)
    txt = r"\mathrm{{{3}}} = {0:.3f}_{{-{1:.3f}}}^{{{2:.3f}}}"  # raw string avoids an invalid escape warning
txt = txt.format(mcmc[1], q[0], q[1], labels[i])
display(Math(txt))
###Output
_____no_output_____ |
docs/notebooks/Repairer.ipynb | ###Markdown
Repairing Code Automatically

So far, we have discussed how to track failures and how to locate defects in code. Let us now discuss how to _repair_ defects – that is, to correct the code such that the failure no longer occurs. We will discuss how to _repair code automatically_ – by systematically searching through possible fixes and evolving the most promising candidates.
###Code
from bookutils import YouTubeVideo
YouTubeVideo("UJTf7cW0idI")
###Output
_____no_output_____
###Markdown
**Prerequisites**

* Re-read the [introduction to debugging](Intro_Debugging.ipynb), notably on how to properly fix code.
* We make use of automatic fault localization, as discussed in the [chapter on statistical debugging](StatisticalDebugger.ipynb).
* We make extensive use of code transformations, as discussed in the [chapter on tracing executions](Tracer.ipynb).
* We make use of [delta debugging](DeltaDebugger.ipynb).
###Code
import bookutils
###Output
_____no_output_____
###Markdown
Synopsis

To [use the code provided in this chapter](Importing.ipynb), write

```python
>>> from debuggingbook.Repairer import
```

and then make use of the following features.

This chapter provides tools and techniques for automated repair of program code. The `Repairer` class takes a `RankingDebugger` debugger as input (such as `OchiaiDebugger` from the [chapter on statistical debugging](StatisticalDebugger.ipynb)). A typical setup looks like this:

```python
from debuggingbook.StatisticalDebugger import OchiaiDebugger

debugger = OchiaiDebugger()
for inputs in TESTCASES:
    with debugger:
        test_foo(inputs)
...
repairer = Repairer(debugger)
```

Here, `test_foo()` is a function that raises an exception if the tested function `foo()` fails. If `foo()` passes, `test_foo()` should not raise an exception.

The `repair()` method of a `Repairer` searches for a repair of the code covered in the debugger (except for methods whose name starts or ends in `test`, such that `foo()`, not `test_foo()` is repaired). `repair()` returns the best fix candidate as a pair `(tree, fitness)` where `tree` is a [Python abstract syntax tree](http://docs.python.org/3/library/ast) (AST) of the fix candidate, and `fitness` is the fitness of the candidate (a value between 0 and 1). A `fitness` of 1.0 means that the candidate passed all tests. A typical usage looks like this:

```python
tree, fitness = repairer.repair()
print(ast.unparse(tree), fitness)
```

Here is a complete example for the `middle()` program. This is the original source code of `middle()`:

```python
def middle(x, y, z):  # type: ignore
    if y < z:
        if x < y:
            return y
        elif x < z:
            return y
    else:
        if x > y:
            return y
        elif x > z:
            return x
    return z
```

We set up a function `middle_test()` that tests it. The `middle_debugger` collects testcases and outcomes:

```python
>>> middle_debugger = OchiaiDebugger()
>>> for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
>>>     with middle_debugger:
>>>         middle_test(x, y, z)
```

The repairer is instantiated with the debugger used (`middle_debugger`):

```python
>>> middle_repairer = Repairer(middle_debugger)
```

The `repair()` method of the repairer attempts to repair the function invoked by the test (`middle()`).

```python
>>> tree, fitness = middle_repairer.repair()
```

The returned AST `tree` can be output via `ast.unparse()`:

```python
>>> print(ast.unparse(tree))
def middle(x, y, z):
    if y < z:
        if x < y:
            return y
        elif x < z:
            return x
    elif x > y:
        return y
    elif x > z:
        return x
    return z
```

The `fitness` value shows how well the repaired program fits the tests. A fitness value of 1.0 shows that the repaired program satisfies all tests.

```python
>>> fitness
1.0
```

Hence, the above program indeed is a perfect repair in the sense that all previously failing tests now pass – our repair was successful.

Here are the classes defined in this chapter. A `Repairer` repairs a program, using a `StatementMutator` and a `CrossoverOperator` to evolve a population of candidates.

Automatic Code Repairs

So far, we have discussed how to locate defects in code, how to track failures back to the defects that caused them, and how to systematically determine failure conditions. Let us now address the last step in debugging – namely, how to _automatically fix code_. Already in the [introduction to debugging](Intro_Debugging.ipynb), we have discussed how to fix code manually. Notably, we have established that a _diagnosis_ (which induces a fix) should show _causality_ (i.e., how the defect causes the failure) and _incorrectness_ (how the defect is wrong).
Is it possible to obtain such a diagnosis automatically? In this chapter, we introduce a technique of _automatic code repair_ – that is, for a given failure, automatically determine a fix that makes the failure go away. To do so, we randomly (but systematically) _mutate_ the program code – that is, insert, change, and delete fragments – until we find a change that actually causes the failing test to pass. If this sounds like an audacious idea, that is because it is. But not only is _automated program repair_ one of the hottest topics of software research in the last decade, it is also being increasingly deployed in industry. At Facebook, for instance, every failing test report comes with an automatically generated _repair suggestion_ – a suggestion that already has been validated to work. Programmers can apply the suggestion as is or use it as basis for their own fixes. The middle() Function Let us introduce our ongoing example. In the [chapter on statistical debugging](StatisticalDebugger.ipynb), we have introduced the `middle()` function – a function that returns the "middle" of three numbers `x`, `y`, and `z`:
###Code
from StatisticalDebugger import middle
# ignore
from bookutils import print_content
# ignore
import inspect
# ignore
_, first_lineno = inspect.getsourcelines(middle)
middle_source = inspect.getsource(middle)
print_content(middle_source, '.py', start_line_number=first_lineno)
###Output
708 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
709 [34mif[39;49;00m y < z:
710 [34mif[39;49;00m x < y:
711 [34mreturn[39;49;00m y
712 [34melif[39;49;00m x < z:
713 [34mreturn[39;49;00m y
714 [34melse[39;49;00m:
715 [34mif[39;49;00m x > y:
716 [34mreturn[39;49;00m y
717 [34melif[39;49;00m x > z:
718 [34mreturn[39;49;00m x
719 [34mreturn[39;49;00m z
###Markdown
In most cases, `middle()` just runs fine:
###Code
middle(4, 5, 6)
###Output
_____no_output_____
###Markdown
In some other cases, though, it does not work correctly:
###Code
middle(2, 1, 3)
###Output
_____no_output_____
###Markdown
Validated Repairs Now, if we only want a repair that fixes this one given failure, this would be very easy. All we have to do is to replace the entire body by a single statement:
###Code
def middle_sort_of_fixed(x, y, z): # type: ignore
return x
###Output
_____no_output_____
###Markdown
You will concur that the failure no longer occurs:
###Code
middle_sort_of_fixed(2, 1, 3)
###Output
_____no_output_____
###Markdown
But this, of course, is not the aim of automatic fixes, nor of fixes in general: We want our fixes not only to make the given failure go away, but we also want the resulting code to be _correct_ (which, of course, is a lot harder). Automatic repair techniques therefore assume the existence of a _test suite_ that can check whether an implementation satisfies its requirements. Better yet, one can use the test suite to gradually check _how close_ one is to perfection: A piece of code that satisfies 99% of all tests is better than one that satisfies ~33% of all tests, as `middle_sort_of_fixed()` would do (assuming the test suite evenly checks the input space).

Genetic Optimization

The common approach for automatic repair follows the principle of _genetic optimization_. Roughly speaking, genetic optimization is a _metaheuristic_ inspired by the process of _natural selection_. The idea is to _evolve_ a selection of _candidate solutions_ towards a maximum _fitness_:

1. Have a selection of _candidates_.
2. Determine the _fitness_ of each candidate.
3. Retain those candidates with the _highest fitness_.
4. Create new candidates from the retained candidates, by applying genetic operations:
   * _Mutation_ mutates some aspect of a candidate.
   * _Crossover_ creates new candidates combining features of two candidates.
5. Repeat until an optimal solution is found.

Applied for automated program repair, this means the following steps:

1. Have a _test suite_ with both failing and passing tests that helps asserting correctness of possible solutions.
2. With the test suite, use [fault localization](StatisticalDebugger.ipynb) to determine potential code locations to be fixed.
3. Systematically _mutate_ the code (by adding, changing, or deleting code) and _cross_ code to create possible fix candidates.
4. Identify the _fittest_ fix candidates – that is, those that satisfy the most tests.
5. _Evolve_ the fittest candidates until a perfect fix is found, or until time resources are depleted.

Let us illustrate these steps in the following sections.

A Test Suite

In automated repair, the larger and the more thorough the test suite, the higher the quality of the resulting fix (if any). Hence, if we want to repair `middle()` automatically, we need a good test suite – with good inputs, but also with good checks. Note that running the test suite commonly takes the most time of automated repair, so a large test suite also comes with extra cost. Let us first focus on achieving high-quality repairs. Hence, we will use the extensive test suites introduced in the [chapter on statistical debugging](StatisticalDebugger.ipynb):
###Code
from StatisticalDebugger import MIDDLE_PASSING_TESTCASES, MIDDLE_FAILING_TESTCASES
###Output
_____no_output_____
###Markdown
The `middle_test()` function fails whenever `middle()` returns an incorrect result:
###Code
def middle_test(x: int, y: int, z: int) -> None:
m = middle(x, y, z)
assert m == sorted([x, y, z])[1]
from ExpectError import ExpectError
with ExpectError():
middle_test(2, 1, 3)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_62204/3661663124.py", line 2, in <module>
middle_test(2, 1, 3)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_62204/40742806.py", line 3, in middle_test
assert m == sorted([x, y, z])[1]
AssertionError (expected)
###Markdown
Locating the Defect Our next step is to find potential defect locations – that is, those locations in the code our mutations should focus upon. Since we already do have two test suites, we can make use of [statistical debugging](StatisticalDebugger.ipynb) to identify likely faulty locations. Our `OchiaiDebugger` ranks individual code lines by how frequently they are executed in failing runs (and not in passing runs).
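For reference, the Ochiai ranking can be written down in a few lines. The sketch below is a generic formulation of the metric (the function name and arguments are ours, not part of the chapter's API), assuming we know, for each line, how many failing and how many passing runs execute it:

```python
import math

def ochiai(failed_with_line: int, passed_with_line: int, total_failed: int) -> float:
    """Ochiai suspiciousness:
    failed(line) / sqrt(total_failed * (failed(line) + passed(line)))"""
    executed = failed_with_line + passed_with_line
    if total_failed == 0 or executed == 0:
        return 0.0
    return failed_with_line / math.sqrt(total_failed * executed)

# A line executed by all 100 failing runs, but also by 400 passing runs:
print(ochiai(100, 400, 100))  # ~0.447
```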
###Code
from StatisticalDebugger import OchiaiDebugger, RankingDebugger
middle_debugger = OchiaiDebugger()
for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
with middle_debugger:
middle_test(x, y, z)
###Output
_____no_output_____
###Markdown
We see that the upper half of the `middle()` code is definitely more suspicious:
###Code
middle_debugger
###Output
_____no_output_____
###Markdown
The most suspicious line is:
###Code
# ignore
location = middle_debugger.rank()[0]
(func_name, lineno) = location
lines, first_lineno = inspect.getsourcelines(middle)
print(lineno, end="")
print_content(lines[lineno - first_lineno], '.py')
###Output
713 [34mreturn[39;49;00m y
###Markdown
with a suspiciousness of:
###Code
# ignore
middle_debugger.suspiciousness(location)
###Output
_____no_output_____
###Markdown
Random Code Mutations Our third step in automatic code repair is to _randomly mutate the code_. Specifically, we want to randomly _delete_, _insert_, and _replace_ statements in the program to be repaired. However, simply synthesizing code _from scratch_ is unlikely to yield anything meaningful – the number of combinations is simply far too high. Already for a three-character identifier name, we have more than 200,000 combinations:
###Code
import string
string.ascii_letters
len(string.ascii_letters + '_') * \
len(string.ascii_letters + '_' + string.digits) * \
len(string.ascii_letters + '_' + string.digits)
###Output
_____no_output_____
###Markdown
Hence, we do _not_ synthesize code from scratch, but instead _reuse_ elements from the program to be fixed, hypothesizing that "a program that contains an error in one area likely implements the correct behavior elsewhere" \cite{LeGoues2012}. This insight has been dubbed the *plastic surgery hypothesis*: content of new code can often be assembled out of fragments of code that already exist in the code base \cite{Barr2014}. For our "plastic surgery", we do not operate on a _textual_ representation of the program, but rather on a _structural_ representation, which by construction allows us to avoid lexical and syntactical errors in the first place. This structural representation is the _abstract syntax tree_ (AST), which we already have seen in various chapters, such as the [chapter on delta debugging](DeltaDebugger.ipynb), the [chapter on tracing](Tracer.ipynb), and excessively in the [chapter on slicing](Slicer.ipynb). The [official Python `ast` reference](http://docs.python.org/3/library/ast) is complete, but a bit brief; the documentation ["Green Tree Snakes - the missing Python AST docs"](https://greentreesnakes.readthedocs.io/en/latest/) provides an excellent introduction. Recapitulating, an AST is a tree representation of the program, showing a hierarchical structure of the program's elements. Here is the AST for our `middle()` function.
###Code
import ast
import inspect
from bookutils import print_content, show_ast
def middle_tree() -> ast.AST:
return ast.parse(inspect.getsource(middle))
show_ast(middle_tree())
###Output
_____no_output_____
###Markdown
You see that it consists of one function definition (`FunctionDef`) with three `arguments` and two statements – one `If` and one `Return`. Each `If` subtree has three branches – one for the condition (`test`), one for the body to be executed if the condition is true (`body`), and one for the `else` case (`orelse`). The `body` and `orelse` branches again are lists of statements. An AST can also be shown as text, which is more compact, yet reveals more information. `ast.dump()` gives not only the class names of elements, but also how they are constructed – actually, the whole expression can be used to construct an AST.
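As a quick illustration of that last point, here is a minimal sketch (a hypothetical example, not taken from the chapter) that builds the statement `print(42)` directly from such constructor calls and executes it:

```python
import ast

# Hand-built AST for `print(42)`, using the constructors shown by `ast.dump()`
tree = ast.Module(
    body=[ast.Expr(value=ast.Call(
        func=ast.Name(id='print', ctx=ast.Load()),
        args=[ast.Constant(value=42)],
        keywords=[]))],
    type_ignores=[])
ast.fix_missing_locations(tree)       # compile() requires line numbers
exec(compile(tree, '<ast>', 'exec'))  # prints 42
```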
###Code
print(ast.dump(middle_tree()))
###Output
Module(body=[FunctionDef(name='middle', args=arguments(posonlyargs=[], args=[arg(arg='x'), arg(arg='y'), arg(arg='z')], kwonlyargs=[], kw_defaults=[], defaults=[]), body=[If(test=Compare(left=Name(id='y', ctx=Load()), ops=[Lt()], comparators=[Name(id='z', ctx=Load())]), body=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Lt()], comparators=[Name(id='y', ctx=Load())]), body=[Return(value=Name(id='y', ctx=Load()))], orelse=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Lt()], comparators=[Name(id='z', ctx=Load())]), body=[Return(value=Name(id='y', ctx=Load()))], orelse=[])])], orelse=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Gt()], comparators=[Name(id='y', ctx=Load())]), body=[Return(value=Name(id='y', ctx=Load()))], orelse=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Gt()], comparators=[Name(id='z', ctx=Load())]), body=[Return(value=Name(id='x', ctx=Load()))], orelse=[])])]), Return(value=Name(id='z', ctx=Load()))], decorator_list=[])], type_ignores=[])
###Markdown
This is the path to the first `return` statement:
###Code
ast.dump(middle_tree().body[0].body[0].body[0].body[0]) # type: ignore
###Output
_____no_output_____
###Markdown
Picking Statements For our mutation operators, we want to use statements from the program itself. Hence, we need a means to find those very statements. The `StatementVisitor` class iterates through an AST, adding all statements it finds in function definitions to its `statements` list. To do so, it subclasses the Python `ast` `NodeVisitor` class, described in the [official Python `ast` reference](http://docs.python.org/3/library/ast).
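As a warm-up, here is a minimal sketch of the `NodeVisitor` convention (illustration only; it is not used in the remainder of the chapter): a method `visit_X()` is invoked for every node of class `X`, and `generic_visit()` recurses into the children.

```python
import ast

class ReturnCounter(ast.NodeVisitor):
    """Count the `return` statements in a tree (illustration only)."""
    def __init__(self) -> None:
        self.returns = 0

    def visit_Return(self, node: ast.Return) -> None:
        self.returns += 1
        self.generic_visit(node)  # keep visiting children

counter = ReturnCounter()
counter.visit(middle_tree())
print(counter.returns)  # 5 `return` statements in middle()
```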
###Code
from ast import NodeVisitor
# ignore
from typing import Any, Callable, Optional, Type, Tuple
from typing import Dict, Union, Set, List, cast
class StatementVisitor(NodeVisitor):
"""Visit all statements within function defs in an AST"""
def __init__(self) -> None:
self.statements: List[Tuple[ast.AST, str]] = []
self.func_name = ""
self.statements_seen: Set[Tuple[ast.AST, str]] = set()
super().__init__()
def add_statements(self, node: ast.AST, attr: str) -> None:
elems: List[ast.AST] = getattr(node, attr, [])
if not isinstance(elems, list):
elems = [elems] # type: ignore
for elem in elems:
stmt = (elem, self.func_name)
if stmt in self.statements_seen:
continue
self.statements.append(stmt)
self.statements_seen.add(stmt)
def visit_node(self, node: ast.AST) -> None:
# Any node other than the ones listed below
self.add_statements(node, 'body')
self.add_statements(node, 'orelse')
def visit_Module(self, node: ast.Module) -> None:
# Module children are defs, classes and globals - don't add
super().generic_visit(node)
def visit_ClassDef(self, node: ast.ClassDef) -> None:
# Class children are defs and globals - don't add
super().generic_visit(node)
def generic_visit(self, node: ast.AST) -> None:
self.visit_node(node)
super().generic_visit(node)
def visit_FunctionDef(self,
node: Union[ast.FunctionDef, ast.AsyncFunctionDef]) -> None:
if not self.func_name:
self.func_name = node.name
self.visit_node(node)
super().generic_visit(node)
self.func_name = ""
def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None:
return self.visit_FunctionDef(node)
###Output
_____no_output_____
###Markdown
The function `all_statements()` returns all statements in the given AST `tree`. If an `ast` class `tp` is given, it only returns instances of that class.
###Code
def all_statements_and_functions(tree: ast.AST,
tp: Optional[Type] = None) -> \
List[Tuple[ast.AST, str]]:
"""
Return a list of pairs (`statement`, `function`) for all statements in `tree`.
If `tp` is given, return only statements of that class.
"""
visitor = StatementVisitor()
visitor.visit(tree)
statements = visitor.statements
if tp is not None:
statements = [s for s in statements if isinstance(s[0], tp)]
return statements
def all_statements(tree: ast.AST, tp: Optional[Type] = None) -> List[ast.AST]:
"""
Return a list of all statements in `tree`.
If `tp` is given, return only statements of that class.
"""
return [stmt for stmt, func_name in all_statements_and_functions(tree, tp)]
###Output
_____no_output_____
###Markdown
Here are all the `return` statements in `middle()`:
###Code
all_statements(middle_tree(), ast.Return)
all_statements_and_functions(middle_tree(), ast.If)
###Output
_____no_output_____
###Markdown
We can randomly pick an element:
###Code
import random
random_node = random.choice(all_statements(middle_tree()))
ast.unparse(random_node)
###Output
_____no_output_____
###Markdown
Mutating StatementsThe main part in mutation, however, is to actually mutate the code of the program under test. To this end, we introduce a `StatementMutator` class – a subclass of `NodeTransformer`, described in the [official Python `ast` reference](http://docs.python.org/3/library/ast). The constructor provides various keyword arguments to configure the mutator.
###Code
from ast import NodeTransformer
import copy
class StatementMutator(NodeTransformer):
"""Mutate statements in an AST for automated repair."""
def __init__(self,
suspiciousness_func:
Optional[Callable[[Tuple[Callable, int]], float]] = None,
source: Optional[List[ast.AST]] = None,
log: bool = False) -> None:
"""
Constructor.
`suspiciousness_func` is a function that takes a location
(function, line_number) and returns a suspiciousness value
between 0 and 1.0. If not given, all locations get the same
suspiciousness of 1.0.
`source` is a list of statements to choose from.
"""
super().__init__()
self.log = log
if suspiciousness_func is None:
def suspiciousness_func(location: Tuple[Callable, int]) -> float:
return 1.0
assert suspiciousness_func is not None
self.suspiciousness_func: Callable = suspiciousness_func
if source is None:
source = []
self.source = source
if self.log > 1:
for i, node in enumerate(self.source):
print(f"Source for repairs #{i}:")
print_content(ast.unparse(node), '.py')
print()
print()
self.mutations = 0
###Output
_____no_output_____
###Markdown
Choosing Suspicious Statements to MutateWe start with deciding which AST nodes to mutate. The method `node_suspiciousness()` returns the suspiciousness for a given node, by invoking the suspiciousness function `suspiciousness_func` given during initialization.
###Code
import warnings
class StatementMutator(StatementMutator):
def node_suspiciousness(self, stmt: ast.AST, func_name: str) -> float:
if not hasattr(stmt, 'lineno'):
warnings.warn(f"{self.format_node(stmt)}: Expected line number")
return 0.0
suspiciousness = self.suspiciousness_func((func_name, stmt.lineno))
if suspiciousness is None: # not executed
return 0.0
return suspiciousness
def format_node(self, node: ast.AST) -> str:
...
###Output
_____no_output_____
###Markdown
The method `node_to_be_mutated()` picks a node (statement) to be mutated. It determines the suspiciousness of all statements, and invokes `random.choices()`, using the suspiciousness as weight. Unsuspicious statements (with zero weight) will not be chosen.
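As an aside, this is how `random.choices()` treats weights – selection is proportional to the weights, and entries with weight 0 are never drawn (a tiny sketch, not part of the repairer):

```python
import random

random.seed(0)
picks = random.choices(['unsuspicious', 'mildly suspicious', 'highly suspicious'],
                       weights=[0.0, 0.2, 0.8], k=10)
print(picks.count('unsuspicious'))       # 0 – never chosen
print(picks.count('highly suspicious'))  # roughly 8 of 10
```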
###Code
class StatementMutator(StatementMutator):
def node_to_be_mutated(self, tree: ast.AST) -> ast.AST:
statements = all_statements_and_functions(tree)
assert len(statements) > 0, "No statements"
weights = [self.node_suspiciousness(stmt, func_name)
for stmt, func_name in statements]
stmts = [stmt for stmt, func_name in statements]
if self.log > 1:
print("Weights:")
for i, stmt in enumerate(statements):
node, func_name = stmt
print(f"{weights[i]:.2} {self.format_node(node)}")
if sum(weights) == 0.0:
# No suspicious line
return random.choice(stmts)
else:
return random.choices(stmts, weights=weights)[0]
###Output
_____no_output_____
###Markdown
Choosing a Mutation Method

The method `visit()` is invoked on all nodes. For nodes marked with a `mutate_me` attribute, it randomly chooses a mutation method (`choose_op()`) and then invokes it on the node. According to the rules of `NodeTransformer`, the mutation method can return

* a new node or a list of nodes, replacing the current node;
* `None`, deleting it; or
* the node itself, keeping things as they are.
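Before we wire this into the mutator, here is a minimal sketch of these three return conventions in isolation (illustration only, not used later):

```python
import ast

class DemoTransformer(ast.NodeTransformer):
    """Illustrate the three return conventions of `NodeTransformer`."""
    def visit_Pass(self, node: ast.Pass):
        return None          # None: delete the node
    def visit_Assign(self, node: ast.Assign):
        return node          # the node itself: keep it unchanged
    def visit_Expr(self, node: ast.Expr):
        return [node, node]  # a list: replace the node by several nodes

demo_tree = ast.parse("x = 1\npass\nprint(x)")
demo_tree = ast.fix_missing_locations(DemoTransformer().visit(demo_tree))
print(ast.unparse(demo_tree))
# x = 1
# print(x)
# print(x)
```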
###Code
import re
RE_SPACE = re.compile(r'[ \t\n]+')
class StatementMutator(StatementMutator):
def choose_op(self) -> Callable:
return random.choice([self.insert, self.swap, self.delete])
def visit(self, node: ast.AST) -> ast.AST:
super().visit(node) # Visits (and transforms?) children
if not node.mutate_me: # type: ignore
return node
op = self.choose_op()
new_node = op(node)
self.mutations += 1
if self.log:
print(f"{node.lineno:4}:{op.__name__ + ':':7} "
f"{self.format_node(node)} "
f"becomes {self.format_node(new_node)}")
return new_node
###Output
_____no_output_____
###Markdown
Swapping Statements

Our first mutator is `swap()`, which replaces the current node `NODE` by a random node found in `source` (using a newly defined `choose_statement()`). As a rule of thumb, we try to avoid inserting entire subtrees with all attached statements; and try to respect only the first line of a node. If the new node has the form

```python
if P:
    BODY
```

we thus only insert

```python
if P:
    pass
```

since the statements in `BODY` have a later chance to get inserted. The same holds for all constructs that have a `BODY`, i.e. `while`, `for`, `try`, `with`, and more.
###Code
class StatementMutator(StatementMutator):
def choose_statement(self) -> ast.AST:
return copy.deepcopy(random.choice(self.source))
class StatementMutator(StatementMutator):
def swap(self, node: ast.AST) -> ast.AST:
"""Replace `node` with a random node from `source`"""
new_node = self.choose_statement()
if isinstance(new_node, ast.stmt):
# The source `if P: X` is added as `if P: pass`
if hasattr(new_node, 'body'):
new_node.body = [ast.Pass()] # type: ignore
if hasattr(new_node, 'orelse'):
new_node.orelse = [] # type: ignore
if hasattr(new_node, 'finalbody'):
new_node.finalbody = [] # type: ignore
# ast.copy_location(new_node, node)
return new_node
###Output
_____no_output_____
###Markdown
Inserting Statements

Our next mutator is `insert()`, which randomly chooses some node from `source` and inserts it after the current node `NODE`. (If `NODE` is a `return` statement, then we insert the new node _before_ `NODE`.) If the statement to be inserted has the form

```python
if P:
    BODY
```

we only insert the "header" of the `if`, resulting in

```python
if P:
    NODE
```

Again, this applies to all constructs that have a `BODY`, i.e., `while`, `for`, `try`, `with`, and more.
###Code
class StatementMutator(StatementMutator):
def insert(self, node: ast.AST) -> Union[ast.AST, List[ast.AST]]:
"""Insert a random node from `source` after `node`"""
new_node = self.choose_statement()
if isinstance(new_node, ast.stmt) and hasattr(new_node, 'body'):
# Inserting `if P: X` as `if P:`
new_node.body = [node] # type: ignore
if hasattr(new_node, 'orelse'):
new_node.orelse = [] # type: ignore
if hasattr(new_node, 'finalbody'):
new_node.finalbody = [] # type: ignore
# ast.copy_location(new_node, node)
return new_node
# Only insert before `return`, not after it
if isinstance(node, ast.Return):
if isinstance(new_node, ast.Return):
return new_node
else:
return [new_node, node]
return [node, new_node]
###Output
_____no_output_____
###Markdown
Deleting Statements

Our last mutator is `delete()`, which deletes the current node `NODE`. The standard case is to replace `NODE` by a `pass` statement. If the statement to be deleted has the form

```python
if P:
    BODY
```

we only delete the "header" of the `if`, resulting in

```python
BODY
```

Again, this applies to all constructs that have a `BODY`, i.e., `while`, `for`, `try`, `with`, and more. If the statement to be deleted has multiple branches, a random branch is chosen (e.g., the `else` branch of an `if` statement).
###Code
class StatementMutator(StatementMutator):
def delete(self, node: ast.AST) -> None:
"""Delete `node`."""
branches = [attr for attr in ['body', 'orelse', 'finalbody']
if hasattr(node, attr) and getattr(node, attr)]
if branches:
# Replace `if P: S` by `S`
branch = random.choice(branches)
new_node = getattr(node, branch)
return new_node
if isinstance(node, ast.stmt):
# Avoid empty bodies; make this a `pass` statement
new_node = ast.Pass()
ast.copy_location(new_node, node)
return new_node
return None # Just delete
from bookutils import quiz
quiz("Why are statements replaced by `pass` rather than deleted?",
[
"Because `if P: pass` is valid Python, while `if P:` is not",
"Because in Python, bodies for `if`, `while`, etc. cannot be empty",
"Because a `pass` node makes a target for future mutations",
"Because it causes the tests to pass"
], '[3 ^ n for n in range(3)]')
###Output
_____no_output_____
###Markdown
Indeed, Python's `compile()` will fail if any of the bodies is an empty list. Also, it leaves us with a statement that can be evolved further.

Helpers

For logging purposes, we introduce a helper function `format_node()` that returns a short string representation of the node.
###Code
class StatementMutator(StatementMutator):
NODE_MAX_LENGTH = 20
def format_node(self, node: ast.AST) -> str:
"""Return a string representation for `node`."""
if node is None:
return "None"
if isinstance(node, list):
return "; ".join(self.format_node(elem) for elem in node)
s = RE_SPACE.sub(' ', ast.unparse(node)).strip()
if len(s) > self.NODE_MAX_LENGTH - len("..."):
s = s[:self.NODE_MAX_LENGTH] + "..."
return repr(s)
###Output
_____no_output_____
###Markdown
All Together

Let us now create the main entry point, which is `mutate()`. It picks the node to be mutated and marks it with a `mutate_me` attribute. By calling `visit()`, it then sets off the `NodeTransformer` transformation.
###Code
class StatementMutator(StatementMutator):
def mutate(self, tree: ast.AST) -> ast.AST:
"""Mutate the given AST `tree` in place. Return mutated tree."""
assert isinstance(tree, ast.AST)
tree = copy.deepcopy(tree)
if not self.source:
self.source = all_statements(tree)
for node in ast.walk(tree):
node.mutate_me = False # type: ignore
node = self.node_to_be_mutated(tree)
node.mutate_me = True # type: ignore
self.mutations = 0
tree = self.visit(tree)
if self.mutations == 0:
warnings.warn("No mutations found")
ast.fix_missing_locations(tree)
return tree
###Output
_____no_output_____
###Markdown
Here are a number of transformations applied by `StatementMutator`:
###Code
mutator = StatementMutator(log=True)
for i in range(10):
new_tree = mutator.mutate(middle_tree())
###Output
9:insert: 'return y' becomes 'return y'
8:insert: 'if x > y: return y e...' becomes 'if x < y: if x > y: ...'
12:insert: 'return z' becomes 'if y < z: return z...'
3:swap: 'if x < y: return y e...' becomes 'return x'
3:swap: 'if x < y: return y e...' becomes 'return z'
3:swap: 'if x < y: return y e...' becomes 'return x'
11:swap: 'return x' becomes 'return y'
10:insert: 'if x > z: return x...' becomes 'if x > z: return x...'; 'return z'
12:delete: 'return z' becomes 'pass'
8:swap: 'if x > y: return y e...' becomes 'if y < z: pass'
###Markdown
This is the effect of the last mutator applied on `middle`:
###Code
print_content(ast.unparse(new_tree), '.py')
###Output
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
[34mif[39;49;00m y < z:
[34mif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
[34melif[39;49;00m y < z:
[34mpass[39;49;00m
[34mreturn[39;49;00m z
###Markdown
Fitness

Now that we can apply random mutations to code, let us find out how good these mutations are. Given our test suites for `middle`, we can check for a given code candidate how many of the previously passing test cases it passes, and how many of the failing test cases it passes. The more tests pass, the higher the _fitness_ of the candidate. Not all passing tests have the same value, though. We want to prevent _regressions_ – that is, having a fix that breaks a previously passing test. The values of `WEIGHT_PASSING` and `WEIGHT_FAILING` set the relative weight (or importance) of passing vs. failing tests; we see that keeping passing tests passing is far more important than fixing failing tests. With the weights below, a candidate that keeps all passing tests intact but fixes no failure scores 0.99, whereas one that fixes every failure but breaks half of the passing tests scores only about 0.5.
###Code
WEIGHT_PASSING = 0.99
WEIGHT_FAILING = 0.01
def middle_fitness(tree: ast.AST) -> float:
"""Compute fitness of a `middle()` candidate given in `tree`"""
original_middle = middle
try:
code = compile(tree, '<fitness>', 'exec')
except ValueError:
return 0 # Compilation error
exec(code, globals())
passing_passed = 0
failing_passed = 0
# Test how many of the passing runs pass
for x, y, z in MIDDLE_PASSING_TESTCASES:
try:
middle_test(x, y, z)
passing_passed += 1
except AssertionError:
pass
passing_ratio = passing_passed / len(MIDDLE_PASSING_TESTCASES)
# Test how many of the failing runs pass
for x, y, z in MIDDLE_FAILING_TESTCASES:
try:
middle_test(x, y, z)
failing_passed += 1
except AssertionError:
pass
failing_ratio = failing_passed / len(MIDDLE_FAILING_TESTCASES)
fitness = (WEIGHT_PASSING * passing_ratio +
WEIGHT_FAILING * failing_ratio)
globals()['middle'] = original_middle
return fitness
###Output
_____no_output_____
###Markdown
Our faulty `middle()` program has a fitness of `WEIGHT_PASSING` (99%), because it passes all the passing tests (but none of the failing ones).
###Code
middle_fitness(middle_tree())
###Output
_____no_output_____
###Markdown
Our "sort of fixed" version of `middle()` gets a much lower fitness:
###Code
middle_fitness(ast.parse("def middle(x, y, z): return x"))
###Output
_____no_output_____
###Markdown
In the [chapter on statistical debugging](StatisticalDebugger), we also defined a fixed version of `middle()`. This gets a fitness of 1.0, passing all tests. (We won't use this fixed version for automated repairs.)
###Code
from StatisticalDebugger import middle_fixed
middle_fixed_source = \
inspect.getsource(middle_fixed).replace('middle_fixed', 'middle').strip()
middle_fitness(ast.parse(middle_fixed_source))
###Output
_____no_output_____
###Markdown
Population

We now set up a _population_ of fix candidates to evolve over time. A higher population size will yield more candidates to check, but also need more time to test; a lower population size will yield fewer candidates, but allow for more evolution steps. We choose a population size of 40 (from \cite{LeGoues2012}).
###Code
POPULATION_SIZE = 40
middle_mutator = StatementMutator()
MIDDLE_POPULATION = [middle_tree()] + \
[middle_mutator.mutate(middle_tree()) for i in range(POPULATION_SIZE - 1)]
###Output
_____no_output_____
###Markdown
We sort the fix candidates according to their fitness. This actually runs all tests on all candidates.
###Code
MIDDLE_POPULATION.sort(key=middle_fitness, reverse=True)
###Output
_____no_output_____
###Markdown
The candidate with the highest fitness is still our original (faulty) `middle()` code:
###Code
print(ast.unparse(MIDDLE_POPULATION[0]),
middle_fitness(MIDDLE_POPULATION[0]))
###Output
def middle(x, y, z):
if y < z:
if x < y:
return y
elif x < z:
return y
elif x > y:
return y
elif x > z:
return x
return z 0.99
###Markdown
At the other end of the spectrum, the candidate with the lowest fitness has some vital functionality removed:
###Code
print(ast.unparse(MIDDLE_POPULATION[-1]),
middle_fitness(MIDDLE_POPULATION[-1]))
###Output
def middle(x, y, z):
if y < z:
if x < y:
return y
elif x < z:
return y
else:
return y
return z 0.5445
###Markdown
Evolution

To evolve our population of candidates, we fill up the population with mutations created from the population, using a `StatementMutator` as described above to create these mutations. Then we reduce the population to its original size, keeping the fittest candidates.
###Code
def evolve_middle() -> None:
global MIDDLE_POPULATION
source = all_statements(middle_tree())
mutator = StatementMutator(source=source)
n = len(MIDDLE_POPULATION)
offspring: List[ast.AST] = []
while len(offspring) < n:
parent = random.choice(MIDDLE_POPULATION)
offspring.append(mutator.mutate(parent))
MIDDLE_POPULATION += offspring
MIDDLE_POPULATION.sort(key=middle_fitness, reverse=True)
MIDDLE_POPULATION = MIDDLE_POPULATION[:n]
###Output
_____no_output_____
###Markdown
This is what happens when evolving our population for the first time; the original source is still our best candidate.
###Code
evolve_middle()
tree = MIDDLE_POPULATION[0]
print(ast.unparse(tree), middle_fitness(tree))
# docassert
assert middle_fitness(tree) < 1.0
###Output
_____no_output_____
###Markdown
However, nothing keeps us from evolving for a few generations more...
###Code
for i in range(50):
evolve_middle()
best_middle_tree = MIDDLE_POPULATION[0]
fitness = middle_fitness(best_middle_tree)
print(f"\rIteration {i:2}: fitness = {fitness} ", end="")
if fitness >= 1.0:
break
# docassert
assert middle_fitness(best_middle_tree) >= 1.0
###Output
_____no_output_____
###Markdown
Success! We find a candidate that actually passes all tests, including the failing ones. Here is the candidate:
###Code
print_content(ast.unparse(best_middle_tree), '.py', start_line_number=1)
###Output
1 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
2 [34mif[39;49;00m y < z:
3 [34mif[39;49;00m x < y:
4 [34mif[39;49;00m x < z:
5 [34mreturn[39;49;00m y
6 [34melif[39;49;00m x < z:
7 [34mreturn[39;49;00m x
8 [34melif[39;49;00m x > y:
9 [34mreturn[39;49;00m y
10 [34melse[39;49;00m:
11 [34mif[39;49;00m x > z:
12 [34mreturn[39;49;00m x
13 [34mreturn[39;49;00m z
14 [34mreturn[39;49;00m z
###Markdown
... and yes, it passes all tests:
###Code
original_middle = middle
code = compile(best_middle_tree, '<string>', 'exec')
exec(code, globals())
for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
middle_test(x, y, z)
middle = original_middle
###Output
_____no_output_____
###Markdown
As the code is already validated by hundreds of test cases, it is very valuable for the programmer. Even if the programmer decides not to use the code as is, the location gives very strong hints on which code to examine and where to apply a fix. However, a closer look at our fix candidate shows that there is some amount of redundancy – that is, superfluous statements.
###Code
quiz("Some of the lines in our fix candidate are redundant. "
"Which are these?",
[
"Line 3: `if x < y:`",
"Line 4: `if x < z:`",
"Line 5: `return y`",
"Line 13: `return z`"
], '[eval(chr(100 - x)) for x in [48, 50]]')
###Output
_____no_output_____
###Markdown
Simplifying As demonstrated in the chapter on [reducing failure-inducing inputs](DeltaDebugger.ipynb), we can use delta debugging on code to get rid of these superfluous statements. The trick for simplification is to have the test function (`test_middle_lines()`) declare a fitness of 1.0 as a "failure". Delta debugging will then simplify the input as long as the "failure" (and hence the maximum fitness obtained) persists.
###Code
from DeltaDebugger import DeltaDebugger
middle_lines = ast.unparse(best_middle_tree).strip().split('\n')
def test_middle_lines(lines: List[str]) -> None:
source = "\n".join(lines)
tree = ast.parse(source)
assert middle_fitness(tree) < 1.0 # "Fail" only while fitness is 1.0
with DeltaDebugger() as dd:
test_middle_lines(middle_lines)
reduced_lines = dd.min_args()['lines']
reduced_source = "\n".join(reduced_lines)
repaired_source = ast.unparse(ast.parse(reduced_source)) # normalize
print_content(repaired_source, '.py')
# docassert
assert len(reduced_lines) < len(middle_lines)
###Output
_____no_output_____
###Markdown
Success! Delta Debugging has eliminated the superfluous statements. We can present the difference to the original as a patch:
###Code
original_source = ast.unparse(ast.parse(middle_source)) # normalize
from ChangeDebugger import diff, print_patch # minor dependency
for patch in diff(original_source, repaired_source):
print_patch(patch)
###Output
@@ -[34m87[39;49;00m,[34m37[39;49;00m +[34m87[39;49;00m,[34m37[39;49;00m @@
x < z:
- [34mreturn[39;49;00m y
+ [34mreturn[39;49;00m x
[34melif[39;49;00m
###Markdown
We can present this patch to the programmer, who will then immediately know what to fix in the `middle()` code.

Crossover

So far, we have only applied one kind of genetic operator – mutation. There is a second one, though, also inspired by natural selection. The *crossover* operation mutates two strands of genes, as illustrated in the following picture. We have two parents (red and blue), each as a sequence of genes. To create "crossed" children, we pick a _crossover point_ and exchange the strands at this very point:

We implement a `CrossoverOperator` class that implements such an operation on two randomly chosen statement lists of two programs. It is used as

```python
crossover = CrossoverOperator()
crossover.crossover(tree_p1, tree_p2)
```

where `tree_p1` and `tree_p2` are two ASTs that are changed in place.

Excursion: Implementing Crossover

Crossing Statement Lists

Applied on programs, a crossover mutation takes two parents and "crosses" a list of statements. As an example, if our "parents" `p1()` and `p2()` are defined as follows:
###Code
def p1(): # type: ignore
a = 1
b = 2
c = 3
def p2(): # type: ignore
x = 1
y = 2
z = 3
###Output
_____no_output_____
###Markdown
Then a crossover operation would produce one child with a body

```python
a = 1
y = 2
z = 3
```

and another child with a body

```python
x = 1
b = 2
c = 3
```

We can easily implement this in a `CrossoverOperator` class in a method `cross_bodies()`.
###Code
class CrossoverOperator:
"""A class for performing statement crossover of Python programs"""
def __init__(self, log: bool = False):
"""Constructor. If `log` is set, turn on logging."""
self.log = log
def cross_bodies(self, body_1: List[ast.AST], body_2: List[ast.AST]) -> \
Tuple[List[ast.AST], List[ast.AST]]:
"""Crossover the statement lists `body_1` x `body_2`. Return new lists."""
assert isinstance(body_1, list)
assert isinstance(body_2, list)
crossover_point_1 = len(body_1) // 2
crossover_point_2 = len(body_2) // 2
return (body_1[:crossover_point_1] + body_2[crossover_point_2:],
body_2[:crossover_point_2] + body_1[crossover_point_1:])
###Output
_____no_output_____
###Markdown
Here's the `CrossoverOperator` applied on `p1` and `p2`:
###Code
tree_p1: ast.Module = ast.parse(inspect.getsource(p1))
tree_p2: ast.Module = ast.parse(inspect.getsource(p2))
body_p1 = tree_p1.body[0].body # type: ignore
body_p2 = tree_p2.body[0].body # type: ignore
body_p1
crosser = CrossoverOperator()
tree_p1.body[0].body, tree_p2.body[0].body = crosser.cross_bodies(body_p1, body_p2) # type: ignore
print_content(ast.unparse(tree_p1), '.py')
print_content(ast.unparse(tree_p2), '.py')
###Output
[34mdef[39;49;00m [32mp2[39;49;00m():
x = [34m1[39;49;00m
b = [34m2[39;49;00m
c = [34m3[39;49;00m
###Markdown
Applying Crossover on Programs

Applying the crossover operation on arbitrary programs is a bit more complex, though. We first have to _find_ lists of statements that we actually can cross over. The `can_cross()` method returns True if we have a list of statements that we can cross. Python modules and classes are excluded, because changing the ordering of definitions will not have much impact on the program functionality, other than introducing errors due to dependencies.
###Code
class CrossoverOperator(CrossoverOperator):
# In modules and class defs, the ordering of elements does not matter (much)
SKIP_LIST = {ast.Module, ast.ClassDef}
def can_cross(self, tree: ast.AST, body_attr: str = 'body') -> bool:
if any(isinstance(tree, cls) for cls in self.SKIP_LIST):
return False
body = getattr(tree, body_attr, [])
return body and len(body) >= 2
###Output
_____no_output_____
###Markdown
Here comes our method `crossover_attr()` which searches for crossover possibilities. It takes two ASTs `t1` and `t2` and an attribute (typically `'body'`) and retrieves the attribute lists $l_1$ (from `t1`) and $l_2$ (from `t2`). If $l_1$ and $l_2$ can be crossed, it crosses them, and is done. Otherwise

* If there is a pair of elements $e_1 \in l_1$ and $e_2 \in l_2$ that has the same name – say, functions of the same name –, it applies itself to $e_1$ and $e_2$.
* Otherwise, it creates random pairs of elements $e_1 \in l_1$ and $e_2 \in l_2$ and applies itself on these very pairs.

`crossover_attr()` changes `t1` and `t2` in place and returns True if a crossover was found; it returns False otherwise.
###Code
class CrossoverOperator(CrossoverOperator):
def crossover_attr(self, t1: ast.AST, t2: ast.AST, body_attr: str) -> bool:
"""
Crossover the bodies `body_attr` of two trees `t1` and `t2`.
Return True if successful.
"""
assert isinstance(t1, ast.AST)
assert isinstance(t2, ast.AST)
assert isinstance(body_attr, str)
if not getattr(t1, body_attr, None) or not getattr(t2, body_attr, None):
return False
if self.crossover_branches(t1, t2):
return True
if self.log > 1:
print(f"Checking {t1}.{body_attr} x {t2}.{body_attr}")
body_1 = getattr(t1, body_attr)
body_2 = getattr(t2, body_attr)
# If both trees have the attribute, we can cross their bodies
if self.can_cross(t1, body_attr) and self.can_cross(t2, body_attr):
if self.log:
print(f"Crossing {t1}.{body_attr} x {t2}.{body_attr}")
new_body_1, new_body_2 = self.cross_bodies(body_1, body_2)
setattr(t1, body_attr, new_body_1)
setattr(t2, body_attr, new_body_2)
return True
# Strategy 1: Find matches in class/function of same name
for child_1 in body_1:
if hasattr(child_1, 'name'):
for child_2 in body_2:
if (hasattr(child_2, 'name') and
child_1.name == child_2.name):
if self.crossover_attr(child_1, child_2, body_attr):
return True
# Strategy 2: Find matches anywhere
for child_1 in random.sample(body_1, len(body_1)):
for child_2 in random.sample(body_2, len(body_2)):
if self.crossover_attr(child_1, child_2, body_attr):
return True
return False
###Output
_____no_output_____
###Markdown
We have a special case for `if` nodes, where we can cross their body and `else` branches. (In Python, `for` and `while` also have `else` branches, but swapping these with loop bodies is likely to create havoc.)
###Code
class CrossoverOperator(CrossoverOperator):
def crossover_branches(self, t1: ast.AST, t2: ast.AST) -> bool:
"""Special case:
`t1` = `if P: S1 else: S2` x `t2` = `if P': S1' else: S2'`
becomes
`t1` = `if P: S2' else: S1'` and `t2` = `if P': S2 else: S1`
Returns True if successful.
"""
assert isinstance(t1, ast.AST)
assert isinstance(t2, ast.AST)
if (hasattr(t1, 'body') and hasattr(t1, 'orelse') and
hasattr(t2, 'body') and hasattr(t2, 'orelse')):
t1 = cast(ast.If, t1) # keep mypy happy
t2 = cast(ast.If, t2)
if self.log:
print(f"Crossing branches {t1} x {t2}")
t1.body, t1.orelse, t2.body, t2.orelse = \
t2.orelse, t2.body, t1.orelse, t1.body
return True
return False
###Output
_____no_output_____
###Markdown
The method `crossover()` is the main entry point. It checks for the special `if` case as described above; if not, it searches for possible crossover points. It raises `CrossoverError` if not successful.
###Code
class CrossoverOperator(CrossoverOperator):
def crossover(self, t1: ast.AST, t2: ast.AST) -> Tuple[ast.AST, ast.AST]:
"""Do a crossover of ASTs `t1` and `t2`.
Raises `CrossoverError` if no crossover is found."""
assert isinstance(t1, ast.AST)
assert isinstance(t2, ast.AST)
for body_attr in ['body', 'orelse', 'finalbody']:
if self.crossover_attr(t1, t2, body_attr):
return t1, t2
raise CrossoverError("No crossover found")
class CrossoverError(ValueError):
pass
###Output
_____no_output_____
###Markdown
End of Excursion Crossover in Action Let us put our `CrossoverOperator` in action. Here is a test case for crossover, involving more deeply nested structures:
###Code
def p1(): # type: ignore
if True:
print(1)
print(2)
print(3)
def p2(): # type: ignore
if True:
print(a)
print(b)
else:
print(c)
print(d)
###Output
_____no_output_____
###Markdown
We invoke the `crossover()` method with two ASTs from `p1` and `p2`:
###Code
crossover = CrossoverOperator()
tree_p1 = ast.parse(inspect.getsource(p1))
tree_p2 = ast.parse(inspect.getsource(p2))
crossover.crossover(tree_p1, tree_p2);
###Output
_____no_output_____
###Markdown
Here is the crossed offspring, mixing statement lists of `p1` and `p2`:
###Code
print_content(ast.unparse(tree_p1), '.py')
print_content(ast.unparse(tree_p2), '.py')
###Output
[34mdef[39;49;00m [32mp2[39;49;00m():
[34mif[39;49;00m [34mTrue[39;49;00m:
[34melse[39;49;00m:
[36mprint[39;49;00m([34m1[39;49;00m)
[36mprint[39;49;00m([34m2[39;49;00m)
[36mprint[39;49;00m([34m3[39;49;00m)
###Markdown
Here is our special case for `if` nodes in action, crossing our `middle()` tree with `p2`.
###Code
middle_t1, middle_t2 = crossover.crossover(middle_tree(),
ast.parse(inspect.getsource(p2)))
###Output
_____no_output_____
###Markdown
We see how the resulting offspring encompasses elements of both sources:
###Code
print_content(ast.unparse(middle_t1), '.py')
print_content(ast.unparse(middle_t2), '.py')
###Output
[34mdef[39;49;00m [32mp2[39;49;00m():
[34mif[39;49;00m [34mTrue[39;49;00m:
[34mif[39;49;00m x > y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > z:
[34mreturn[39;49;00m x
[34melif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
###Markdown
A Repairer Class

So far, we have applied all our techniques on the `middle()` program only. Let us now create a `Repairer` class that applies automatic program repair on arbitrary Python programs. The idea is that you can apply it on some statistical debugger, for which you have gathered passing and failing test cases, and then invoke its `repair()` method to find a "best" fix candidate:

```python
debugger = OchiaiDebugger()
with debugger:
    ...  # run a test
with debugger:
    ...  # run another test
...
repairer = Repairer(debugger)
repairer.repair()
```

Excursion: Implementing Repairer

The main argument to the `Repairer` constructor is the `debugger` to get information from. On top of that, it also allows customizing the classes used for mutation, crossover, and reduction. Setting `targets` allows defining a set of functions to repair; setting `sources` allows defining a set of sources to take repairs from. The constructor then sets up the environment for running tests and repairing, as described below.
###Code
from StackInspector import StackInspector # minor dependency
class Repairer(StackInspector):
"""A class for automatic repair of Python programs"""
def __init__(self, debugger: RankingDebugger, *,
targets: Optional[List[Any]] = None,
sources: Optional[List[Any]] = None,
log: Union[bool, int] = False,
mutator_class: Type = StatementMutator,
crossover_class: Type = CrossoverOperator,
reducer_class: Type = DeltaDebugger,
globals: Optional[Dict[str, Any]] = None):
"""Constructor.
`debugger`: a `RankingDebugger` to take tests and coverage from.
`targets`: a list of functions/modules to be repaired.
(default: the covered functions in `debugger`, except tests)
`sources`: a list of functions/modules to take repairs from.
(default: same as `targets`)
`globals`: if given, a `globals()` dict for executing targets
(default: `globals()` of caller)"""
assert isinstance(debugger, RankingDebugger)
self.debugger = debugger
self.log = log
if targets is None:
targets = self.default_functions()
if not targets:
raise ValueError("No targets to repair")
if sources is None:
sources = self.default_functions()
if not sources:
raise ValueError("No sources to take repairs from")
if self.debugger.function() is None:
raise ValueError("Multiple entry points observed")
self.target_tree: ast.AST = self.parse(targets)
self.source_tree: ast.AST = self.parse(sources)
self.log_tree("Target code to be repaired:", self.target_tree)
if ast.dump(self.target_tree) != ast.dump(self.source_tree):
self.log_tree("Source code to take repairs from:",
self.source_tree)
self.fitness_cache: Dict[str, float] = {}
self.mutator: StatementMutator = \
mutator_class(
source=all_statements(self.source_tree),
suspiciousness_func=self.debugger.suspiciousness,
log=(self.log >= 3))
self.crossover: CrossoverOperator = crossover_class(log=(self.log >= 3))
self.reducer: DeltaDebugger = reducer_class(log=(self.log >= 3))
if globals is None:
globals = self.caller_globals() # see below
self.globals = globals
###Output
_____no_output_____
###Markdown
When we access or execute functions, we do so in the caller's environment, not ours. The `caller_globals()` method from `StackInspector` acts as a replacement for `globals()`. Helper Functions The constructor uses a number of helper functions to create its environment.
###Code
class Repairer(Repairer):
def getsource(self, item: Union[str, Any]) -> str:
"""Get the source for `item`. Can also be a string."""
if isinstance(item, str):
item = self.globals[item]
return inspect.getsource(item)
class Repairer(Repairer):
def default_functions(self) -> List[Callable]:
"""Return the set of functions to be repaired.
Functions whose names start or end in `test` are excluded."""
def is_test(name: str) -> bool:
return name.startswith('test') or name.endswith('test')
return [func for func in self.debugger.covered_functions()
if not is_test(func.__name__)]
class Repairer(Repairer):
def log_tree(self, description: str, tree: Any) -> None:
"""Print out `tree` as source code prefixed by `description`."""
if self.log:
print(description)
print_content(ast.unparse(tree), '.py')
print()
print()
class Repairer(Repairer):
def parse(self, items: List[Any]) -> ast.AST:
"""Read in a list of items into a single tree"""
tree = ast.parse("")
for item in items:
if isinstance(item, str):
item = self.globals[item]
item_lines, item_first_lineno = inspect.getsourcelines(item)
try:
item_tree = ast.parse("".join(item_lines))
except IndentationError:
# inner function or likewise
warnings.warn(f"Can't parse {item.__name__}")
continue
ast.increment_lineno(item_tree, item_first_lineno - 1)
tree.body += item_tree.body
return tree
###Output
_____no_output_____
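###Markdown
As a small illustration of these helpers, `parse()` merges the sources of several functions into a single module-level AST. Here is a minimal sketch (the names `demo_repairer` and `demo_tree` are ours; `middle`, `middle_test`, and `middle_debugger` come from this chapter):
```python
demo_repairer = Repairer(middle_debugger)
demo_tree = demo_repairer.parse([middle, middle_test])
[stmt.name for stmt in demo_tree.body]  # ['middle', 'middle_test']
```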
###Markdown
Running Tests Now that we have set up the environment for `Repairer`, we can implement the individual steps of automatic repair one after the other. The method `run_test_set()` runs the given `test_set` (`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`) and returns the number of passed tests. If `validate` is set, it checks whether the outcomes are as expected.
###Code
class Repairer(Repairer):
def run_test_set(self, test_set: str, validate: bool = False) -> int:
"""
Run given `test_set`
(`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`).
If `validate` is set, check expectations.
Return number of passed tests.
"""
passed = 0
collectors = self.debugger.collectors[test_set]
function = self.debugger.function()
assert function is not None
# FIXME: function may have been redefined
for c in collectors:
if self.log >= 4:
print(f"Testing {c.id()}...", end="")
try:
function(**c.args())
except Exception as err:
if self.log >= 4:
print(f"failed ({err.__class__.__name__})")
if validate and test_set == self.debugger.PASS:
raise err.__class__(
f"{c.id()} should have passed, but failed")
continue
passed += 1
if self.log >= 4:
print("passed")
if validate and test_set == self.debugger.FAIL:
raise FailureNotReproducedError(
f"{c.id()} should have failed, but passed")
return passed
class FailureNotReproducedError(ValueError):
pass
###Output
_____no_output_____
###Markdown
Here is how we use `run_test_set()`:
###Code
repairer = Repairer(middle_debugger)
assert repairer.run_test_set(middle_debugger.PASS) == \
len(MIDDLE_PASSING_TESTCASES)
assert repairer.run_test_set(middle_debugger.FAIL) == 0
###Output
_____no_output_____
###Markdown
The method `run_tests()` runs passing and failing tests, weighing the passed test cases to obtain the overall fitness.
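To make the weighting concrete, here is a back-of-the-envelope sketch; the actual `WEIGHT_PASSING` and `WEIGHT_FAILING` constants are defined earlier in this chapter, and the values below are assumptions for illustration only. A candidate that keeps all passing tests green but fixes only half of the failing tests would score:
```python
weight_passing_example, weight_failing_example = 0.99, 0.01  # assumed weights
passing_ratio = 1.0   # all formerly passing tests still pass
failing_ratio = 0.5   # half of the formerly failing tests now pass
weight_passing_example * passing_ratio + weight_failing_example * failing_ratio
# 0.995
```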
###Code
class Repairer(Repairer):
def weight(self, test_set: str) -> float:
"""
Return the weight of `test_set`
(`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`).
"""
return {
self.debugger.PASS: WEIGHT_PASSING,
self.debugger.FAIL: WEIGHT_FAILING
}[test_set]
def run_tests(self, validate: bool = False) -> float:
"""Run passing and failing tests, returning weighted fitness."""
fitness = 0.0
for test_set in [self.debugger.PASS, self.debugger.FAIL]:
passed = self.run_test_set(test_set, validate=validate)
ratio = passed / len(self.debugger.collectors[test_set])
fitness += self.weight(test_set) * ratio
return fitness
###Output
_____no_output_____
###Markdown
The method `validate()` ensures the observed tests can be adequately reproduced.
###Code
class Repairer(Repairer):
def validate(self) -> None:
fitness = self.run_tests(validate=True)
assert fitness == self.weight(self.debugger.PASS)
repairer = Repairer(middle_debugger)
repairer.validate()
###Output
_____no_output_____
###Markdown
(Re)defining Functions Our `run_tests()` method above does not yet redefine the function to be repaired. This is done by the `fitness()` function, which compiles and defines the given repair candidate `tree` before testing it. It caches and returns the fitness.
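The core trick is to compile the candidate AST and execute the resulting code object in the namespace of the function under repair, so that subsequent test runs pick up the new definition. Here is a minimal standalone sketch of that idea (hypothetical names, independent of the actual implementation below):
```python
candidate_source = "def greet() -> str:\n    return 'repaired'"
candidate_tree = ast.parse(candidate_source)
namespace = {}                        # stands in for the target globals()
exec(compile(candidate_tree, '<candidate>', 'exec'), namespace)
namespace['greet']()                  # 'repaired'
```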
###Code
class Repairer(Repairer):
def fitness(self, tree: ast.AST) -> float:
"""Test `tree`, returning its fitness"""
key = cast(str, ast.dump(tree))
if key in self.fitness_cache:
return self.fitness_cache[key]
# Save defs
original_defs: Dict[str, Any] = {}
for name in self.toplevel_defs(tree):
if name in self.globals:
original_defs[name] = self.globals[name]
else:
warnings.warn(f"Couldn't find definition of {repr(name)}")
assert original_defs, f"Couldn't find any definition"
if self.log >= 3:
print("Repair candidate:")
print_content(ast.unparse(tree), '.py')
print()
# Create new definition
try:
code = compile(tree, '<Repairer>', 'exec')
except ValueError: # Compilation error
code = None
if code is None:
if self.log >= 3:
print(f"Fitness = 0.0 (compilation error)")
fitness = 0.0
return fitness
# Execute new code, defining new functions in `self.globals`
exec(code, self.globals)
# Set new definitions in the namespace (`__globals__`)
# of the function we will be calling.
function = self.debugger.function()
assert function is not None
assert hasattr(function, '__globals__')
for name in original_defs:
function.__globals__[name] = self.globals[name] # type: ignore
fitness = self.run_tests(validate=False)
# Restore definitions
for name in original_defs:
function.__globals__[name] = original_defs[name] # type: ignore
self.globals[name] = original_defs[name]
if self.log >= 3:
print(f"Fitness = {fitness}")
self.fitness_cache[key] = fitness
return fitness
###Output
_____no_output_____
###Markdown
The helper function `toplevel_defs()` helps saving and restoring the environment before and after redefining the function under repair.
###Code
class Repairer(Repairer):
def toplevel_defs(self, tree: ast.AST) -> List[str]:
"""Return a list of names of defined functions and classes in `tree`"""
visitor = DefinitionVisitor()
visitor.visit(tree)
assert hasattr(visitor, 'definitions')
return visitor.definitions
class DefinitionVisitor(NodeVisitor):
def __init__(self) -> None:
self.definitions: List[str] = []
def add_definition(self, node: Union[ast.ClassDef,
ast.FunctionDef,
ast.AsyncFunctionDef]) -> None:
self.definitions.append(node.name)
def visit_FunctionDef(self, node: ast.FunctionDef) -> None:
self.add_definition(node)
def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None:
self.add_definition(node)
def visit_ClassDef(self, node: ast.ClassDef) -> None:
self.add_definition(node)
###Output
_____no_output_____
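###Markdown
As a quick sketch of the visitor at work, the `middle()` tree yields exactly one definition name:
```python
visitor = DefinitionVisitor()
visitor.visit(middle_tree())
visitor.definitions  # ['middle']
```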
###Markdown
Here's an example for `fitness()`:
###Code
repairer = Repairer(middle_debugger, log=1)
good_fitness = repairer.fitness(middle_tree())
good_fitness
# docassert
assert good_fitness >= 0.99, "fitness() failed"
bad_middle_tree = ast.parse("def middle(x, y, z): return x")
bad_fitness = repairer.fitness(bad_middle_tree)
bad_fitness
# docassert
assert bad_fitness < 0.5, "fitness() failed"
###Output
_____no_output_____
###Markdown
Repairing Now for the actual `repair()` method, which creates a `population` and then evolves it until the fitness is 1.0 or the given number of iterations is spent.
###Code
import traceback
class Repairer(Repairer):
def initial_population(self, size: int) -> List[ast.AST]:
"""Return an initial population of size `size`"""
return [self.target_tree] + \
[self.mutator.mutate(copy.deepcopy(self.target_tree))
for i in range(size - 1)]
def repair(self, population_size: int = POPULATION_SIZE, iterations: int = 100) -> \
Tuple[ast.AST, float]:
"""
Repair the function we collected test runs from.
Use a population size of `population_size` and
at most `iterations` iterations.
Returns a pair (`ast`, `fitness`) where
`ast` is the AST of the repaired function, and
`fitness` is its fitness (between 0 and 1.0)
"""
self.validate()
population = self.initial_population(population_size)
last_key = ast.dump(self.target_tree)
for iteration in range(iterations):
population = self.evolve(population)
best_tree = population[0]
fitness = self.fitness(best_tree)
if self.log:
print(f"Evolving population: "
f"iteration{iteration:4}/{iterations} "
f"fitness = {fitness:.5} \r", end="")
if self.log >= 2:
best_key = ast.dump(best_tree)
if best_key != last_key:
print()
print()
self.log_tree(f"New best code (fitness = {fitness}):",
best_tree)
last_key = best_key
if fitness >= 1.0:
break
if self.log:
print()
if self.log and self.log < 2:
self.log_tree(f"Best code (fitness = {fitness}):", best_tree)
best_tree = self.reduce(best_tree)
fitness = self.fitness(best_tree)
self.log_tree(f"Reduced code (fitness = {fitness}):", best_tree)
return best_tree, fitness
###Output
_____no_output_____
###Markdown
Evolving The evolution of our population takes place in the `evolve()` method. In contrast to the `evolve_middle()` function above, we use crossover to create the offspring, which we still mutate afterwards.
###Code
class Repairer(Repairer):
def evolve(self, population: List[ast.AST]) -> List[ast.AST]:
"""Evolve the candidate population by mutating and crossover."""
n = len(population)
# Create offspring as crossover of parents
offspring: List[ast.AST] = []
while len(offspring) < n:
parent_1 = copy.deepcopy(random.choice(population))
parent_2 = copy.deepcopy(random.choice(population))
try:
self.crossover.crossover(parent_1, parent_2)
except CrossoverError:
pass # Just keep parents
offspring += [parent_1, parent_2]
# Mutate offspring
offspring = [self.mutator.mutate(tree) for tree in offspring]
# Add it to population
population += offspring
# Keep the fitter part of the population
population.sort(key=self.fitness_key, reverse=True)
population = population[:n]
return population
###Output
_____no_output_____
###Markdown
A second difference is that we not only sort by fitness, but also by tree size – with equal fitness, a smaller tree will thus be favored. This helps keep fixes and patches small.
###Code
class Repairer(Repairer):
def fitness_key(self, tree: ast.AST) -> Tuple[float, int]:
"""Key to be used for sorting the population"""
tree_size = len([node for node in ast.walk(tree)])
return (self.fitness(tree), -tree_size)
###Output
_____no_output_____
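###Markdown
To see how such a composite key orders candidates, here is a tiny standalone sketch with made-up fitness values and tree sizes:
```python
# (fitness, -tree_size) pairs: higher fitness first; ties broken by smaller size
candidates = {'big': (0.8, -120), 'small': (0.8, -60), 'weak': (0.5, -10)}
sorted(candidates, key=candidates.get, reverse=True)
# ['small', 'big', 'weak']
```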
###Markdown
Simplifying The last step in repairing is simplifying the code. As demonstrated in the chapter on [reducing failure-inducing inputs](DeltaDebugger.ipynb), we can use delta debugging on code to get rid of superfluous statements. To this end, we convert the tree to lines, run delta debugging on them, and then convert the result back to a tree.
###Code
class Repairer(Repairer):
def reduce(self, tree: ast.AST) -> ast.AST:
"""Simplify `tree` using delta debugging."""
original_fitness = self.fitness(tree)
source_lines = ast.unparse(tree).split('\n')
with self.reducer:
self.test_reduce(source_lines, original_fitness)
reduced_lines = self.reducer.min_args()['source_lines']
reduced_source = "\n".join(reduced_lines)
return ast.parse(reduced_source)
###Output
_____no_output_____
###Markdown
As discussed above, we simplify the code by having the test function (`test_reduce()`) treat reaching the maximum fitness obtained so far as a "failure". Delta debugging will then simplify the input as long as the "failure" (and hence the maximum fitness obtained) persists.
###Code
class Repairer(Repairer):
def test_reduce(self, source_lines: List[str], original_fitness: float) -> None:
"""Test function for delta debugging."""
try:
source = "\n".join(source_lines)
tree = ast.parse(source)
fitness = self.fitness(tree)
assert fitness < original_fitness
except AssertionError:
raise
except SyntaxError:
raise
except IndentationError:
raise
except Exception:
# traceback.print_exc() # Uncomment to see internal errors
raise
###Output
_____no_output_____
###Markdown
End of Excursion Repairer in Action Let us go and apply `Repairer` in practice. We initialize it with `middle_debugger`, which has (still) collected the passing and failing runs for `middle_test()`. We also set `log` for some diagnostics along the way.
###Code
repairer = Repairer(middle_debugger, log=True)
###Output
Target code to be repaired:
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
[34mif[39;49;00m y < z:
[34mif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > z:
[34mreturn[39;49;00m x
[34mreturn[39;49;00m z
###Markdown
We now invoke `repair()` to evolve our population. After a few iterations, we find a best tree with perfect fitness.
###Code
best_tree, fitness = repairer.repair()
print_content(ast.unparse(best_tree), '.py')
fitness
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
Again, we have a perfect solution. Here, we did not even need to simplify the code in the last iteration, as our `fitness_key()` function favors smaller implementations. Removing HTML Markup Let us apply `Repairer` to our other ongoing example, namely `remove_html_markup()`.
###Code
def remove_html_markup(s): # type: ignore
tag = False
quote = False
out = ""
for c in s:
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif c == '"' or c == "'" and tag:
quote = not quote
elif not tag:
out = out + c
return out
def remove_html_markup_tree() -> ast.AST:
return ast.parse(inspect.getsource(remove_html_markup))
###Output
_____no_output_____
###Markdown
To run `Repairer` on `remove_html_markup()`, we need a test and a test suite. `remove_html_markup_test()` raises an exception if applying `remove_html_markup()` on the given `html` string does not yield the `plain` string.
###Code
def remove_html_markup_test(html: str, plain: str) -> None:
outcome = remove_html_markup(html)
assert outcome == plain, \
f"Got {repr(outcome)}, expected {repr(plain)}"
###Output
_____no_output_____
###Markdown
Now for the test suite. We use a simple fuzzing scheme to create dozens of passing and failing test cases in `REMOVE_HTML_PASSING_TESTCASES` and `REMOVE_HTML_FAILING_TESTCASES`, respectively. Excursion: Creating HTML Test Cases
###Code
def random_string(length: int = 5, start: int = ord(' '), end: int = ord('~')) -> str:
return "".join(chr(random.randrange(start, end + 1)) for i in range(length))
random_string()
def random_id(length: int = 2) -> str:
return random_string(start=ord('a'), end=ord('z'))
random_id()
def random_plain() -> str:
return random_string().replace('<', '').replace('>', '')
def random_string_noquotes() -> str:
return random_string().replace('"', '').replace("'", '')
def random_html(depth: int = 0) -> Tuple[str, str]:
prefix = random_plain()
tag = random_id()
if depth > 0:
html, plain = random_html(depth - 1)
else:
html = plain = random_plain()
attr = random_id()
value = '"' + random_string_noquotes() + '"'
postfix = random_plain()
return f'{prefix}<{tag} {attr}={value}>{html}</{tag}>{postfix}', \
prefix + plain + postfix
random_html()
def remove_html_testcase(expected: bool = True) -> Tuple[str, str]:
while True:
html, plain = random_html()
outcome = (remove_html_markup(html) == plain)
if outcome == expected:
return html, plain
REMOVE_HTML_TESTS = 100
REMOVE_HTML_PASSING_TESTCASES = \
[remove_html_testcase(True) for i in range(REMOVE_HTML_TESTS)]
REMOVE_HTML_FAILING_TESTCASES = \
[remove_html_testcase(False) for i in range(REMOVE_HTML_TESTS)]
###Output
_____no_output_____
###Markdown
End of Excursion Here is a passing test case:
###Code
REMOVE_HTML_PASSING_TESTCASES[0]
html, plain = REMOVE_HTML_PASSING_TESTCASES[0]
remove_html_markup_test(html, plain)
###Output
_____no_output_____
###Markdown
Here is a failing test case (containing a double quote in the plain text):
###Code
REMOVE_HTML_FAILING_TESTCASES[0]
with ExpectError():
html, plain = REMOVE_HTML_FAILING_TESTCASES[0]
remove_html_markup_test(html, plain)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_62204/2578453007.py", line 3, in <module>
remove_html_markup_test(html, plain)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_62204/700130947.py", line 3, in remove_html_markup_test
assert outcome == plain, \
AssertionError: Got '3AGe7!%H</qcguk>6azh_', expected '3AGe7"!%H6azh_' (expected)
###Markdown
We run our tests, collecting the outcomes in `html_debugger`.
###Code
html_debugger = OchiaiDebugger()
for html, plain in (REMOVE_HTML_PASSING_TESTCASES +
REMOVE_HTML_FAILING_TESTCASES):
with html_debugger:
remove_html_markup_test(html, plain)
###Output
_____no_output_____
###Markdown
The suspiciousness distribution will not be of much help here – pretty much all lines in `remove_html_markup()` have the same suspiciousness.
###Code
html_debugger
###Output
_____no_output_____
###Markdown
Let us create our repairer and run it.
###Code
html_repairer = Repairer(html_debugger, log=True)
best_tree, fitness = html_repairer.repair(iterations=20)
# docassert
assert fitness < 1.0
###Output
_____no_output_____
###Markdown
We see that the "best" code is still our original code, with no changes. And we can set `iterations` to 50, 100, 200... – our `Repairer` won't be able to repair it.
###Code
quiz("Why couldn't `Repairer()` repair `remove_html_markup()`?",
[
"The population is too small!",
"The suspiciousness is too evenly distributed!",
"We need more test cases!",
"We need more iterations!",
"There is no statement in the source with a correct condition!",
"The population is too big!",
], '5242880 >> 20')
###Output
_____no_output_____
###Markdown
You can explore all of the hypotheses above by changing the appropriate parameters, but you won't be able to change the outcome. The problem is that, unlike `middle()`, there is no statement (or combination thereof) in `remove_html_markup()` that could be used to make the failure go away. For this, we need to mutate another aspect of the code, which we will explore in the next section. Mutating Conditions The `Repairer` class is very configurable. The individual steps in automated repair can all be replaced by providing your own classes in the keyword arguments of its `__init__()` constructor:* To change fault localization, pass a different `debugger` that is a subclass of `RankingDebugger`.* To change the mutation operator, set `mutator_class` to a subclass of `StatementMutator`.* To change the crossover operator, set `crossover_class` to a subclass of `CrossoverOperator`.* To change the reduction algorithm, set `reducer_class` to a subclass of `Reducer`. In this section, we will explore how to extend the mutation operator such that it can mutate _conditions_ for control constructs such as `if`, `while`, or `for`. To this end, we introduce a new class `ConditionMutator` subclassing `StatementMutator`. Collecting Conditions Let us start with a few simple supporting functions. The function `all_conditions()` retrieves all control conditions from an AST.
###Code
def all_conditions(trees: Union[ast.AST, List[ast.AST]],
tp: Optional[Type] = None) -> List[ast.expr]:
"""
Return all conditions from the AST (or AST list) `trees`.
If `tp` is given, return only elements of that type.
"""
if not isinstance(trees, list):
assert isinstance(trees, ast.AST)
trees = [trees]
visitor = ConditionVisitor()
for tree in trees:
visitor.visit(tree)
conditions = visitor.conditions
if tp is not None:
conditions = [c for c in conditions if isinstance(c, tp)]
return conditions
###Output
_____no_output_____
###Markdown
`all_conditions()` uses a `ConditionVisitor` class to walk the tree and collect the conditions:
###Code
class ConditionVisitor(NodeVisitor):
def __init__(self) -> None:
self.conditions: List[ast.expr] = []
self.conditions_seen: Set[str] = set()
super().__init__()
def add_conditions(self, node: ast.AST, attr: str) -> None:
elems = getattr(node, attr, [])
if not isinstance(elems, list):
elems = [elems]
elems = cast(List[ast.expr], elems)
for elem in elems:
elem_str = ast.unparse(elem)
if elem_str not in self.conditions_seen:
self.conditions.append(elem)
self.conditions_seen.add(elem_str)
def visit_BoolOp(self, node: ast.BoolOp) -> ast.AST:
self.add_conditions(node, 'values')
return super().generic_visit(node)
def visit_UnaryOp(self, node: ast.UnaryOp) -> ast.AST:
if isinstance(node.op, ast.Not):
self.add_conditions(node, 'operand')
return super().generic_visit(node)
def generic_visit(self, node: ast.AST) -> ast.AST:
if hasattr(node, 'test'):
self.add_conditions(node, 'test')
return super().generic_visit(node)
###Output
_____no_output_____
###Markdown
Here are all the conditions in `remove_html_markup()`. This is some material to construct new conditions from.
###Code
[ast.unparse(cond).strip()
for cond in all_conditions(remove_html_markup_tree())]
###Output
_____no_output_____
###Markdown
Mutating Conditions Here comes our `ConditionMutator` class. We subclass from `StatementMutator` and set an attribute `self.conditions` containing all the conditions in the source. The method `choose_condition()` randomly picks a condition.
###Code
class ConditionMutator(StatementMutator):
"""Mutate conditions in an AST"""
def __init__(self, *args: Any, **kwargs: Any) -> None:
"""Constructor. Arguments are as with `StatementMutator` constructor."""
super().__init__(*args, **kwargs)
self.conditions = all_conditions(self.source)
if self.log:
print("Found conditions",
[ast.unparse(cond).strip()
for cond in self.conditions])
def choose_condition(self) -> ast.expr:
"""Return a random condition from source."""
return copy.deepcopy(random.choice(self.conditions))
###Output
_____no_output_____
###Markdown
The actual mutation takes place in the `swap()` method. If the node to be replaced has a `test` attribute (i.e. a controlling predicate), then we pick a random condition `cond` from the source and randomly choose from:* **set**: We change `test` to `cond`.* **not**: We invert `test`.* **and**: We replace `test` by `cond and test`.* **or**: We replace `test` by `cond or test`. Over time, this might lead to operators propagating across the population.
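Before looking at the implementation, here is a small sketch of what these four choices would produce for a made-up condition `c == '<'` and a picked condition `not quote`:
```python
test = ast.parse("c == '<'", mode='eval').body    # the node's original test
cond = ast.parse("not quote", mode='eval').body   # condition picked from source
ast.unparse(ast.UnaryOp(op=ast.Not(), operand=test))        # "not c == '<'"
ast.unparse(ast.BoolOp(op=ast.And(), values=[cond, test]))  # "not quote and c == '<'"
ast.unparse(ast.BoolOp(op=ast.Or(), values=[cond, test]))   # "not quote or c == '<'"
```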
###Code
class ConditionMutator(ConditionMutator):
def choose_bool_op(self) -> str:
return random.choice(['set', 'not', 'and', 'or'])
def swap(self, node: ast.AST) -> ast.AST:
"""Replace `node` condition by a condition from `source`"""
if not hasattr(node, 'test'):
return super().swap(node)
node = cast(ast.If, node)
cond = self.choose_condition()
new_test = None
choice = self.choose_bool_op()
if choice == 'set':
new_test = cond
elif choice == 'not':
new_test = ast.UnaryOp(op=ast.Not(), operand=node.test)
elif choice == 'and':
new_test = ast.BoolOp(op=ast.And(), values=[cond, node.test])
elif choice == 'or':
new_test = ast.BoolOp(op=ast.Or(), values=[cond, node.test])
else:
raise ValueError("Unknown boolean operand")
if new_test:
# ast.copy_location(new_test, node)
node.test = new_test
return node
###Output
_____no_output_____
###Markdown
We can use the mutator just like `StatementMutator`, except that some of the mutations will also include new conditions:
###Code
mutator = ConditionMutator(source=all_statements(remove_html_markup_tree()),
log=True)
for i in range(10):
new_tree = mutator.mutate(remove_html_markup_tree())
###Output
2:insert: 'tag = False' becomes 'for c in s: tag = Fa...'
10:insert: 'tag = False' becomes 'tag = False'; 'out = out + c'
8:insert: 'tag = True' becomes 'if c == \'"\' or (c ==...'
12:insert: 'quote = not quote' becomes 'quote = not quote'; 'tag = True'
10:delete: 'tag = False' becomes 'pass'
12:insert: 'quote = not quote' becomes "if c == '>' and (not..."
3:insert: 'quote = False' becomes 'quote = False'; "out = ''"
14:swap: 'out = out + c' becomes 'quote = False'
12:insert: 'quote = not quote' becomes 'for c in s: quote = ...'
3:delete: 'quote = False' becomes 'pass'
###Markdown
Let us put our new mutator into action, again in a `Repairer()`. To activate it, all we need to do is pass it as the `mutator_class` keyword argument.
###Code
condition_repairer = Repairer(html_debugger,
mutator_class=ConditionMutator,
log=2)
###Output
Target code to be repaired:
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mFalse[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mreturn[39;49;00m out
###Markdown
We might need more iterations for this one. Let us see...
###Code
best_tree, fitness = condition_repairer.repair(iterations=200)
repaired_source = ast.unparse(best_tree)
print_content(repaired_source, '.py')
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
Success again! We have automatically repaired `remove_html_markup()` – the resulting code passes all tests, including those that were previously failing. Again, we can present the fix as a patch:
###Code
original_source = ast.unparse(remove_html_markup_tree())
for patch in diff(original_source, repaired_source):
print_patch(patch)
###Output
@@ -[34m210[39;49;00m,[34m53[39;49;00m +[34m210[39;49;00m,[34m39[39;49;00m @@
lse
- [34melif[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag):
+ [34melif[39;49;00m tag [35mand[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m:
###Markdown
However, looking at the patch, one may come up with doubts.
###Code
quiz("Is this actually the best solution?",
[
"Yes, sure, of course. Why?",
"Err - what happened to single quotes?"
], 1 << 1)
###Output
_____no_output_____
###Markdown
Indeed – our solution does not seem to handle single quotes anymore. Why is that so?
###Code
quiz("Why aren't single quotes handled in the solution?",
[
"Because they're not important. "
"I mean, y'know, who uses 'em anyway?",
"Because they are not part of our tests? "
"Let me look up how they are constructed..."
], 1 << 1)
###Output
_____no_output_____
###Markdown
Correct! Our test cases do not include single quotes – at least not in the interior of HTML tags – and thus, automatic repair did not care to preserve their handling. How can we fix this? An easy way is to include an appropriate test case in our set – a test case that passes with the original `remove_html_markup()`, yet fails with the "repaired" `remove_html_markup()` as shown above.
###Code
with html_debugger:
remove_html_markup_test("<foo quote='>abc'>me</foo>", "me")
###Output
_____no_output_____
###Markdown
Let us repeat the repair with the extended test set:
###Code
best_tree, fitness = condition_repairer.repair(iterations=200)
###Output
Evolving population: iteration 2/200 fitness = 1.0
New best code (fitness = 1.0):
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mFalse[39;49;00m
[34melif[39;49;00m tag [35mand[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag)):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mif[39;49;00m [35mnot[39;49;00m tag:
tag = [34mFalse[39;49;00m
[34mreturn[39;49;00m out
Reduced code (fitness = 1.0):
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mFalse[39;49;00m
[34melif[39;49;00m tag [35mand[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag)):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mif[39;49;00m [35mnot[39;49;00m tag:
[34mreturn[39;49;00m out
###Markdown
Here is the final tree:
###Code
print_content(ast.unparse(best_tree), '.py')
###Output
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mFalse[39;49;00m
[34melif[39;49;00m tag [35mand[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag)):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mif[39;49;00m [35mnot[39;49;00m tag:
[34mreturn[39;49;00m out
###Markdown
And here is its fitness:
###Code
fitness
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
The revised candidate now passes _all_ tests (including the tricky quote test we added last). Its condition now properly checks for `tag` _and_ both quotes. (The `tag` inside the parentheses is still redundant, but so be it.) From this example, we can learn a few lessons about the possibilities and risks of automated repair:* First, automatic repair is highly dependent on the quality of the checking tests. The risk is that the repair may overspecialize towards the test.* Second, when based on "plastic surgery", automated repair is highly dependent on the sources that program fragments are chosen from. If there is a hint of a solution somewhere in the code, there is a chance that automated repair will catch it up.* Third, automatic repair is a deeply heuristic approach. Its behavior will vary widely with any change to the parameters (and the underlying random number generators).* Fourth, automatic repair can take a long time. The examples we have in this chapter take less than a minute to compute, and neither Python nor our implementation is exactly fast. But as the search space grows, automated repair will take much longer.On the other hand, even an incomplete automated repair candidate can be much better than nothing at all – it may provide all the essential ingredients (such as the location or the involved variables) for a successful fix. When users of automated repair techniques are aware of its limitations and its assumptions, there is lots of potential in automated repair. Enjoy! Limitations The `Repairer` class is tested on our example programs, but not much more. Things that do not work include* Functions with inner functions are not repaired. Synopsis This chapter provides tools and techniques for automated repair of program code. The `Repairer` class takes a `RankingDebugger` debugger as input (such as `OchiaiDebugger` from the [chapter on statistical debugging](StatisticalDebugger.ipynb). A typical setup looks like this:```pythonfrom debuggingbook.StatisticalDebugger import OchiaiDebuggerdebugger = OchiaiDebugger()for inputs in TESTCASES: with debugger: test_foo(inputs)...repairer = Repairer(debugger)```Here, `test_foo()` is a function that raises an exception if the tested function `foo()` fails. If `foo()` passes, `test_foo()` should not raise an exception. The `repair()` method of a `Repairer` searches for a repair of the code covered in the debugger (except for methods whose name starts or ends in `test`, such that `foo()`, not `test_foo()` is repaired). `repair()` returns the best fix candidate as a pair `(tree, fitness)` where `tree` is a [Python abstract syntax tree](http://docs.python.org/3/library/ast) (AST) of the fix candidate, and `fitness` is the fitness of the candidate (a value between 0 and 1). A `fitness` of 1.0 means that the candidate passed all tests. A typical usage looks like this:```pythontree, fitness = repairer.repair()print(ast.unparse(tree), fitness)``` Here is a complete example for the `middle()` program. This is the original source code of `middle()`:
###Code
# ignore
print_content(middle_source, '.py')
###Output
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
[34mif[39;49;00m y < z:
[34mif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
[34melse[39;49;00m:
[34mif[39;49;00m x > y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > z:
[34mreturn[39;49;00m x
[34mreturn[39;49;00m z
###Markdown
We set up a function `middle_test()` that tests it. The `middle_debugger` collects testcases and outcomes:
###Code
middle_debugger = OchiaiDebugger()
for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
with middle_debugger:
middle_test(x, y, z)
###Output
_____no_output_____
###Markdown
The repairer is instantiated with the debugger used (`middle_debugger`):
###Code
middle_repairer = Repairer(middle_debugger)
###Output
_____no_output_____
###Markdown
The `repair()` method of the repairer attempts to repair the function invoked by the test (`middle()`).
###Code
tree, fitness = middle_repairer.repair()
###Output
_____no_output_____
###Markdown
The returned AST `tree` can be output via `ast.unparse()`:
###Code
print(ast.unparse(tree))
###Output
def middle(x, y, z):
if y < z:
if x < y:
return y
elif x < z:
return x
elif x > y:
return y
elif x > z:
return x
return z
###Markdown
The `fitness` value shows how well the repaired program fits the tests. A fitness value of 1.0 shows that the repaired program satisfies all tests.
###Code
fitness
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
Hence, the above program indeed is a perfect repair in the sense that all previously failing tests now pass – our repair was successful. Here are the classes defined in this chapter. A `Repairer` repairs a program, using a `StatementMutator` and a `CrossoverOperator` to evolve a population of candidates.
###Code
# ignore
from ClassDiagram import display_class_hierarchy
# ignore
display_class_hierarchy([Repairer, ConditionMutator, CrossoverOperator],
abstract_classes=[
NodeVisitor,
NodeTransformer
],
public_methods=[
Repairer.__init__,
Repairer.repair,
StatementMutator.__init__,
StatementMutator.mutate,
ConditionMutator.__init__,
CrossoverOperator.__init__,
CrossoverOperator.crossover,
],
project='debuggingbook')
###Output
_____no_output_____
###Markdown
Lessons Learned* Automated repair based on genetic optimization uses five ingredients: 1. A _test suite_ to determine passing and failing tests 2. _Defect localization_ (typically obtained from [statistical debugging](StatisticalDebugger.ipynb) with the test suite) to determine potential locations to be fixed 3. _Random code mutations_ and _crossover operations_ to create and evolve a population of inputs 4. A _fitness function_ and a _selection strategy_ to determine the part of the population that should be evolved further 5. A _reducer_ such as [delta debugging](DeltaDebugger.ipynb) to simplify the final candidate with the highest fitness.* The result of automated repair is a _fix candidate_ with the highest fitness for the given tests.* A _fix candidate_ is not guaranteed to be correct or optimal, but gives important hints on how to fix the program.* All of the above ingredients offer plenty of settings and alternatives to experiment with. Background The seminal work in automated repair is [GenProg](https://squareslab.github.io/genprog-code/) \cite{LeGoues2012}, which heavily inspired our `Repairer` implementation. Major differences between GenProg and `Repairer` include:* GenProg includes its own defect localization (which is also dynamically updated), whereas `Repairer` builds on earlier statistical debugging.* GenProg can apply multiple mutations on programs (or none at all), whereas `Repairer` applies exactly one mutation.* The `StatementMutator` used by `Repairer` includes various special cases for program structures (`if`, `for`, `while`...), whereas GenProg operates on statements only.* GenProg has been tested on large production programs. While GenProg is _the_ seminal work in the area (and arguably the most important software engineering research contribution of the 2010s), there have been a number of important extensions of automated repair. These include:* *AutoFix* \cite{Pei2014} leverages _program contracts_ (pre- and postconditions) to generate tests and assertions automatically. Not only do such [assertions](Assertions.ipynb) help in fault localization, they also allow for much better validation of fix candidates.* *SemFix* \cite{Nguyen2013} and its successor *[Angelix](http://angelix.io)* \cite{Mechtaev2016} introduce automated program repair based on _symbolic analysis_ rather than genetic optimization. This allows them to leverage program semantics, which GenProg does not consider. To learn more about automated program repair, see [program-repair.org](http://program-repair.org), the community page dedicated to research in program repair. Exercises Exercise 1: Automated Repair Parameters Automated repair is influenced by a large number of design choices – the size of the population, the number of iterations, the genetic optimization strategy, and more. How do changes to these design choices affect its effectiveness? * Consider the constants defined in this chapter (such as `POPULATION_SIZE` or `WEIGHT_PASSING` vs. `WEIGHT_FAILING`). How do changes affect the effectiveness of automated repair?* As an effectiveness metric, consider the number of iterations it takes to produce a fix candidate.* Since genetic optimization is a random algorithm, you need to determine effectiveness averages over a large number of runs (say, 100).
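One possible way to approach this – a sketch under the assumption that counting `evolve()` iterations until full fitness is an adequate effectiveness metric – is to rerun the evolution loop many times and average the iteration counts:
```python
def average_iterations_to_fix(runs: int = 100, max_iterations: int = 100) -> float:
    """Sketch: average number of iterations until a perfect fix of middle()."""
    counts = []
    for _ in range(runs):
        r = Repairer(middle_debugger)
        population = r.initial_population(POPULATION_SIZE)
        iterations_needed = max_iterations
        for iteration in range(max_iterations):
            population = r.evolve(population)
            if r.fitness(population[0]) >= 1.0:
                iterations_needed = iteration + 1
                break
        counts.append(iterations_needed)
    return sum(counts) / len(counts)
```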
Exercise 2: Elitism [_Elitism_](https://en.wikipedia.org/wiki/Genetic_algorithm#Elitism) (also known as _elitist selection_) is a variant of genetic selection in which a small fraction of the fittest candidates of the last population are included unchanged in the offspring.* Implement elitist selection by subclassing the `evolve()` method. Experiment with various fractions (5%, 10%, 25%) of "elites" and see how this improves results. Exercise 3: Evolving Values Following the steps of `ConditionMutator`, implement a `ValueMutator` class that replaces one constant value by another one found in the source (say, `0` by `1` or `True` by `False`). For validation, consider the following failure in the `square_root()` function from the [chapter on assertions](Assertions.ipynb):
###Code
from Assertions import square_root # minor dependency
with ExpectError():
square_root_of_zero = square_root(0)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_62204/1107282428.py", line 2, in <module>
square_root_of_zero = square_root(0)
File "/Users/zeller/Projects/debuggingbook/notebooks/Assertions.ipynb", line 61, in square_root
guess = (approx + x / approx) / 2
ZeroDivisionError: float division by zero (expected)
###Markdown
Can your `ValueMutator` automatically fix this failure? **Solution.** Your solution will be effective if it also includes named constants such as `None`.
###Code
import math
def square_root_fixed(x): # type: ignore
assert x >= 0 # precondition
approx = 0 # <-- FIX: Change `None` to 0
guess = x / 2
while approx != guess:
approx = guess
guess = (approx + x / approx) / 2
assert math.isclose(approx * approx, x)
return approx
square_root_fixed(0)
###Output
_____no_output_____
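###Markdown
Here is one possible sketch of such a `ValueMutator`, following the pattern of `ConditionMutator`; the details (which constants to collect, in-place replacement in `swap()`) are assumptions, not a reference solution. As with `ConditionMutator`, it would be passed to `Repairer` via the `mutator_class` keyword argument.
```python
class ValueMutator(StatementMutator):
    """Mutate constant values in an AST (illustrative sketch)"""

    def __init__(self, *args: Any, **kwargs: Any) -> None:
        super().__init__(*args, **kwargs)
        # Collect all non-string constants (incl. None, True, False) from the source
        self.values = [node.value for stmt in (self.source or [])
                       for node in ast.walk(stmt)
                       if isinstance(node, ast.Constant)
                       and not isinstance(node.value, str)]
        if self.log:
            print("Found values", self.values)

    def swap(self, node: ast.AST) -> ast.AST:
        """Replace one constant in `node` by another value from the source"""
        constants = [n for n in ast.walk(node)
                     if isinstance(n, ast.Constant)
                     and not isinstance(n.value, str)]
        if not constants or not self.values:
            return super().swap(node)
        target = random.choice(constants)
        target.value = random.choice(self.values)
        return node
```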
###Markdown
Repairing Code Automatically So far, we have discussed how to track failures and how to locate defects in code. Let us now discuss how to _repair_ defects – that is, to correct the code such that the failure no longer occurs. We will discuss how to _repair code automatically_ – by systematically searching through possible fixes and evolving the most promising candidates.
###Code
from bookutils import YouTubeVideo
YouTubeVideo("UJTf7cW0idI")
###Output
_____no_output_____
###Markdown
**Prerequisites*** Re-read the [introduction to debugging](Intro_Debugging.ipynb), notably on how to properly fix code.* We make use of automatic fault localization, as discussed in the [chapter on statistical debugging](StatisticalDebugger.ipynb).* We make extensive use of code transformations, as discussed in the [chapter on tracing executions](Tracer.ipynb).* We make use of [delta debugging](DeltaDebugger.ipynb).
###Code
import bookutils
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.Repairer import ```and then make use of the following features.This chapter provides tools and techniques for automated repair of program code. The `Repairer` class takes a `RankingDebugger` debugger as input (such as `OchiaiDebugger` from the [chapter on statistical debugging](StatisticalDebugger.ipynb). A typical setup looks like this:```pythonfrom debuggingbook.StatisticalDebugger import OchiaiDebuggerdebugger = OchiaiDebugger()for inputs in TESTCASES: with debugger: test_foo(inputs)...repairer = Repairer(debugger)```Here, `test_foo()` is a function that raises an exception if the tested function `foo()` fails. If `foo()` passes, `test_foo()` should not raise an exception.The `repair()` method of a `Repairer` searches for a repair of the code covered in the debugger (except for methods whose name starts or ends in `test`, such that `foo()`, not `test_foo()` is repaired). `repair()` returns the best fix candidate as a pair `(tree, fitness)` where `tree` is a [Python abstract syntax tree](http://docs.python.org/3/library/ast) (AST) of the fix candidate, and `fitness` is the fitness of the candidate (a value between 0 and 1). A `fitness` of 1.0 means that the candidate passed all tests. A typical usage looks like this:```pythontree, fitness = repairer.repair()print(ast.unparse(tree), fitness)```Here is a complete example for the `middle()` program. This is the original source code of `middle()`:```pythondef middle(x, y, z): type: ignore if y < z: if x < y: return y elif x < z: return y else: if x > y: return y elif x > z: return x return z```We set up a function `middle_test()` that tests it. The `middle_debugger` collects testcases and outcomes:```python>>> middle_debugger = OchiaiDebugger()>>> for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:>>> with middle_debugger:>>> middle_test(x, y, z)```The repairer is instantiated with the debugger used (`middle_debugger`):```python>>> middle_repairer = Repairer(middle_debugger)```The `repair()` method of the repairer attempts to repair the function invoked by the test (`middle()`).```python>>> tree, fitness = middle_repairer.repair()```The returned AST `tree` can be output via `ast.unparse()`:```python>>> print(ast.unparse(tree))def middle(x, y, z): if y < z: if x < y: return y elif x < z: return x elif x > y: return y elif x > z: return x return z```The `fitness` value shows how well the repaired program fits the tests. A fitness value of 1.0 shows that the repaired program satisfies all tests.```python>>> fitness1.0```Hence, the above program indeed is a perfect repair in the sense that all previously failing tests now pass – our repair was successful.Here are the classes defined in this chapter. A `Repairer` repairs a program, using a `StatementMutator` and a `CrossoverOperator` to evolve a population of candidates. Automatic Code RepairsSo far, we have discussed how to locate defects in code, how to track failures back to the defects that caused them, and how to systematically determine failure conditions. Let us now address the last step in debugging – namely, how to _automatically fix code_.Already in the [introduction to debugging](Intro_Debugging.ipynb), we have discussed how to fix code manually. Notably, we have established that a _diagnosis_ (which induces a fix) should show _causality_ (i.e., how the defect causes the failure) and _incorrectness_ (how the defect is wrong). 
Is it possible to obtain such a diagnosis automatically? In this chapter, we introduce a technique of _automatic code repair_ – that is, for a given failure, automatically determine a fix that makes the failure go away. To do so, we randomly (but systematically) _mutate_ the program code – that is, insert, change, and delete fragments – until we find a change that actually causes the failing test to pass. If this sounds like an audacious idea, that is because it is. But not only is _automated program repair_ one of the hottest topics of software research in the last decade, it is also being increasingly deployed in industry. At Facebook, for instance, every failing test report comes with an automatically generated _repair suggestion_ – a suggestion that already has been validated to work. Programmers can apply the suggestion as is or use it as basis for their own fixes. The middle() Function Let us introduce our ongoing example. In the [chapter on statistical debugging](StatisticalDebugger.ipynb), we have introduced the `middle()` function – a function that returns the "middle" of three numbers `x`, `y`, and `z`:
###Code
from StatisticalDebugger import middle
# ignore
from bookutils import print_content
# ignore
import inspect
# ignore
_, first_lineno = inspect.getsourcelines(middle)
middle_source = inspect.getsource(middle)
print_content(middle_source, '.py', start_line_number=first_lineno)
###Output
708 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
709 [34mif[39;49;00m y < z:
710 [34mif[39;49;00m x < y:
711 [34mreturn[39;49;00m y
712 [34melif[39;49;00m x < z:
713 [34mreturn[39;49;00m y
714 [34melse[39;49;00m:
715 [34mif[39;49;00m x > y:
716 [34mreturn[39;49;00m y
717 [34melif[39;49;00m x > z:
718 [34mreturn[39;49;00m x
719 [34mreturn[39;49;00m z
###Markdown
In most cases, `middle()` just runs fine:
###Code
middle(4, 5, 6)
###Output
_____no_output_____
###Markdown
In some other cases, though, it does not work correctly:
###Code
middle(2, 1, 3)
###Output
_____no_output_____
###Markdown
Validated Repairs Now, if we only want a repair that fixes this one given failure, this would be very easy. All we have to do is to replace the entire body by a single statement:
###Code
def middle_sort_of_fixed(x, y, z): # type: ignore
return x
###Output
_____no_output_____
###Markdown
You will concur that the failure no longer occurs:
###Code
middle_sort_of_fixed(2, 1, 3)
###Output
_____no_output_____
###Markdown
But this, of course, is not the aim of automatic fixes, nor of fixes in general: We want our fixes not only to make the given failure go away, but we also want the resulting code to be _correct_ (which, of course, is a lot harder). Automatic repair techniques therefore assume the existence of a _test suite_ that can check whether an implementation satisfies its requirements. Better yet, one can use the test suite to gradually check _how close_ one is to perfection: A piece of code that satisfies 99% of all tests is better than one that satisfies ~33% of all tests, as `middle_sort_of_fixed()` would do (assuming the test suite evenly checks the input space). Genetic Optimization The common approach for automatic repair follows the principle of _genetic optimization_. Roughly spoken, genetic optimization is a _metaheuristic_ inspired by the process of _natural selection_. The idea is to _evolve_ a selection of _candidate solutions_ towards a maximum _fitness_:1. Have a selection of _candidates_.2. Determine the _fitness_ of each candidate.3. Retain those candidates with the _highest fitness_.4. Create new candidates from the retained candidates, by applying genetic operations: * _Mutation_ mutates some aspect of a candidate. * _CrossoverOperator_ creates new candidates combining features of two candidates.5. Repeat until an optimal solution is found. Applied for automated program repair, this means the following steps:1. Have a _test suite_ with both failing and passing tests that helps asserting correctness of possible solutions.2. With the test suite, use [fault localization](StatisticalDebugger.ipynb) to determine potential code locations to be fixed.3. Systematically _mutate_ the code (by adding, changing, or deleting code) and _cross_ code to create possible fix candidates.4. Identify the _fittest_ fix candidates – that is, those that satisfy the most tests.5. _Evolve_ the fittest candidates until a perfect fix is found, or until time resources are depleted. Let us illustrate these steps in the following sections. A Test Suite In automated repair, the larger and the more thorough the test suite, the higher the quality of the resulting fix (if any). Hence, if we want to repair `middle()` automatically, we need a good test suite – with good inputs, but also with good checks. Note that running the test suite commonly takes the most time of automated repair, so a large test suite also comes with extra cost. Let us first focus on achieving high-quality repairs. Hence, we will use the extensive test suites introduced in the [chapter on statistical debugging](StatisticalDebugger.ipynb):
###Code
from StatisticalDebugger import MIDDLE_PASSING_TESTCASES, MIDDLE_FAILING_TESTCASES
###Output
_____no_output_____
###Markdown
The `middle_test()` function fails whenever `middle()` returns an incorrect result:
###Code
def middle_test(x: int, y: int, z: int) -> None:
m = middle(x, y, z)
assert m == sorted([x, y, z])[1]
from ExpectError import ExpectError
with ExpectError():
middle_test(2, 1, 3)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14031/3661663124.py", line 2, in <module>
middle_test(2, 1, 3)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14031/40742806.py", line 3, in middle_test
assert m == sorted([x, y, z])[1]
AssertionError (expected)
###Markdown
Locating the Defect Our next step is to find potential defect locations – that is, those locations in the code our mutations should focus upon. Since we already do have two test suites, we can make use of [statistical debugging](StatisticalDebugger.ipynb) to identify likely faulty locations. Our `OchiaiDebugger` ranks individual code lines by how frequently they are executed in failing runs (and not in passing runs).
###Code
from StatisticalDebugger import OchiaiDebugger, RankingDebugger
middle_debugger = OchiaiDebugger()
for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
with middle_debugger:
middle_test(x, y, z)
###Output
_____no_output_____
###Markdown
We see that the upper half of the `middle()` code is definitely more suspicious:
###Code
middle_debugger
###Output
_____no_output_____
###Markdown
The most suspicious line is:
###Code
# ignore
location = middle_debugger.rank()[0]
(func_name, lineno) = location
lines, first_lineno = inspect.getsourcelines(middle)
print(lineno, end="")
print_content(lines[lineno - first_lineno], '.py')
###Output
713 [34mreturn[39;49;00m y
###Markdown
with a suspiciousness of:
###Code
# ignore
middle_debugger.suspiciousness(location)
###Output
_____no_output_____
###Markdown
Random Code Mutations Our third step in automatic code repair is to _randomly mutate the code_. Specifically, we want to randomly _delete_, _insert_, and _replace_ statements in the program to be repaired. However, simply synthesizing code _from scratch_ is unlikely to yield anything meaningful – the number of combinations is simply far too high. Already for a three-character identifier name, we have more than 200,000 combinations:
###Code
import string
string.ascii_letters
len(string.ascii_letters + '_') * \
len(string.ascii_letters + '_' + string.digits) * \
len(string.ascii_letters + '_' + string.digits)
###Output
_____no_output_____
###Markdown
Hence, we do _not_ synthesize code from scratch, but instead _reuse_ elements from the program to be fixed, hypothesizing that "a program that contains an error in one area likely implements the correct behavior elsewhere" \cite{LeGoues2012}. This insight has been dubbed the *plastic surgery hypothesis*: content of new code can often be assembled out of fragments of code that already exist in the code base \cite{Barr2014}. For our "plastic surgery", we do not operate on a _textual_ representation of the program, but rather on a _structural_ representation, which by construction allows us to avoid lexical and syntactical errors in the first place. This structural representation is the _abstract syntax tree_ (AST), which we already have seen in various chapters, such as the [chapter on delta debugging](DeltaDebugger.ipynb), the [chapter on tracing](Tracer.ipynb), and excessively in the [chapter on slicing](Slicer.ipynb). The [official Python `ast` reference](http://docs.python.org/3/library/ast) is complete, but a bit brief; the documentation ["Green Tree Snakes - the missing Python AST docs"](https://greentreesnakes.readthedocs.io/en/latest/) provides an excellent introduction. Recapitulating, an AST is a tree representation of the program, showing a hierarchical structure of the program's elements. Here is the AST for our `middle()` function.
###Code
import ast
import inspect
from bookutils import print_content, show_ast
def middle_tree() -> ast.AST:
return ast.parse(inspect.getsource(middle))
show_ast(middle_tree())
###Output
_____no_output_____
###Markdown
You see that it consists of one function definition (`FunctionDef`) with three `arguments` and two statements – one `If` and one `Return`. Each `If` subtree has three branches – one for the condition (`test`), one for the body to be executed if the condition is true (`body`), and one for the `else` case (`orelse`). The `body` and `orelse` branches again are lists of statements. An AST can also be shown as text, which is more compact, yet reveals more information. `ast.dump()` gives not only the class names of elements, but also how they are constructed – actually, the whole expression can be used to construct an AST.
###Code
print(ast.dump(middle_tree()))
###Output
Module(body=[FunctionDef(name='middle', args=arguments(posonlyargs=[], args=[arg(arg='x'), arg(arg='y'), arg(arg='z')], kwonlyargs=[], kw_defaults=[], defaults=[]), body=[If(test=Compare(left=Name(id='y', ctx=Load()), ops=[Lt()], comparators=[Name(id='z', ctx=Load())]), body=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Lt()], comparators=[Name(id='y', ctx=Load())]), body=[Return(value=Name(id='y', ctx=Load()))], orelse=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Lt()], comparators=[Name(id='z', ctx=Load())]), body=[Return(value=Name(id='y', ctx=Load()))], orelse=[])])], orelse=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Gt()], comparators=[Name(id='y', ctx=Load())]), body=[Return(value=Name(id='y', ctx=Load()))], orelse=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Gt()], comparators=[Name(id='z', ctx=Load())]), body=[Return(value=Name(id='x', ctx=Load()))], orelse=[])])]), Return(value=Name(id='z', ctx=Load()))], decorator_list=[])], type_ignores=[])
###Markdown
This is the path to the first `return` statement:
###Code
ast.dump(middle_tree().body[0].body[0].body[0].body[0]) # type: ignore
###Output
_____no_output_____
###Markdown
Picking Statements For our mutation operators, we want to use statements from the program itself. Hence, we need a means to find those very statements. The `StatementVisitor` class iterates through an AST, adding all statements it finds in function definitions to its `statements` list. To do so, it subclasses the Python `ast` `NodeVisitor` class, described in the [official Python `ast` reference](http://docs.python.org/3/library/ast).
###Code
from ast import NodeVisitor
# ignore
from typing import Any, Callable, Optional, Type, Tuple
from typing import Dict, Union, Set, List, cast
class StatementVisitor(NodeVisitor):
"""Visit all statements within function defs in an AST"""
def __init__(self) -> None:
self.statements: List[Tuple[ast.AST, str]] = []
self.func_name = ""
self.statements_seen: Set[Tuple[ast.AST, str]] = set()
super().__init__()
def add_statements(self, node: ast.AST, attr: str) -> None:
elems: List[ast.AST] = getattr(node, attr, [])
if not isinstance(elems, list):
elems = [elems] # type: ignore
for elem in elems:
stmt = (elem, self.func_name)
if stmt in self.statements_seen:
continue
self.statements.append(stmt)
self.statements_seen.add(stmt)
def visit_node(self, node: ast.AST) -> None:
# Any node other than the ones listed below
self.add_statements(node, 'body')
self.add_statements(node, 'orelse')
def visit_Module(self, node: ast.Module) -> None:
# Module children are defs, classes and globals - don't add
super().generic_visit(node)
def visit_ClassDef(self, node: ast.ClassDef) -> None:
# Class children are defs and globals - don't add
super().generic_visit(node)
def generic_visit(self, node: ast.AST) -> None:
self.visit_node(node)
super().generic_visit(node)
def visit_FunctionDef(self,
node: Union[ast.FunctionDef, ast.AsyncFunctionDef]) -> None:
if not self.func_name:
self.func_name = node.name
self.visit_node(node)
super().generic_visit(node)
self.func_name = ""
def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None:
return self.visit_FunctionDef(node)
###Output
_____no_output_____
###Markdown
The function `all_statements()` returns all statements in the given AST `tree`; its companion `all_statements_and_functions()` additionally reports the name of the function each statement belongs to. If an `ast` class `tp` is given, only instances of that class are returned.
###Code
def all_statements_and_functions(tree: ast.AST,
tp: Optional[Type] = None) -> \
List[Tuple[ast.AST, str]]:
"""
Return a list of pairs (`statement`, `function`) for all statements in `tree`.
If `tp` is given, return only statements of that class.
"""
visitor = StatementVisitor()
visitor.visit(tree)
statements = visitor.statements
if tp is not None:
statements = [s for s in statements if isinstance(s[0], tp)]
return statements
def all_statements(tree: ast.AST, tp: Optional[Type] = None) -> List[ast.AST]:
"""
Return a list of all statements in `tree`.
If `tp` is given, return only statements of that class.
"""
return [stmt for stmt, func_name in all_statements_and_functions(tree, tp)]
###Output
_____no_output_____
###Markdown
Here are all the `return` statements in `middle()`, followed by all `if` statements together with the name of the function they occur in:
###Code
all_statements(middle_tree(), ast.Return)
all_statements_and_functions(middle_tree(), ast.If)
###Output
_____no_output_____
###Markdown
We can randomly pick an element:
###Code
import random
random_node = random.choice(all_statements(middle_tree()))
ast.unparse(random_node)
###Output
_____no_output_____
###Markdown
Mutating Statements The main part of mutation, however, is to actually mutate the code of the program under test. To this end, we introduce a `StatementMutator` class – a subclass of `NodeTransformer`, described in the [official Python `ast` reference](http://docs.python.org/3/library/ast). The constructor provides various keyword arguments to configure the mutator.
###Code
from ast import NodeTransformer
import copy
class StatementMutator(NodeTransformer):
"""Mutate statements in an AST for automated repair."""
def __init__(self,
suspiciousness_func:
Optional[Callable[[Tuple[Callable, int]], float]] = None,
source: Optional[List[ast.AST]] = None,
log: bool = False) -> None:
"""
Constructor.
`suspiciousness_func` is a function that takes a location
(function, line_number) and returns a suspiciousness value
between 0 and 1.0. If not given, all locations get the same
suspiciousness of 1.0.
`source` is a list of statements to choose from.
"""
super().__init__()
self.log = log
if suspiciousness_func is None:
def suspiciousness_func(location: Tuple[Callable, int]) -> float:
return 1.0
assert suspiciousness_func is not None
self.suspiciousness_func: Callable = suspiciousness_func
if source is None:
source = []
self.source = source
if self.log > 1:
for i, node in enumerate(self.source):
print(f"Source for repairs #{i}:")
print_content(ast.unparse(node), '.py')
print()
print()
self.mutations = 0
###Output
_____no_output_____
###Markdown
Choosing Suspicious Statements to Mutate We start by deciding which AST nodes to mutate. The method `node_suspiciousness()` returns the suspiciousness for a given node, by invoking the suspiciousness function `suspiciousness_func` given during initialization.
###Code
import warnings
class StatementMutator(StatementMutator):
def node_suspiciousness(self, stmt: ast.AST, func_name: str) -> float:
if not hasattr(stmt, 'lineno'):
warnings.warn(f"{self.format_node(stmt)}: Expected line number")
return 0.0
suspiciousness = self.suspiciousness_func((func_name, stmt.lineno))
if suspiciousness is None: # not executed
return 0.0
return suspiciousness
def format_node(self, node: ast.AST) -> str:
...
###Output
_____no_output_____
###Markdown
The method `node_to_be_mutated()` picks a node (statement) to be mutated. It determines the suspiciousness of all statements, and invokes `random.choices()`, using the suspiciousness as weight. Unsuspicious statements (with zero weight) will not be chosen.
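To get a feel for this weighted choice, here is a small sketch (with made-up weights standing in for suspiciousness values, not part of the repair code): a statement weighted 0.9 is drawn about nine times as often as one weighted 0.1, and a statement with weight zero is never drawn.
```python
import random
from collections import Counter

# Hypothetical statements 's1'..'s3' with hypothetical suspiciousness weights
Counter(random.choices(['s1', 's2', 's3'], weights=[0.9, 0.1, 0.0], k=1000))
# e.g. Counter({'s1': 897, 's2': 103})
```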
###Code
class StatementMutator(StatementMutator):
def node_to_be_mutated(self, tree: ast.AST) -> ast.AST:
statements = all_statements_and_functions(tree)
assert len(statements) > 0, "No statements"
weights = [self.node_suspiciousness(stmt, func_name)
for stmt, func_name in statements]
stmts = [stmt for stmt, func_name in statements]
if self.log > 1:
print("Weights:")
for i, stmt in enumerate(statements):
node, func_name = stmt
print(f"{weights[i]:.2} {self.format_node(node)}")
if sum(weights) == 0.0:
# No suspicious line
return random.choice(stmts)
else:
return random.choices(stmts, weights=weights)[0]
###Output
_____no_output_____
###Markdown
Choosing a Mutation Method The method `visit()` is invoked on all nodes. For nodes marked with a `mutate_me` attribute, it randomly chooses a mutation method (`choose_op()`) and then invokes it on the node. According to the rules of `NodeTransformer`, the mutation method can return

* a new node or a list of nodes, replacing the current node;
* `None`, deleting it; or
* the node itself, keeping things as they are.
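As a minimal, hypothetical illustration of these rules (not part of the repair code), here is a transformer whose visitor method returns `None`, thereby deleting every `pass` statement it visits; returning a new node instead would replace the statement, and returning `node` unchanged would keep it.
```python
import ast

class PassRemover(ast.NodeTransformer):
    def visit_Pass(self, node: ast.Pass) -> None:
        return None  # returning None removes the node from its parent

demo_tree = ast.parse("x = 1\npass\ny = 2")
print(ast.unparse(PassRemover().visit(demo_tree)))
# x = 1
# y = 2
```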
###Code
import re
RE_SPACE = re.compile(r'[ \t\n]+')
class StatementMutator(StatementMutator):
def choose_op(self) -> Callable:
return random.choice([self.insert, self.swap, self.delete])
def visit(self, node: ast.AST) -> ast.AST:
super().visit(node) # Visits (and transforms?) children
if not node.mutate_me: # type: ignore
return node
op = self.choose_op()
new_node = op(node)
self.mutations += 1
if self.log:
print(f"{node.lineno:4}:{op.__name__ + ':':7} "
f"{self.format_node(node)} "
f"becomes {self.format_node(new_node)}")
return new_node
###Output
_____no_output_____
###Markdown
Swapping Statements Our first mutator is `swap()`, which replaces the current node `NODE` by a random node found in `source` (using a newly defined `choose_statement()`). As a rule of thumb, we try to avoid inserting entire subtrees with all attached statements; instead, we respect only the first line of a node. If the new node has the form
```python
if P:
    BODY
```
we thus only insert
```python
if P:
    pass
```
since the statements in `BODY` have a later chance to get inserted. The same holds for all constructs that have a `BODY`, i.e. `while`, `for`, `try`, `with`, and more.
###Code
class StatementMutator(StatementMutator):
def choose_statement(self) -> ast.AST:
return copy.deepcopy(random.choice(self.source))
class StatementMutator(StatementMutator):
def swap(self, node: ast.AST) -> ast.AST:
"""Replace `node` with a random node from `source`"""
new_node = self.choose_statement()
if isinstance(new_node, ast.stmt):
# The source `if P: X` is added as `if P: pass`
if hasattr(new_node, 'body'):
new_node.body = [ast.Pass()] # type: ignore
if hasattr(new_node, 'orelse'):
new_node.orelse = [] # type: ignore
if hasattr(new_node, 'finalbody'):
new_node.finalbody = [] # type: ignore
# ast.copy_location(new_node, node)
return new_node
###Output
_____no_output_____
###Markdown
Inserting Statements Our next mutator is `insert()`, which randomly chooses some node from `source` and inserts it after the current node `NODE`. (If `NODE` is a `return` statement, then we insert the new node _before_ `NODE`.) If the statement to be inserted has the form
```python
if P:
    BODY
```
we only insert the "header" of the `if`, resulting in
```python
if P:
    NODE
```
Again, this applies to all constructs that have a `BODY`, i.e., `while`, `for`, `try`, `with`, and more.
###Code
class StatementMutator(StatementMutator):
def insert(self, node: ast.AST) -> Union[ast.AST, List[ast.AST]]:
"""Insert a random node from `source` after `node`"""
new_node = self.choose_statement()
if isinstance(new_node, ast.stmt) and hasattr(new_node, 'body'):
# Inserting `if P: X` as `if P:`
new_node.body = [node] # type: ignore
if hasattr(new_node, 'orelse'):
new_node.orelse = [] # type: ignore
if hasattr(new_node, 'finalbody'):
new_node.finalbody = [] # type: ignore
# ast.copy_location(new_node, node)
return new_node
# Only insert before `return`, not after it
if isinstance(node, ast.Return):
if isinstance(new_node, ast.Return):
return new_node
else:
return [new_node, node]
return [node, new_node]
###Output
_____no_output_____
###Markdown
Deleting Statements Our last mutator is `delete()`, which deletes the current node `NODE`. The standard case is to replace `NODE` by a `pass` statement. If the statement to be deleted has the form
```python
if P:
    BODY
```
we only delete the "header" of the `if`, resulting in
```python
BODY
```
Again, this applies to all constructs that have a `BODY`, i.e., `while`, `for`, `try`, `with`, and more. If the statement to be deleted has multiple branches, a random branch is chosen (e.g., the `else` branch of an `if` statement).
###Code
class StatementMutator(StatementMutator):
    def delete(self, node: ast.AST) -> Union[ast.AST, List[ast.AST], None]:
"""Delete `node`."""
branches = [attr for attr in ['body', 'orelse', 'finalbody']
if hasattr(node, attr) and getattr(node, attr)]
if branches:
# Replace `if P: S` by `S`
branch = random.choice(branches)
new_node = getattr(node, branch)
return new_node
if isinstance(node, ast.stmt):
# Avoid empty bodies; make this a `pass` statement
new_node = ast.Pass()
ast.copy_location(new_node, node)
return new_node
return None # Just delete
from bookutils import quiz
quiz("Why are statements replaced by `pass` rather than deleted?",
[
"Because `if P: pass` is valid Python, while `if P:` is not",
"Because in Python, bodies for `if`, `while`, etc. cannot be empty",
"Because a `pass` node makes a target for future mutations",
"Because it causes the tests to pass"
], '[3 ^ n for n in range(3)]')
###Output
_____no_output_____
###Markdown
Indeed, Python's `compile()` will fail if any of the bodies is an empty list. Also, it leaves us a statement that can be evolved further. Helpers For logging purposes, we introduce a helper function `format_node()` that returns a short string representation of the node.
###Code
class StatementMutator(StatementMutator):
NODE_MAX_LENGTH = 20
def format_node(self, node: ast.AST) -> str:
"""Return a string representation for `node`."""
if node is None:
return "None"
if isinstance(node, list):
return "; ".join(self.format_node(elem) for elem in node)
s = RE_SPACE.sub(' ', ast.unparse(node)).strip()
if len(s) > self.NODE_MAX_LENGTH - len("..."):
s = s[:self.NODE_MAX_LENGTH] + "..."
return repr(s)
###Output
_____no_output_____
###Markdown
All Together Let us now create the main entry point, which is `mutate()`. It picks the node to be mutated and marks it with a `mutate_me` attribute. By calling `visit()`, it then sets off the `NodeTransformer` transformation.
###Code
class StatementMutator(StatementMutator):
def mutate(self, tree: ast.AST) -> ast.AST:
"""Mutate the given AST `tree` in place. Return mutated tree."""
assert isinstance(tree, ast.AST)
tree = copy.deepcopy(tree)
if not self.source:
self.source = all_statements(tree)
for node in ast.walk(tree):
node.mutate_me = False # type: ignore
node = self.node_to_be_mutated(tree)
node.mutate_me = True # type: ignore
self.mutations = 0
tree = self.visit(tree)
if self.mutations == 0:
warnings.warn("No mutations found")
ast.fix_missing_locations(tree)
return tree
###Output
_____no_output_____
###Markdown
Here are a number of transformations applied by `StatementMutator`:
###Code
mutator = StatementMutator(log=True)
for i in range(10):
new_tree = mutator.mutate(middle_tree())
###Output
9:insert: 'return y' becomes 'return y'
8:insert: 'if x > y: return y e...' becomes 'if x < y: if x > y: ...'
12:insert: 'return z' becomes 'if y < z: return z...'
3:swap: 'if x < y: return y e...' becomes 'return x'
3:swap: 'if x < y: return y e...' becomes 'return z'
3:swap: 'if x < y: return y e...' becomes 'return x'
11:swap: 'return x' becomes 'return y'
10:insert: 'if x > z: return x...' becomes 'if x > z: return x...'; 'return z'
12:delete: 'return z' becomes 'pass'
8:swap: 'if x > y: return y e...' becomes 'if y < z: pass'
###Markdown
This is the effect of the last mutator applied on `middle`:
###Code
print_content(ast.unparse(new_tree), '.py')
###Output
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
[34mif[39;49;00m y < z:
[34mif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
[34melif[39;49;00m y < z:
[34mpass[39;49;00m
[34mreturn[39;49;00m z
###Markdown
Fitness Now that we can apply random mutations to code, let us find out how good these mutations are. Given our test suites for `middle`, we can check for a given code candidate how many of the previously passing test cases it passes, and how many of the failing test cases it passes. The more tests pass, the higher the _fitness_ of the candidate. Not all passing tests have the same value, though. We want to prevent _regressions_ – that is, having a fix that breaks a previously passing test. The values of `WEIGHT_PASSING` and `WEIGHT_FAILING` set the relative weight (or importance) of passing vs. failing tests; we see that keeping passing tests passing is far more important than fixing failing tests.
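As a small worked example (using the weights defined in the next cell): a hypothetical candidate that keeps all passing tests passing but fixes only half of the failing tests scores far better than one that fixes every failing test but breaks half of the passing ones.
```python
0.99 * 1.0 + 0.01 * 0.5  # keeps all passing tests, fixes half the failing ones: 0.995
0.99 * 0.5 + 0.01 * 1.0  # breaks half the passing tests, fixes all failing ones: 0.505
```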
###Code
WEIGHT_PASSING = 0.99
WEIGHT_FAILING = 0.01
def middle_fitness(tree: ast.AST) -> float:
"""Compute fitness of a `middle()` candidate given in `tree`"""
original_middle = middle
try:
code = compile(tree, '<fitness>', 'exec')
except ValueError:
return 0 # Compilation error
exec(code, globals())
passing_passed = 0
failing_passed = 0
# Test how many of the passing runs pass
for x, y, z in MIDDLE_PASSING_TESTCASES:
try:
middle_test(x, y, z)
passing_passed += 1
except AssertionError:
pass
passing_ratio = passing_passed / len(MIDDLE_PASSING_TESTCASES)
# Test how many of the failing runs pass
for x, y, z in MIDDLE_FAILING_TESTCASES:
try:
middle_test(x, y, z)
failing_passed += 1
except AssertionError:
pass
failing_ratio = failing_passed / len(MIDDLE_FAILING_TESTCASES)
fitness = (WEIGHT_PASSING * passing_ratio +
WEIGHT_FAILING * failing_ratio)
globals()['middle'] = original_middle
return fitness
###Output
_____no_output_____
###Markdown
Our faulty `middle()` program has a fitness of `WEIGHT_PASSING` (99%), because it passes all the passing tests (but none of the failing ones).
###Code
middle_fitness(middle_tree())
###Output
_____no_output_____
###Markdown
Our "sort of fixed" version of `middle()` gets a much lower fitness:
###Code
middle_fitness(ast.parse("def middle(x, y, z): return x"))
###Output
_____no_output_____
###Markdown
In the [chapter on statistical debugging](StatisticalDebugger.ipynb), we also defined a fixed version of `middle()`. This gets a fitness of 1.0, passing all tests. (We won't use this fixed version for automated repairs.)
###Code
from StatisticalDebugger import middle_fixed
middle_fixed_source = \
inspect.getsource(middle_fixed).replace('middle_fixed', 'middle').strip()
middle_fitness(ast.parse(middle_fixed_source))
###Output
_____no_output_____
###Markdown
Population We now set up a _population_ of fix candidates to evolve over time. A higher population size will yield more candidates to check, but also needs more time to test; a lower population size will yield fewer candidates, but allows for more evolution steps. We choose a population size of 40 (from \cite{LeGoues2012}).
###Code
POPULATION_SIZE = 40
middle_mutator = StatementMutator()
MIDDLE_POPULATION = [middle_tree()] + \
[middle_mutator.mutate(middle_tree()) for i in range(POPULATION_SIZE - 1)]
###Output
_____no_output_____
###Markdown
We sort the fix candidates according to their fitness. This actually runs all tests on all candidates.
###Code
MIDDLE_POPULATION.sort(key=middle_fitness, reverse=True)
###Output
_____no_output_____
###Markdown
The candidate with the highest fitness is still our original (faulty) `middle()` code:
###Code
print(ast.unparse(MIDDLE_POPULATION[0]),
middle_fitness(MIDDLE_POPULATION[0]))
###Output
def middle(x, y, z):
if y < z:
if x < y:
return y
elif x < z:
return y
elif x > y:
return y
elif x > z:
return x
return z 0.99
###Markdown
At the other end of the spectrum, the candidate with the lowest fitness has some vital functionality removed:
###Code
print(ast.unparse(MIDDLE_POPULATION[-1]),
middle_fitness(MIDDLE_POPULATION[-1]))
###Output
def middle(x, y, z):
if y < z:
if x < y:
return y
elif x < z:
return y
else:
return y
return z 0.5445
###Markdown
Evolution To evolve our population of candidates, we fill it up with mutated copies of its members, using a `StatementMutator` as described above to create these mutations. Then we reduce the population to its original size, keeping the fittest candidates.
###Code
def evolve_middle() -> None:
global MIDDLE_POPULATION
source = all_statements(middle_tree())
mutator = StatementMutator(source=source)
n = len(MIDDLE_POPULATION)
offspring: List[ast.AST] = []
while len(offspring) < n:
parent = random.choice(MIDDLE_POPULATION)
offspring.append(mutator.mutate(parent))
MIDDLE_POPULATION += offspring
MIDDLE_POPULATION.sort(key=middle_fitness, reverse=True)
MIDDLE_POPULATION = MIDDLE_POPULATION[:n]
###Output
_____no_output_____
###Markdown
This is what happens when evolving our population for the first time; the original source is still our best candidate.
###Code
evolve_middle()
tree = MIDDLE_POPULATION[0]
print(ast.unparse(tree), middle_fitness(tree))
# docassert
assert middle_fitness(tree) < 1.0
###Output
_____no_output_____
###Markdown
However, nothing keeps us from evolving for a few generations more...
###Code
for i in range(50):
evolve_middle()
best_middle_tree = MIDDLE_POPULATION[0]
fitness = middle_fitness(best_middle_tree)
print(f"\rIteration {i:2}: fitness = {fitness} ", end="")
if fitness >= 1.0:
break
# docassert
assert middle_fitness(best_middle_tree) >= 1.0
###Output
_____no_output_____
###Markdown
Success! We find a candidate that actually passes all tests, including the failing ones. Here is the candidate:
###Code
print_content(ast.unparse(best_middle_tree), '.py', start_line_number=1)
###Output
1 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
2 [34mif[39;49;00m y < z:
3 [34mif[39;49;00m x < y:
4 [34mif[39;49;00m x < z:
5 [34mreturn[39;49;00m y
6 [34melif[39;49;00m x < z:
7 [34mreturn[39;49;00m x
8 [34melif[39;49;00m x > y:
9 [34mreturn[39;49;00m y
10 [34melse[39;49;00m:
11 [34mif[39;49;00m x > z:
12 [34mreturn[39;49;00m x
13 [34mreturn[39;49;00m z
14 [34mreturn[39;49;00m z
###Markdown
... and yes, it passes all tests:
###Code
original_middle = middle
code = compile(best_middle_tree, '<string>', 'exec')
exec(code, globals())
for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
middle_test(x, y, z)
middle = original_middle
###Output
_____no_output_____
###Markdown
As the code is already validated by hundreds of test cases, it is very valuable for the programmer. Even if the programmer decides not to use the code as is, the location gives very strong hints on which code to examine and where to apply a fix. However, a closer look at our fix candidate shows that there is some amount of redundancy – that is, superfluous statements.
###Code
quiz("Some of the lines in our fix candidate are redundant. "
"Which are these?",
[
"Line 3: `if x < y:`",
"Line 4: `if x < z:`",
"Line 5: `return y`",
"Line 13: `return z`"
], '[eval(chr(100 - x)) for x in [48, 50]]')
###Output
_____no_output_____
###Markdown
Simplifying As demonstrated in the chapter on [reducing failure-inducing inputs](DeltaDebugger.ipynb), we can use delta debugging on code to get rid of these superfluous statements. The trick for simplification is to have the test function (`test_middle_lines()`) declare a fitness of 1.0 as a "failure". Delta debugging will then simplify the input as long as the "failure" (and hence the maximum fitness obtained) persists.
###Code
from DeltaDebugger import DeltaDebugger
middle_lines = ast.unparse(best_middle_tree).strip().split('\n')
def test_middle_lines(lines: List[str]) -> None:
source = "\n".join(lines)
tree = ast.parse(source)
assert middle_fitness(tree) < 1.0 # "Fail" only while fitness is 1.0
with DeltaDebugger() as dd:
test_middle_lines(middle_lines)
reduced_lines = dd.min_args()['lines']
reduced_source = "\n".join(reduced_lines)
repaired_source = ast.unparse(ast.parse(reduced_source)) # normalize
print_content(repaired_source, '.py')
# docassert
assert len(reduced_lines) < len(middle_lines)
###Output
_____no_output_____
###Markdown
Success! Delta Debugging has eliminated the superfluous statements. We can present the difference to the original as a patch:
###Code
original_source = ast.unparse(ast.parse(middle_source)) # normalize
from ChangeDebugger import diff, print_patch # minor dependency
for patch in diff(original_source, repaired_source):
print_patch(patch)
###Output
@@ -[34m87[39;49;00m,[34m37[39;49;00m +[34m87[39;49;00m,[34m37[39;49;00m @@
x < z:
- [34mreturn[39;49;00m y
+ [34mreturn[39;49;00m x
[34melif[39;49;00m
###Markdown
We can present this patch to the programmer, who will then immediately know what to fix in the `middle()` code. Crossover So far, we have only applied one kind of genetic operator – mutation. There is a second one, though, also inspired by natural selection. The *crossover* operation mutates two strands of genes, as illustrated in the following picture. We have two parents (red and blue), each as a sequence of genes. To create "crossed" children, we pick a _crossover point_ and exchange the strands at this very point: We provide a `CrossoverOperator` class that implements such an operation on two randomly chosen statement lists of two programs. It is used as
```python
crossover = CrossoverOperator()
crossover.crossover(tree_p1, tree_p2)
```
where `tree_p1` and `tree_p2` are two ASTs that are changed in place. Excursion: Implementing Crossover Crossing Statement Lists Applied on programs, a crossover mutation takes two parents and "crosses" a list of statements. As an example, if our "parents" `p1()` and `p2()` are defined as follows:
###Code
def p1(): # type: ignore
a = 1
b = 2
c = 3
def p2(): # type: ignore
x = 1
y = 2
z = 3
###Output
_____no_output_____
###Markdown
Then a crossover operation would produce one child with a body
```python
a = 1
y = 2
z = 3
```
and another child with a body
```python
x = 1
b = 2
c = 3
```
We can easily implement this in a `CrossoverOperator` class in a method `cross_bodies()`.
###Code
class CrossoverOperator:
"""A class for performing statement crossover of Python programs"""
def __init__(self, log: bool = False):
"""Constructor. If `log` is set, turn on logging."""
self.log = log
def cross_bodies(self, body_1: List[ast.AST], body_2: List[ast.AST]) -> \
Tuple[List[ast.AST], List[ast.AST]]:
"""Crossover the statement lists `body_1` x `body_2`. Return new lists."""
assert isinstance(body_1, list)
assert isinstance(body_2, list)
crossover_point_1 = len(body_1) // 2
crossover_point_2 = len(body_2) // 2
return (body_1[:crossover_point_1] + body_2[crossover_point_2:],
body_2[:crossover_point_2] + body_1[crossover_point_1:])
###Output
_____no_output_____
###Markdown
Here's the `CrossoverOperator` applied on `p1` and `p2`:
###Code
tree_p1: ast.Module = ast.parse(inspect.getsource(p1))
tree_p2: ast.Module = ast.parse(inspect.getsource(p2))
body_p1 = tree_p1.body[0].body # type: ignore
body_p2 = tree_p2.body[0].body # type: ignore
body_p1
crosser = CrossoverOperator()
tree_p1.body[0].body, tree_p2.body[0].body = crosser.cross_bodies(body_p1, body_p2) # type: ignore
print_content(ast.unparse(tree_p1), '.py')
print_content(ast.unparse(tree_p2), '.py')
###Output
[34mdef[39;49;00m [32mp2[39;49;00m():
x = [34m1[39;49;00m
b = [34m2[39;49;00m
c = [34m3[39;49;00m
###Markdown
Applying Crossover on Programs Applying the crossover operation on arbitrary programs is a bit more complex, though. We first have to _find_ lists of statements that we actually can cross over. The `can_cross()` method returns True if we have a list of statements that we can cross. Python modules and classes are excluded, because changing the ordering of definitions will not have much impact on the program functionality, other than introducing errors due to dependencies.
###Code
class CrossoverOperator(CrossoverOperator):
# In modules and class defs, the ordering of elements does not matter (much)
SKIP_LIST = {ast.Module, ast.ClassDef}
def can_cross(self, tree: ast.AST, body_attr: str = 'body') -> bool:
if any(isinstance(tree, cls) for cls in self.SKIP_LIST):
return False
body = getattr(tree, body_attr, [])
return body is not None and len(body) >= 2
###Output
_____no_output_____
###Markdown
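As a quick check of `can_cross()` (a sketch using the `middle()` tree from above): the enclosing `Module` is skipped, but the `middle()` function definition, with its two body statements, is a valid crossover target.
```python
crosser = CrossoverOperator()
crosser.can_cross(middle_tree())          # False: Module is in SKIP_LIST
crosser.can_cross(middle_tree().body[0])  # True: the FunctionDef body has two statements
```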
Here comes our method `crossover_attr()` which searches for crossover possibilities. It takes two ASTs `t1` and `t2` and an attribute (typically `'body'`) and retrieves the attribute lists $l_1$ (from `t1.<attr>`) and $l_2$ (from `t2.<attr>`). If $l_1$ and $l_2$ can be crossed, it crosses them, and is done. Otherwise:

* If there is a pair of elements $e_1 \in l_1$ and $e_2 \in l_2$ that have the same name – say, functions of the same name – it applies itself to $e_1$ and $e_2$.
* Otherwise, it creates random pairs of elements $e_1 \in l_1$ and $e_2 \in l_2$ and applies itself on these very pairs.

`crossover_attr()` changes `t1` and `t2` in place and returns True if a crossover was found; it returns False otherwise.
###Code
class CrossoverOperator(CrossoverOperator):
def crossover_attr(self, t1: ast.AST, t2: ast.AST, body_attr: str) -> bool:
"""
Crossover the bodies `body_attr` of two trees `t1` and `t2`.
Return True if successful.
"""
assert isinstance(t1, ast.AST)
assert isinstance(t2, ast.AST)
assert isinstance(body_attr, str)
if not getattr(t1, body_attr, None) or not getattr(t2, body_attr, None):
return False
if self.crossover_branches(t1, t2):
return True
if self.log > 1:
print(f"Checking {t1}.{body_attr} x {t2}.{body_attr}")
body_1 = getattr(t1, body_attr)
body_2 = getattr(t2, body_attr)
# If both trees have the attribute, we can cross their bodies
if self.can_cross(t1, body_attr) and self.can_cross(t2, body_attr):
if self.log:
print(f"Crossing {t1}.{body_attr} x {t2}.{body_attr}")
new_body_1, new_body_2 = self.cross_bodies(body_1, body_2)
setattr(t1, body_attr, new_body_1)
setattr(t2, body_attr, new_body_2)
return True
# Strategy 1: Find matches in class/function of same name
for child_1 in body_1:
if hasattr(child_1, 'name'):
for child_2 in body_2:
if (hasattr(child_2, 'name') and
child_1.name == child_2.name):
if self.crossover_attr(child_1, child_2, body_attr):
return True
# Strategy 2: Find matches anywhere
for child_1 in random.sample(body_1, len(body_1)):
for child_2 in random.sample(body_2, len(body_2)):
if self.crossover_attr(child_1, child_2, body_attr):
return True
return False
###Output
_____no_output_____
###Markdown
We have a special case for `if` nodes, where we can cross their body and `else` branches. (In Python, `for` and `while` also have `else` branches, but swapping these with loop bodies is likely to create havoc.)
###Code
class CrossoverOperator(CrossoverOperator):
def crossover_branches(self, t1: ast.AST, t2: ast.AST) -> bool:
"""Special case:
`t1` = `if P: S1 else: S2` x `t2` = `if P': S1' else: S2'`
becomes
`t1` = `if P: S2' else: S1'` and `t2` = `if P': S2 else: S1`
Returns True if successful.
"""
assert isinstance(t1, ast.AST)
assert isinstance(t2, ast.AST)
if (hasattr(t1, 'body') and hasattr(t1, 'orelse') and
hasattr(t2, 'body') and hasattr(t2, 'orelse')):
t1 = cast(ast.If, t1) # keep mypy happy
t2 = cast(ast.If, t2)
if self.log:
print(f"Crossing branches {t1} x {t2}")
t1.body, t1.orelse, t2.body, t2.orelse = \
t2.orelse, t2.body, t1.orelse, t1.body
return True
return False
###Output
_____no_output_____
###Markdown
The method `crossover()` is the main entry point. It checks for the special `if` case as described above; if not, it searches for possible crossover points. It raises `CrossoverError` if not successful.
###Code
class CrossoverOperator(CrossoverOperator):
def crossover(self, t1: ast.AST, t2: ast.AST) -> Tuple[ast.AST, ast.AST]:
"""Do a crossover of ASTs `t1` and `t2`.
Raises `CrossoverError` if no crossover is found."""
assert isinstance(t1, ast.AST)
assert isinstance(t2, ast.AST)
for body_attr in ['body', 'orelse', 'finalbody']:
if self.crossover_attr(t1, t2, body_attr):
return t1, t2
raise CrossoverError("No crossover found")
class CrossoverError(ValueError):
pass
###Output
_____no_output_____
###Markdown
End of Excursion Crossover in Action Let us put our `CrossoverOperator` in action. Here is a test case for crossover, involving more deeply nested structures:
###Code
def p1(): # type: ignore
if True:
print(1)
print(2)
print(3)
def p2(): # type: ignore
if True:
print(a)
print(b)
else:
print(c)
print(d)
###Output
_____no_output_____
###Markdown
We invoke the `crossover()` method with two ASTs from `p1` and `p2`:
###Code
crossover = CrossoverOperator()
tree_p1 = ast.parse(inspect.getsource(p1))
tree_p2 = ast.parse(inspect.getsource(p2))
crossover.crossover(tree_p1, tree_p2);
###Output
_____no_output_____
###Markdown
Here is the crossed offspring, mixing statement lists of `p1` and `p2`:
###Code
print_content(ast.unparse(tree_p1), '.py')
print_content(ast.unparse(tree_p2), '.py')
###Output
[34mdef[39;49;00m [32mp2[39;49;00m():
[34mif[39;49;00m [34mTrue[39;49;00m:
[34melse[39;49;00m:
[36mprint[39;49;00m([34m1[39;49;00m)
[36mprint[39;49;00m([34m2[39;49;00m)
[36mprint[39;49;00m([34m3[39;49;00m)
###Markdown
Here is our special case for `if` nodes in action, crossing our `middle()` tree with `p2`.
###Code
middle_t1, middle_t2 = crossover.crossover(middle_tree(),
ast.parse(inspect.getsource(p2)))
###Output
_____no_output_____
###Markdown
We see how the resulting offspring encompasses elements of both sources:
###Code
print_content(ast.unparse(middle_t1), '.py')
print_content(ast.unparse(middle_t2), '.py')
###Output
[34mdef[39;49;00m [32mp2[39;49;00m():
[34mif[39;49;00m [34mTrue[39;49;00m:
[34mif[39;49;00m x > y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > z:
[34mreturn[39;49;00m x
[34melif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
###Markdown
A Repairer Class So far, we have applied all our techniques on the `middle()` program only. Let us now create a `Repairer` class that applies automatic program repair on arbitrary Python programs. The idea is that you can apply it on some statistical debugger, for which you have gathered passing and failing test cases, and then invoke its `repair()` method to find a "best" fix candidate:
```python
debugger = OchiaiDebugger()
with debugger:
    <passing test>
with debugger:
    <failing test>
...
repairer = Repairer(debugger)
repairer.repair()
```
Excursion: Implementing Repairer The main argument to the `Repairer` constructor is the `debugger` to get information from. On top of that, it also lets you customize the classes used for mutation, crossover, and reduction. Setting `targets` lets you define a set of functions to repair; setting `sources` lets you define a set of sources to take repairs from. The constructor then sets up the environment for running tests and repairing, as described below.
###Code
from StackInspector import StackInspector # minor dependency
class Repairer(StackInspector):
"""A class for automatic repair of Python programs"""
def __init__(self, debugger: RankingDebugger, *,
targets: Optional[List[Any]] = None,
sources: Optional[List[Any]] = None,
log: Union[bool, int] = False,
mutator_class: Type = StatementMutator,
crossover_class: Type = CrossoverOperator,
reducer_class: Type = DeltaDebugger,
globals: Optional[Dict[str, Any]] = None):
"""Constructor.
`debugger`: a `RankingDebugger` to take tests and coverage from.
`targets`: a list of functions/modules to be repaired.
(default: the covered functions in `debugger`, except tests)
`sources`: a list of functions/modules to take repairs from.
(default: same as `targets`)
`globals`: if given, a `globals()` dict for executing targets
(default: `globals()` of caller)"""
assert isinstance(debugger, RankingDebugger)
self.debugger = debugger
self.log = log
if targets is None:
targets = self.default_functions()
if not targets:
raise ValueError("No targets to repair")
if sources is None:
sources = self.default_functions()
if not sources:
raise ValueError("No sources to take repairs from")
if self.debugger.function() is None:
raise ValueError("Multiple entry points observed")
self.target_tree: ast.AST = self.parse(targets)
self.source_tree: ast.AST = self.parse(sources)
self.log_tree("Target code to be repaired:", self.target_tree)
if ast.dump(self.target_tree) != ast.dump(self.source_tree):
self.log_tree("Source code to take repairs from:",
self.source_tree)
self.fitness_cache: Dict[str, float] = {}
self.mutator: StatementMutator = \
mutator_class(
source=all_statements(self.source_tree),
suspiciousness_func=self.debugger.suspiciousness,
log=(self.log >= 3))
self.crossover: CrossoverOperator = crossover_class(log=(self.log >= 3))
self.reducer: DeltaDebugger = reducer_class(log=(self.log >= 3))
if globals is None:
globals = self.caller_globals() # see below
self.globals = globals
###Output
_____no_output_____
###Markdown
When we access or execute functions, we do so in the caller's environment, not ours. The `caller_globals()` method from `StackInspector` acts as a replacement for `globals()`. Helper Functions The constructor uses a number of helper functions to create its environment.
###Code
class Repairer(Repairer):
def getsource(self, item: Union[str, Any]) -> str:
"""Get the source for `item`. Can also be a string."""
if isinstance(item, str):
item = self.globals[item]
return inspect.getsource(item)
class Repairer(Repairer):
def default_functions(self) -> List[Callable]:
"""Return the set of functions to be repaired.
Functions whose names start or end in `test` are excluded."""
def is_test(name: str) -> bool:
return name.startswith('test') or name.endswith('test')
return [func for func in self.debugger.covered_functions()
if not is_test(func.__name__)]
class Repairer(Repairer):
def log_tree(self, description: str, tree: Any) -> None:
"""Print out `tree` as source code prefixed by `description`."""
if self.log:
print(description)
print_content(ast.unparse(tree), '.py')
print()
print()
class Repairer(Repairer):
def parse(self, items: List[Any]) -> ast.AST:
"""Read in a list of items into a single tree"""
tree = ast.parse("")
for item in items:
if isinstance(item, str):
item = self.globals[item]
item_lines, item_first_lineno = inspect.getsourcelines(item)
try:
item_tree = ast.parse("".join(item_lines))
except IndentationError:
# inner function or likewise
warnings.warn(f"Can't parse {item.__name__}")
continue
ast.increment_lineno(item_tree, item_first_lineno - 1)
tree.body += item_tree.body
return tree
###Output
_____no_output_____
###Markdown
Running Tests Now that we have set up the environment for `Repairer`, we can implement one step of automatic repair after the other. The method `run_test_set()` runs the given `test_set` (`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`), returning the number of passed tests. If `validate` is set, it checks whether the outcomes are as expected.
###Code
class Repairer(Repairer):
def run_test_set(self, test_set: str, validate: bool = False) -> int:
"""
Run given `test_set`
(`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`).
If `validate` is set, check expectations.
Return number of passed tests.
"""
passed = 0
collectors = self.debugger.collectors[test_set]
function = self.debugger.function()
assert function is not None
# FIXME: function may have been redefined
for c in collectors:
if self.log >= 4:
print(f"Testing {c.id()}...", end="")
try:
function(**c.args())
except Exception as err:
if self.log >= 4:
print(f"failed ({err.__class__.__name__})")
if validate and test_set == self.debugger.PASS:
raise err.__class__(
f"{c.id()} should have passed, but failed")
continue
passed += 1
if self.log >= 4:
print("passed")
if validate and test_set == self.debugger.FAIL:
raise FailureNotReproducedError(
f"{c.id()} should have failed, but passed")
return passed
class FailureNotReproducedError(ValueError):
pass
###Output
_____no_output_____
###Markdown
Here is how we use `run_test_set()`:
###Code
repairer = Repairer(middle_debugger)
assert repairer.run_test_set(middle_debugger.PASS) == \
len(MIDDLE_PASSING_TESTCASES)
assert repairer.run_test_set(middle_debugger.FAIL) == 0
###Output
_____no_output_____
###Markdown
The method `run_tests()` runs passing and failing tests, weighting the passed test cases to obtain the overall fitness.
###Code
class Repairer(Repairer):
def weight(self, test_set: str) -> float:
"""
Return the weight of `test_set`
(`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`).
"""
return {
self.debugger.PASS: WEIGHT_PASSING,
self.debugger.FAIL: WEIGHT_FAILING
}[test_set]
def run_tests(self, validate: bool = False) -> float:
"""Run passing and failing tests, returning weighted fitness."""
fitness = 0.0
for test_set in [self.debugger.PASS, self.debugger.FAIL]:
passed = self.run_test_set(test_set, validate=validate)
ratio = passed / len(self.debugger.collectors[test_set])
fitness += self.weight(test_set) * ratio
return fitness
###Output
_____no_output_____
###Markdown
The method `validate()` ensures the observed tests can be adequately reproduced.
###Code
class Repairer(Repairer):
def validate(self) -> None:
fitness = self.run_tests(validate=True)
assert fitness == self.weight(self.debugger.PASS)
repairer = Repairer(middle_debugger)
repairer.validate()
###Output
_____no_output_____
###Markdown
(Re)defining Functions Our `run_tests()` method above does not yet redefine the function to be repaired. This is done by the `fitness()` function, which compiles and defines the given repair candidate `tree` before testing it. It caches and returns the fitness.
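The cache is keyed by `ast.dump()`; as a quick sketch, structurally identical trees produce the same key, so a candidate that reappears in the population is tested only once.
```python
ast.dump(middle_tree()) == ast.dump(middle_tree())  # True: same key, cached fitness is reused
```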
###Code
class Repairer(Repairer):
def fitness(self, tree: ast.AST) -> float:
"""Test `tree`, returning its fitness"""
key = cast(str, ast.dump(tree))
if key in self.fitness_cache:
return self.fitness_cache[key]
# Save defs
original_defs: Dict[str, Any] = {}
for name in self.toplevel_defs(tree):
if name in self.globals:
original_defs[name] = self.globals[name]
else:
warnings.warn(f"Couldn't find definition of {repr(name)}")
assert original_defs, f"Couldn't find any definition"
if self.log >= 3:
print("Repair candidate:")
print_content(ast.unparse(tree), '.py')
print()
# Create new definition
try:
code = compile(tree, '<Repairer>', 'exec')
except ValueError: # Compilation error
code = None
if code is None:
if self.log >= 3:
print(f"Fitness = 0.0 (compilation error)")
fitness = 0.0
return fitness
# Execute new code, defining new functions in `self.globals`
exec(code, self.globals)
# Set new definitions in the namespace (`__globals__`)
# of the function we will be calling.
function = self.debugger.function()
assert function is not None
assert hasattr(function, '__globals__')
for name in original_defs:
function.__globals__[name] = self.globals[name] # type: ignore
fitness = self.run_tests(validate=False)
# Restore definitions
for name in original_defs:
function.__globals__[name] = original_defs[name] # type: ignore
self.globals[name] = original_defs[name]
if self.log >= 3:
print(f"Fitness = {fitness}")
self.fitness_cache[key] = fitness
return fitness
###Output
_____no_output_____
###Markdown
The helper function `toplevel_defs()` helps save and restore the environment before and after redefining the function under repair.
###Code
class Repairer(Repairer):
def toplevel_defs(self, tree: ast.AST) -> List[str]:
"""Return a list of names of defined functions and classes in `tree`"""
visitor = DefinitionVisitor()
visitor.visit(tree)
assert hasattr(visitor, 'definitions')
return visitor.definitions
class DefinitionVisitor(NodeVisitor):
def __init__(self) -> None:
self.definitions: List[str] = []
def add_definition(self, node: Union[ast.ClassDef,
ast.FunctionDef,
ast.AsyncFunctionDef]) -> None:
self.definitions.append(node.name)
def visit_FunctionDef(self, node: ast.FunctionDef) -> None:
self.add_definition(node)
def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None:
self.add_definition(node)
def visit_ClassDef(self, node: ast.ClassDef) -> None:
self.add_definition(node)
###Output
_____no_output_____
###Markdown
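Before moving on, here is a quick sanity check for the `DefinitionVisitor` just defined (a sketch using the `middle()` tree): it collects the names of all top-level definitions.
```python
definition_visitor = DefinitionVisitor()
definition_visitor.visit(middle_tree())
definition_visitor.definitions  # ['middle']
```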
Here's an example for `fitness()`:
###Code
repairer = Repairer(middle_debugger, log=1)
good_fitness = repairer.fitness(middle_tree())
good_fitness
# docassert
assert good_fitness >= 0.99, "fitness() failed"
bad_middle_tree = ast.parse("def middle(x, y, z): return x")
bad_fitness = repairer.fitness(bad_middle_tree)
bad_fitness
# docassert
assert bad_fitness < 0.5, "fitness() failed"
###Output
_____no_output_____
###Markdown
Repairing Now for the actual `repair()` method, which creates a `population` and then evolves it until the fitness is 1.0 or the given number of iterations is spent.
###Code
import traceback
class Repairer(Repairer):
def initial_population(self, size: int) -> List[ast.AST]:
"""Return an initial population of size `size`"""
return [self.target_tree] + \
[self.mutator.mutate(copy.deepcopy(self.target_tree))
for i in range(size - 1)]
def repair(self, population_size: int = POPULATION_SIZE, iterations: int = 100) -> \
Tuple[ast.AST, float]:
"""
Repair the function we collected test runs from.
Use a population size of `population_size` and
at most `iterations` iterations.
Returns a pair (`ast`, `fitness`) where
`ast` is the AST of the repaired function, and
`fitness` is its fitness (between 0 and 1.0)
"""
self.validate()
population = self.initial_population(population_size)
last_key = ast.dump(self.target_tree)
for iteration in range(iterations):
population = self.evolve(population)
best_tree = population[0]
fitness = self.fitness(best_tree)
if self.log:
print(f"Evolving population: "
f"iteration{iteration:4}/{iterations} "
f"fitness = {fitness:.5} \r", end="")
if self.log >= 2:
best_key = ast.dump(best_tree)
if best_key != last_key:
print()
print()
self.log_tree(f"New best code (fitness = {fitness}):",
best_tree)
last_key = best_key
if fitness >= 1.0:
break
if self.log:
print()
if self.log and self.log < 2:
self.log_tree(f"Best code (fitness = {fitness}):", best_tree)
best_tree = self.reduce(best_tree)
fitness = self.fitness(best_tree)
self.log_tree(f"Reduced code (fitness = {fitness}):", best_tree)
return best_tree, fitness
###Output
_____no_output_____
###Markdown
Evolving The evolution of our population takes place in the `evolve()` method. In contrast to the `evolve_middle()` function above, we use crossover to create the offspring, which we still mutate afterwards.
###Code
class Repairer(Repairer):
def evolve(self, population: List[ast.AST]) -> List[ast.AST]:
"""Evolve the candidate population by mutating and crossover."""
n = len(population)
# Create offspring as crossover of parents
offspring: List[ast.AST] = []
while len(offspring) < n:
parent_1 = copy.deepcopy(random.choice(population))
parent_2 = copy.deepcopy(random.choice(population))
try:
self.crossover.crossover(parent_1, parent_2)
except CrossoverError:
pass # Just keep parents
offspring += [parent_1, parent_2]
# Mutate offspring
offspring = [self.mutator.mutate(tree) for tree in offspring]
# Add it to population
population += offspring
# Keep the fitter part of the population
population.sort(key=self.fitness_key, reverse=True)
population = population[:n]
return population
###Output
_____no_output_____
###Markdown
A second difference is that we not only sort by fitness, but also by tree size – with equal fitness, a smaller tree will thus be favored. This helps keep fixes and patches small.
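Here is a small worked example of the sort key (with made-up fitness values and tree sizes): since the population is sorted with `reverse=True`, higher fitness wins, and among equally fit candidates the one with fewer AST nodes (and thus the larger negated size) comes first.
```python
# (fitness, -tree_size) pairs for three hypothetical candidates
sorted([(0.99, -25), (0.99, -12), (1.0, -40)], reverse=True)
# [(1.0, -40), (0.99, -12), (0.99, -25)]
```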
###Code
class Repairer(Repairer):
def fitness_key(self, tree: ast.AST) -> Tuple[float, int]:
"""Key to be used for sorting the population"""
tree_size = len([node for node in ast.walk(tree)])
return (self.fitness(tree), -tree_size)
###Output
_____no_output_____
###Markdown
Simplifying The last step in repairing is simplifying the code. As demonstrated in the chapter on [reducing failure-inducing inputs](DeltaDebugger.ipynb), we can use delta debugging on code to get rid of superfluous statements. To this end, we convert the tree to lines, run delta debugging on them, and then convert the result back to a tree.
###Code
class Repairer(Repairer):
def reduce(self, tree: ast.AST) -> ast.AST:
"""Simplify `tree` using delta debugging."""
original_fitness = self.fitness(tree)
source_lines = ast.unparse(tree).split('\n')
with self.reducer:
self.test_reduce(source_lines, original_fitness)
reduced_lines = self.reducer.min_args()['source_lines']
reduced_source = "\n".join(reduced_lines)
return ast.parse(reduced_source)
###Output
_____no_output_____
###Markdown
As discussed above, we simplify the code by having the test function (`test_reduce()`) declare reaching the maximum fitness obtained so far as a "failure". Delta debugging will then simplify the input as long as the "failure" (and hence the maximum fitness obtained) persists.
###Code
class Repairer(Repairer):
def test_reduce(self, source_lines: List[str], original_fitness: float) -> None:
"""Test function for delta debugging."""
try:
source = "\n".join(source_lines)
tree = ast.parse(source)
fitness = self.fitness(tree)
assert fitness < original_fitness
except AssertionError:
raise
except SyntaxError:
raise
except IndentationError:
raise
except Exception:
# traceback.print_exc() # Uncomment to see internal errors
raise
###Output
_____no_output_____
###Markdown
End of Excursion Repairer in Action Let us go and apply `Repairer` in practice. We initialize it with `middle_debugger`, which has (still) collected the passing and failing runs for `middle_test()`. We also set `log` for some diagnostics along the way.
###Code
repairer = Repairer(middle_debugger, log=True)
###Output
Target code to be repaired:
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
[34mif[39;49;00m y < z:
[34mif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > z:
[34mreturn[39;49;00m x
[34mreturn[39;49;00m z
###Markdown
We now invoke `repair()` to evolve our population. After a few iterations, we find a best tree with perfect fitness.
###Code
best_tree, fitness = repairer.repair()
print_content(ast.unparse(best_tree), '.py')
fitness
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
Again, we have a perfect solution. Here, we did not even need to simplify the code in the last iteration, as our `fitness_key()` function favors smaller implementations. Removing HTML Markup Let us apply `Repairer` on our other ongoing example, namely `remove_html_markup()`.
###Code
def remove_html_markup(s): # type: ignore
tag = False
quote = False
out = ""
for c in s:
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif c == '"' or c == "'" and tag:
quote = not quote
elif not tag:
out = out + c
return out
def remove_html_markup_tree() -> ast.AST:
return ast.parse(inspect.getsource(remove_html_markup))
###Output
_____no_output_____
###Markdown
To run `Repairer` on `remove_html_markup()`, we need a test and a test suite. `remove_html_markup_test()` raises an exception if applying `remove_html_markup()` on the given `html` string does not yield the `plain` string.
###Code
def remove_html_markup_test(html: str, plain: str) -> None:
outcome = remove_html_markup(html)
assert outcome == plain, \
f"Got {repr(outcome)}, expected {repr(plain)}"
###Output
_____no_output_____
###Markdown
Now for the test suite. We use a simple fuzzing scheme to create dozens of passing and failing test cases in `REMOVE_HTML_PASSING_TESTCASES` and `REMOVE_HTML_FAILING_TESTCASES`, respectively. Excursion: Creating HTML Test Cases
###Code
def random_string(length: int = 5, start: int = ord(' '), end: int = ord('~')) -> str:
return "".join(chr(random.randrange(start, end + 1)) for i in range(length))
random_string()
def random_id(length: int = 2) -> str:
return random_string(start=ord('a'), end=ord('z'))
random_id()
def random_plain() -> str:
return random_string().replace('<', '').replace('>', '')
def random_string_noquotes() -> str:
return random_string().replace('"', '').replace("'", '')
def random_html(depth: int = 0) -> Tuple[str, str]:
prefix = random_plain()
tag = random_id()
if depth > 0:
html, plain = random_html(depth - 1)
else:
html = plain = random_plain()
attr = random_id()
value = '"' + random_string_noquotes() + '"'
postfix = random_plain()
return f'{prefix}<{tag} {attr}={value}>{html}</{tag}>{postfix}', \
prefix + plain + postfix
random_html()
def remove_html_testcase(expected: bool = True) -> Tuple[str, str]:
while True:
html, plain = random_html()
outcome = (remove_html_markup(html) == plain)
if outcome == expected:
return html, plain
REMOVE_HTML_TESTS = 100
REMOVE_HTML_PASSING_TESTCASES = \
[remove_html_testcase(True) for i in range(REMOVE_HTML_TESTS)]
REMOVE_HTML_FAILING_TESTCASES = \
[remove_html_testcase(False) for i in range(REMOVE_HTML_TESTS)]
###Output
_____no_output_____
###Markdown
End of Excursion Here is a passing test case:
###Code
REMOVE_HTML_PASSING_TESTCASES[0]
html, plain = REMOVE_HTML_PASSING_TESTCASES[0]
remove_html_markup_test(html, plain)
###Output
_____no_output_____
###Markdown
Here is a failing test case (containing a double quote in the plain text):
###Code
REMOVE_HTML_FAILING_TESTCASES[0]
with ExpectError():
html, plain = REMOVE_HTML_FAILING_TESTCASES[0]
remove_html_markup_test(html, plain)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14031/2578453007.py", line 3, in <module>
remove_html_markup_test(html, plain)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14031/700130947.py", line 3, in remove_html_markup_test
assert outcome == plain, \
AssertionError: Got '3AGe7!%H</qcguk>6azh_', expected '3AGe7"!%H6azh_' (expected)
###Markdown
We run our tests, collecting the outcomes in `html_debugger`.
###Code
html_debugger = OchiaiDebugger()
for html, plain in (REMOVE_HTML_PASSING_TESTCASES +
REMOVE_HTML_FAILING_TESTCASES):
with html_debugger:
remove_html_markup_test(html, plain)
###Output
_____no_output_____
###Markdown
The suspiciousness distribution will not be of much help here – pretty much all lines in `remove_html_markup()` have the same suspiciousness.
###Code
html_debugger
###Output
_____no_output_____
###Markdown
Let us create our repairer and run it.
###Code
html_repairer = Repairer(html_debugger, log=True)
best_tree, fitness = html_repairer.repair(iterations=20)
# docassert
assert fitness < 1.0
###Output
_____no_output_____
###Markdown
We see that the "best" code is still our original code, with no changes. And we can set `iterations` to 50, 100, 200... – our `Repairer` won't be able to repair it.
###Code
quiz("Why couldn't `Repairer()` repair `remove_html_markup()`?",
[
"The population is too small!",
"The suspiciousness is too evenly distributed!",
"We need more test cases!",
"We need more iterations!",
"There is no statement in the source with a correct condition!",
"The population is too big!",
], '5242880 >> 20')
###Output
_____no_output_____
###Markdown
You can explore all of the hypotheses above by changing the appropriate parameters, but you won't be able to change the outcome. The problem is that, unlike `middle()`, there is no statement (or combination thereof) in `remove_html_markup()` that could be used to make the failure go away. For this, we need to mutate another aspect of the code, which we will explore in the next section. Mutating Conditions The `Repairer` class is very configurable. The individual steps in automated repair can all be replaced by providing your own classes in the keyword arguments of its `__init__()` constructor:

* To change fault localization, pass a different `debugger` that is a subclass of `RankingDebugger`.
* To change the mutation operator, set `mutator_class` to a subclass of `StatementMutator`.
* To change the crossover operator, set `crossover_class` to a subclass of `CrossoverOperator`.
* To change the reduction algorithm, set `reducer_class` to a subclass of `Reducer`.

In this section, we will explore how to extend the mutation operator such that it can mutate _conditions_ for control constructs such as `if`, `while`, or `for`. To this end, we introduce a new class `ConditionMutator` subclassing `StatementMutator`. Collecting Conditions Let us start with a few simple supporting functions. The function `all_conditions()` retrieves all control conditions from an AST.
###Code
def all_conditions(trees: Union[ast.AST, List[ast.AST]],
tp: Optional[Type] = None) -> List[ast.expr]:
"""
Return all conditions from the AST (or AST list) `trees`.
If `tp` is given, return only elements of that type.
"""
if not isinstance(trees, list):
assert isinstance(trees, ast.AST)
trees = [trees]
visitor = ConditionVisitor()
for tree in trees:
visitor.visit(tree)
conditions = visitor.conditions
if tp is not None:
conditions = [c for c in conditions if isinstance(c, tp)]
return conditions
###Output
_____no_output_____
###Markdown
`all_conditions()` uses a `ConditionVisitor` class to walk the tree and collect the conditions:
###Code
class ConditionVisitor(NodeVisitor):
def __init__(self) -> None:
self.conditions: List[ast.expr] = []
self.conditions_seen: Set[str] = set()
super().__init__()
def add_conditions(self, node: ast.AST, attr: str) -> None:
elems = getattr(node, attr, [])
if not isinstance(elems, list):
elems = [elems]
elems = cast(List[ast.expr], elems)
for elem in elems:
elem_str = ast.unparse(elem)
if elem_str not in self.conditions_seen:
self.conditions.append(elem)
self.conditions_seen.add(elem_str)
def visit_BoolOp(self, node: ast.BoolOp) -> ast.AST:
self.add_conditions(node, 'values')
return super().generic_visit(node)
def visit_UnaryOp(self, node: ast.UnaryOp) -> ast.AST:
if isinstance(node.op, ast.Not):
self.add_conditions(node, 'operand')
return super().generic_visit(node)
def generic_visit(self, node: ast.AST) -> ast.AST:
if hasattr(node, 'test'):
self.add_conditions(node, 'test')
return super().generic_visit(node)
###Output
_____no_output_____
###Markdown
Here are all the conditions in `remove_html_markup()`. This is some material to construct new conditions from.
###Code
[ast.unparse(cond).strip()
for cond in all_conditions(remove_html_markup_tree())]
###Output
_____no_output_____
###Markdown
Mutating ConditionsHere comes our `ConditionMutator` class. We subclass from `StatementMutator` and set an attribute `self.conditions` containing all the conditions in the source. The method `choose_condition()` randomly picks a condition.
###Code
class ConditionMutator(StatementMutator):
"""Mutate conditions in an AST"""
def __init__(self, *args: Any, **kwargs: Any) -> None:
"""Constructor. Arguments are as with `StatementMutator` constructor."""
super().__init__(*args, **kwargs)
self.conditions = all_conditions(self.source)
if self.log:
print("Found conditions",
[ast.unparse(cond).strip()
for cond in self.conditions])
def choose_condition(self) -> ast.expr:
"""Return a random condition from source."""
return copy.deepcopy(random.choice(self.conditions))
###Output
_____no_output_____
###Markdown
The actual mutation takes place in the `swap()` method. If the node to be replaced has a `test` attribute (i.e. a controlling predicate), then we pick a random condition `cond` from the source and randomly choose from:* **set**: We change `test` to `cond`.* **not**: We invert `test`.* **and**: We replace `test` by `cond and test`.* **or**: We replace `test` by `cond or test`.Over time, this might lead to operators propagating across the population.
###Code
class ConditionMutator(ConditionMutator):
def choose_bool_op(self) -> str:
return random.choice(['set', 'not', 'and', 'or'])
def swap(self, node: ast.AST) -> ast.AST:
"""Replace `node` condition by a condition from `source`"""
if not hasattr(node, 'test'):
return super().swap(node)
node = cast(ast.If, node)
cond = self.choose_condition()
new_test = None
choice = self.choose_bool_op()
if choice == 'set':
new_test = cond
elif choice == 'not':
new_test = ast.UnaryOp(op=ast.Not(), operand=node.test)
elif choice == 'and':
new_test = ast.BoolOp(op=ast.And(), values=[cond, node.test])
elif choice == 'or':
new_test = ast.BoolOp(op=ast.Or(), values=[cond, node.test])
else:
raise ValueError("Unknown boolean operand")
if new_test:
# ast.copy_location(new_test, node)
node.test = new_test
return node
###Output
_____no_output_____
###Markdown
We can use the mutator just like `StatementMutator`, except that some of the mutations will also include new conditions:
###Code
mutator = ConditionMutator(source=all_statements(remove_html_markup_tree()),
log=True)
for i in range(10):
new_tree = mutator.mutate(remove_html_markup_tree())
###Output
2:insert: 'tag = False' becomes 'for c in s: tag = Fa...'
10:insert: 'tag = False' becomes 'tag = False'; 'out = out + c'
8:insert: 'tag = True' becomes 'if c == \'"\' or (c ==...'
12:insert: 'quote = not quote' becomes 'quote = not quote'; 'tag = True'
10:delete: 'tag = False' becomes 'pass'
12:insert: 'quote = not quote' becomes "if c == '>' and (not..."
3:insert: 'quote = False' becomes 'quote = False'; "out = ''"
14:swap: 'out = out + c' becomes 'quote = False'
12:insert: 'quote = not quote' becomes 'for c in s: quote = ...'
3:delete: 'quote = False' becomes 'pass'
###Markdown
Let us put our new mutator into action, again in a `Repairer()`. To activate it, all we need to do is pass it as the `mutator_class` keyword argument.
###Code
condition_repairer = Repairer(html_debugger,
mutator_class=ConditionMutator,
log=2)
###Output
Target code to be repaired:
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mFalse[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mreturn[39;49;00m out
###Markdown
We might need more iterations for this one. Let us see...
###Code
best_tree, fitness = condition_repairer.repair(iterations=200)
repaired_source = ast.unparse(best_tree)
print_content(repaired_source, '.py')
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
Success again! We have automatically repaired `remove_html_markup()` – the resulting code passes all tests, including those that were previously failing. Again, we can present the fix as a patch:
###Code
original_source = ast.unparse(remove_html_markup_tree())
for patch in diff(original_source, repaired_source):
print_patch(patch)
###Output
@@ -[34m210[39;49;00m,[34m53[39;49;00m +[34m210[39;49;00m,[34m39[39;49;00m @@
lse
- [34melif[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag):
+ [34melif[39;49;00m tag [35mand[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m:
###Markdown
However, looking at the patch, one may have doubts.
###Code
quiz("Is this actually the best solution?",
[
"Yes, sure, of course. Why?",
"Err - what happened to single quotes?"
], 1 << 1)
###Output
_____no_output_____
###Markdown
Indeed – our solution does not seem to handle single quotes anymore. Why is that so?
###Code
quiz("Why aren't single quotes handled in the solution?",
[
"Because they're not important. "
"I mean, y'know, who uses 'em anyway?",
"Because they are not part of our tests? "
"Let me look up how they are constructed..."
], 1 << 1)
###Output
_____no_output_____
###Markdown
Correct! Our test cases do not include single quotes – at least not in the interior of HTML tags – and thus, automatic repair did not care to preserve their handling. How can we fix this? An easy way is to include an appropriate test case in our set – a test case that passes with the original `remove_html_markup()`, yet fails with the "repaired" `remove_html_markup()` as shown above.
###Code
with html_debugger:
remove_html_markup_test("<foo quote='>abc'>me</foo>", "me")
###Output
_____no_output_____
###Markdown
Let us repeat the repair with the extended test set:
###Code
best_tree, fitness = condition_repairer.repair(iterations=200)
###Output
Evolving population: iteration 2/200 fitness = 1.0
New best code (fitness = 1.0):
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mFalse[39;49;00m
[34melif[39;49;00m tag [35mand[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag)):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mif[39;49;00m [35mnot[39;49;00m tag:
tag = [34mFalse[39;49;00m
[34mreturn[39;49;00m out
Reduced code (fitness = 1.0):
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mFalse[39;49;00m
[34melif[39;49;00m tag [35mand[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag)):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mif[39;49;00m [35mnot[39;49;00m tag:
[34mreturn[39;49;00m out
###Markdown
Here is the final tree:
###Code
print_content(ast.unparse(best_tree), '.py')
###Output
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mFalse[39;49;00m
[34melif[39;49;00m tag [35mand[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag)):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mif[39;49;00m [35mnot[39;49;00m tag:
[34mreturn[39;49;00m out
###Markdown
And here is its fitness:
###Code
fitness
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
The revised candidate now passes _all_ tests (including the tricky quote test we added last). Its condition now properly checks for `tag` _and_ both quotes. (The `tag` inside the parentheses is still redundant, but so be it.) From this example, we can learn a few lessons about the possibilities and risks of automated repair:* First, automatic repair is highly dependent on the quality of the checking tests. The risk is that the repair may overspecialize towards the test.* Second, when based on "plastic surgery", automated repair is highly dependent on the sources that program fragments are chosen from. If there is a hint of a solution somewhere in the code, there is a chance that automated repair will pick it up.* Third, automatic repair is a deeply heuristic approach. Its behavior will vary widely with any change to the parameters (and the underlying random number generators).* Fourth, automatic repair can take a long time. The examples we have in this chapter take less than a minute to compute, and neither Python nor our implementation is exactly fast. But as the search space grows, automated repair will take much longer.On the other hand, even an incomplete automated repair candidate can be much better than nothing at all – it may provide all the essential ingredients (such as the location or the involved variables) for a successful fix. When users of automated repair techniques are aware of their limitations and assumptions, there is lots of potential in automated repair. Enjoy! Limitations The `Repairer` class is tested on our example programs, but not much more. Things that do not work include* Functions with inner functions are not repaired. Synopsis This chapter provides tools and techniques for automated repair of program code. The `Repairer` class takes a `RankingDebugger` debugger as input (such as `OchiaiDebugger` from the [chapter on statistical debugging](StatisticalDebugger.ipynb)). A typical setup looks like this:```pythonfrom debuggingbook.StatisticalDebugger import OchiaiDebuggerdebugger = OchiaiDebugger()for inputs in TESTCASES: with debugger: test_foo(inputs)...repairer = Repairer(debugger)```Here, `test_foo()` is a function that raises an exception if the tested function `foo()` fails. If `foo()` passes, `test_foo()` should not raise an exception. The `repair()` method of a `Repairer` searches for a repair of the code covered in the debugger (except for methods whose name starts or ends in `test`, such that `foo()`, not `test_foo()` is repaired). `repair()` returns the best fix candidate as a pair `(tree, fitness)` where `tree` is a [Python abstract syntax tree](http://docs.python.org/3/library/ast) (AST) of the fix candidate, and `fitness` is the fitness of the candidate (a value between 0 and 1). A `fitness` of 1.0 means that the candidate passed all tests. A typical usage looks like this:```pythontree, fitness = repairer.repair()print(ast.unparse(tree), fitness)``` Here is a complete example for the `middle()` program. This is the original source code of `middle()`:
###Code
# ignore
print_content(middle_source, '.py')
###Output
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
[34mif[39;49;00m y < z:
[34mif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
[34melse[39;49;00m:
[34mif[39;49;00m x > y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > z:
[34mreturn[39;49;00m x
[34mreturn[39;49;00m z
###Markdown
We set up a function `middle_test()` that tests it. The `middle_debugger` collects testcases and outcomes:
###Code
middle_debugger = OchiaiDebugger()
for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
with middle_debugger:
middle_test(x, y, z)
###Output
_____no_output_____
###Markdown
The repairer is instantiated with the debugger used (`middle_debugger`):
###Code
middle_repairer = Repairer(middle_debugger)
###Output
_____no_output_____
###Markdown
The `repair()` method of the repairer attempts to repair the function invoked by the test (`middle()`).
###Code
tree, fitness = middle_repairer.repair()
###Output
_____no_output_____
###Markdown
The returned AST `tree` can be output via `ast.unparse()`:
###Code
print(ast.unparse(tree))
###Output
def middle(x, y, z):
if y < z:
if x < y:
return y
elif x < z:
return x
elif x > y:
return y
elif x > z:
return x
return z
###Markdown
The `fitness` value shows how well the repaired program fits the tests. A fitness value of 1.0 shows that the repaired program satisfies all tests.
###Code
fitness
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
Hence, the above program indeed is a perfect repair in the sense that all previously failing tests now pass – our repair was successful. Here are the classes defined in this chapter. A `Repairer` repairs a program, using a `StatementMutator` and a `CrossoverOperator` to evolve a population of candidates.
###Code
# ignore
from ClassDiagram import display_class_hierarchy
# ignore
display_class_hierarchy([Repairer, ConditionMutator, CrossoverOperator],
abstract_classes=[
NodeVisitor,
NodeTransformer
],
public_methods=[
Repairer.__init__,
Repairer.repair,
StatementMutator.__init__,
StatementMutator.mutate,
ConditionMutator.__init__,
CrossoverOperator.__init__,
CrossoverOperator.crossover,
],
project='debuggingbook')
###Output
_____no_output_____
###Markdown
Lessons Learned* Automated repair based on genetic optimization uses five ingredients: 1. A _test suite_ to determine passing and failing tests 2. _Defect localization_ (typically obtained from [statistical debugging](StatisticalDebugger.ipynb) with the test suite) to determine potential locations to be fixed 3. _Random code mutations_ and _crossover operations_ to create and evolve a population of fix candidates 4. A _fitness function_ and a _selection strategy_ to determine the part of the population that should be evolved further 5. A _reducer_ such as [delta debugging](DeltaDebugger.ipynb) to simplify the final candidate with the highest fitness.* The result of automated repair is a _fix candidate_ with the highest fitness for the given tests.* A _fix candidate_ is not guaranteed to be correct or optimal, but gives important hints on how to fix the program.* All of the above ingredients offer plenty of settings and alternatives to experiment with. BackgroundThe seminal work in automated repair is [GenProg](https://squareslab.github.io/genprog-code/) \cite{LeGoues2012}, which heavily inspired our `Repairer` implementation. Major differences between GenProg and `Repairer` include:* GenProg includes its own defect localization (which is also dynamically updated), whereas `Repairer` builds on earlier statistical debugging.* GenProg can apply multiple mutations on programs (or none at all), whereas `Repairer` applies exactly one mutation.* The `StatementMutator` used by `Repairer` includes various special cases for program structures (`if`, `for`, `while`...), whereas GenProg operates on statements only.* GenProg has been tested on large production programs.While GenProg is _the_ seminal work in the area (and arguably the most important software engineering research contribution of the 2010s), there have been a number of important extensions of automated repair. These include:* *AutoFix* \cite{Pei2014} leverages _program contracts_ (pre- and postconditions) to generate tests and assertions automatically. Not only do such [assertions](Assertions.ipynb) help in fault localization, they also allow for much better validation of fix candidates.* *SemFix* \cite{Nguyen2013} and its successor *[Angelix](http://angelix.io)* \cite{Mechtaev2016} introduce automated program repair based on _symbolic analysis_ rather than genetic optimization. This makes it possible to leverage program semantics, which GenProg does not consider.To learn more about automated program repair, see [program-repair.org](http://program-repair.org), the community page dedicated to research in program repair. Exercises Exercise 1: Automated Repair ParametersAutomated Repair is influenced by a large number of design choices – the size of the population, the number of iterations, the genetic optimization strategy, and more. How do changes to these design choices affect its effectiveness? * Consider the constants defined in this chapter (such as `POPULATION_SIZE` or `WEIGHT_PASSING` vs. `WEIGHT_FAILING`). How do changes affect the effectiveness of automated repair?* As an effectiveness metric, consider the number of iterations it takes to produce a fix candidate.* Since genetic optimization is a random algorithm, you need to determine effectiveness averages over a large number of runs (say, 100).
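One way to set up such a measurement (a sketch under the assumptions of this chapter, not part of the chapter itself) is to fix an iteration budget and record how often a fresh `Repairer` reaches a fitness of 1.0 within that budget:

```python
# Sketch: estimate repair effectiveness for a fixed iteration budget.
# Reuses `middle_debugger` from above; every run re-executes the full
# test suite many times, so large values of `runs` take a while.
def repair_success_rate(budget: int = 20, runs: int = 10) -> float:
    successes = 0
    for _ in range(runs):
        repairer = Repairer(middle_debugger)  # fresh random population per run
        _, fitness = repairer.repair(iterations=budget)
        if fitness >= 1.0:
            successes += 1
    return successes / runs

# Example (commented out because it is expensive):
# repair_success_rate(budget=20, runs=100)
```

Comparing such success rates (or the smallest budget at which they approach 1.0) across different parameter settings then yields the effectiveness measure the exercise asks for.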
Exercise 2: Elitism[_Elitism_](https://en.wikipedia.org/wiki/Genetic_algorithm#Elitism) (also known as _elitist selection_) is a variant of genetic selection in which a small fraction of the fittest candidates of the last population are included unchanged in the offspring.* Implement elitist selection by subclassing the `evolve()` method. Experiment with various fractions (5%, 10%, 25%) of "elites" and see how this improves results. Exercise 3: Evolving ValuesFollowing the steps of `ConditionMutator`, implement a `ValueMutator` class that replaces one constant value by another one found in the source (say, `0` by `1` or `True` by `False`).For validation, consider the following failure in the `square_root()` function from the [chapter on assertions](Assertions.ipynb):
###Code
from Assertions import square_root # minor dependency
with ExpectError():
square_root_of_zero = square_root(0)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14031/1107282428.py", line 2, in <module>
square_root_of_zero = square_root(0)
File "/Users/zeller/Projects/debuggingbook/notebooks/Assertions.ipynb", line 61, in square_root
guess = (approx + x / approx) / 2
ZeroDivisionError: float division by zero (expected)
###Markdown
Can your `ValueMutator` automatically fix this failure? **Solution.** Your solution will be effective if it also includes named constants such as `None`.
###Code
import math
def square_root_fixed(x): # type: ignore
assert x >= 0 # precondition
approx = 0 # <-- FIX: Change `None` to 0
guess = x / 2
while approx != guess:
approx = guess
guess = (approx + x / approx) / 2
assert math.isclose(approx * approx, x)
return approx
square_root_fixed(0)
###Output
_____no_output_____
###Markdown
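A minimal sketch of what such a `ValueMutator` could look like, building on the `StatementMutator` class from this chapter and assuming Python 3.8 or later (where literals are parsed as `ast.Constant` nodes), is shown below. It is an illustration of the idea, not the reference solution:

```python
class ValueMutator(StatementMutator):
    """Sketch: mutate constant values instead of whole statements."""

    def source_constants(self) -> List[ast.Constant]:
        # All literal constants (numbers, strings, True/False/None) in `source`
        return [n for stmt in self.source
                for n in ast.walk(stmt)
                if isinstance(n, ast.Constant)]

    def swap(self, node: ast.AST) -> ast.AST:
        """Replace one constant in `node` by a constant found in `source`."""
        targets = [n for n in ast.walk(node) if isinstance(n, ast.Constant)]
        candidates = self.source_constants()
        if not targets or not candidates:
            return super().swap(node)  # fall back to a plain statement swap
        target = random.choice(targets)
        target.value = random.choice(candidates).value
        return node
```

Fed with the statements of the failing `square_root()`, such a mutator can, over enough iterations, stumble upon replacing `None` by the `0` that already occurs in the source, which is exactly the fix shown above.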
Repairing Code AutomaticallySo far, we have discussed how to track failures and how to locate defects in code. Let us now discuss how to _repair_ defects – that is, to correct the code such that the failure no longer occurs. We will discuss how to _repair code automatically_ – by systematically searching through possible fixes and evolving the most promising candidates.
###Code
from bookutils import YouTubeVideo
YouTubeVideo("UJTf7cW0idI")
###Output
_____no_output_____
###Markdown
**Prerequisites*** Re-read the [introduction to debugging](Intro_Debugging.ipynb), notably on how to properly fix code.* We make use of automatic fault localization, as discussed in the [chapter on statistical debugging](StatisticalDebugger.ipynb).* We make extensive use of code transformations, as discussed in the [chapter on tracing executions](Tracer.ipynb).* We make use of [delta debugging](DeltaDebugger.ipynb).
###Code
import bookutils
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.Repairer import ```and then make use of the following features.This chapter provides tools and techniques for automated repair of program code. The `Repairer` class takes a `RankingDebugger` debugger as input (such as `OchiaiDebugger` from the [chapter on statistical debugging](StatisticalDebugger.ipynb). A typical setup looks like this:```pythonfrom debuggingbook.StatisticalDebugger import OchiaiDebuggerdebugger = OchiaiDebugger()for inputs in TESTCASES: with debugger: test_foo(inputs)...repairer = Repairer(debugger)```Here, `test_foo()` is a function that raises an exception if the tested function `foo()` fails. If `foo()` passes, `test_foo()` should not raise an exception.The `repair()` method of a `Repairer` searches for a repair of the code covered in the debugger (except for methods whose name starts or ends in `test`, such that `foo()`, not `test_foo()` is repaired). `repair()` returns the best fix candidate as a pair `(tree, fitness)` where `tree` is a [Python abstract syntax tree](http://docs.python.org/3/library/ast) (AST) of the fix candidate, and `fitness` is the fitness of the candidate (a value between 0 and 1). A `fitness` of 1.0 means that the candidate passed all tests. A typical usage looks like this:```pythonimport astortree, fitness = repairer.repair()print(astor.to_source(tree), fitness)```Here is a complete example for the `middle()` program. This is the original source code of `middle()`:```pythondef middle(x, y, z): type: ignore if y < z: if x < y: return y elif x < z: return y else: if x > y: return y elif x > z: return x return z```We set up a function `middle_test()` that tests it. The `middle_debugger` collects testcases and outcomes:```python>>> middle_debugger = OchiaiDebugger()>>> for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:>>> with middle_debugger:>>> middle_test(x, y, z)```The repairer is instantiated with the debugger used (`middle_debugger`):```python>>> middle_repairer = Repairer(middle_debugger)```The `repair()` method of the repairer attempts to repair the function invoked by the test (`middle()`).```python>>> tree, fitness = middle_repairer.repair()```The returned AST `tree` can be output via `astor.to_source()`:```python>>> print(astor.to_source(tree))def middle(x, y, z): if y < z: if x < z: if x < y: return y else: return x elif x > y: return y elif x > z: return x return z```The `fitness` value shows how well the repaired program fits the tests. A fitness value of 1.0 shows that the repaired program satisfies all tests.```python>>> fitness1.0```Hence, the above program indeed is a perfect repair in the sense that all previously failing tests now pass – our repair was successful.Here are the classes defined in this chapter. A `Repairer` repairs a program, using a `StatementMutator` and a `CrossoverOperator` to evolve a population of candidates. Automatic Code RepairsSo far, we have discussed how to locate defects in code, how to track failures back to the defects that caused them, and how to systematically determine failure conditions. Let us now address the last step in debugging – namely, how to _automatically fix code_.Already in the [introduction to debugging](Intro_Debugging.ipynb), we have discussed how to fix code manually. Notably, we have established that a _diagnosis_ (which induces a fix) should show _causality_ (i.e., how the defect causes the failure) and _incorrectness_ (how the defect is wrong). 
Is it possible to obtain such a diagnosis automatically? In this chapter, we introduce a technique of _automatic code repair_ – that is, for a given failure, automatically determine a fix that makes the failure go away. To do so, we randomly (but systematically) _mutate_ the program code – that is, insert, change, and delete fragments – until we find a change that actually causes the failing test to pass. If this sounds like an audacious idea, that is because it is. But not only is _automated program repair_ one of the hottest topics of software research in the last decade, it is also being increasingly deployed in industry. At Facebook, for instance, every failing test report comes with an automatically generated _repair suggestion_ – a suggestion that has already been validated to work. Programmers can apply the suggestion as is or use it as a basis for their own fixes. The middle() Function Let us introduce our ongoing example. In the [chapter on statistical debugging](StatisticalDebugger.ipynb), we have introduced the `middle()` function – a function that returns the "middle" of three numbers `x`, `y`, and `z`:
###Code
from StatisticalDebugger import middle
# ignore
from bookutils import print_content
# ignore
import inspect
# ignore
_, first_lineno = inspect.getsourcelines(middle)
middle_source = inspect.getsource(middle)
print_content(middle_source, '.py', start_line_number=first_lineno)
###Output
710 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
711 [34mif[39;49;00m y < z:
712 [34mif[39;49;00m x < y:
713 [34mreturn[39;49;00m y
714 [34melif[39;49;00m x < z:
715 [34mreturn[39;49;00m y
716 [34melse[39;49;00m:
717 [34mif[39;49;00m x > y:
718 [34mreturn[39;49;00m y
719 [34melif[39;49;00m x > z:
720 [34mreturn[39;49;00m x
721 [34mreturn[39;49;00m z
###Markdown
In most cases, `middle()` just runs fine:
###Code
middle(4, 5, 6)
###Output
_____no_output_____
###Markdown
In some other cases, though, it does not work correctly:
###Code
middle(2, 1, 3)
###Output
_____no_output_____
###Markdown
Validated Repairs Now, if we only want a repair that fixes this one given failure, this would be very easy. All we have to do is to replace the entire body by a single statement:
###Code
def middle_sort_of_fixed(x, y, z): # type: ignore
return x
###Output
_____no_output_____
###Markdown
You will concur that the failure no longer occurs:
###Code
middle_sort_of_fixed(2, 1, 3)
###Output
_____no_output_____
###Markdown
But this, of course, is not the aim of automatic fixes, nor of fixes in general: We want our fixes not only to make the given failure go away, but we also want the resulting code to be _correct_ (which, of course, is a lot harder). Automatic repair techniques therefore assume the existence of a _test suite_ that can check whether an implementation satisfies its requirements. Better yet, one can use the test suite to gradually check _how close_ one is to perfection: A piece of code that satisfies 99% of all tests is better than one that satisfies ~33% of all tests, as `middle_sort_of_fixed()` would do (assuming the test suite evenly checks the input space). Genetic Optimization The common approach for automatic repair follows the principle of _genetic optimization_. Roughly speaking, genetic optimization is a _metaheuristic_ inspired by the process of _natural selection_. The idea is to _evolve_ a selection of _candidate solutions_ towards a maximum _fitness_:1. Have a selection of _candidates_.2. Determine the _fitness_ of each candidate.3. Retain those candidates with the _highest fitness_.4. Create new candidates from the retained candidates, by applying genetic operations: * _Mutation_ mutates some aspect of a candidate. * _Crossover_ creates new candidates combining features of two candidates.5. Repeat until an optimal solution is found. Applied for automated program repair, this means the following steps:1. Have a _test suite_ with both failing and passing tests that helps asserting correctness of possible solutions.2. With the test suite, use [fault localization](StatisticalDebugger.ipynb) to determine potential code locations to be fixed.3. Systematically _mutate_ the code (by adding, changing, or deleting code) and _cross_ code to create possible fix candidates.4. Identify the _fittest_ fix candidates – that is, those that satisfy the most tests.5. _Evolve_ the fittest candidates until a perfect fix is found, or until time resources are depleted. Let us illustrate these steps in the following sections. A Test Suite In automated repair, the larger and the more thorough the test suite, the higher the quality of the resulting fix (if any). Hence, if we want to repair `middle()` automatically, we need a good test suite – with good inputs, but also with good checks. Note that running the test suite commonly takes the most time of automated repair, so a large test suite also comes with extra cost. Let us first focus on achieving high-quality repairs. Hence, we will use the extensive test suites introduced in the [chapter on statistical debugging](StatisticalDebugger.ipynb):
###Code
from StatisticalDebugger import MIDDLE_PASSING_TESTCASES, MIDDLE_FAILING_TESTCASES
###Output
_____no_output_____
###Markdown
The `middle_test()` function fails whenever `middle()` returns an incorrect result:
###Code
def middle_test(x: int, y: int, z: int) -> None:
m = middle(x, y, z)
assert m == sorted([x, y, z])[1]
from ExpectError import ExpectError
with ExpectError():
middle_test(2, 1, 3)
###Output
Traceback (most recent call last):
File "<ipython-input-1-ae2957225406>", line 2, in <module>
middle_test(2, 1, 3)
File "<ipython-input-1-e1407680b9f2>", line 3, in middle_test
assert m == sorted([x, y, z])[1]
AssertionError (expected)
###Markdown
Locating the Defect Our next step is to find potential defect locations – that is, those locations in the code our mutations should focus upon. Since we already do have two test suites, we can make use of [statistical debugging](StatisticalDebugger.ipynb) to identify likely faulty locations. Our `OchiaiDebugger` ranks individual code lines by how frequently they are executed in failing runs (and not in passing runs).
###Code
from StatisticalDebugger import OchiaiDebugger, RankingDebugger
middle_debugger = OchiaiDebugger()
for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
with middle_debugger:
middle_test(x, y, z)
###Output
_____no_output_____
###Markdown
We see that the upper half of the `middle()` code is definitely more suspicious:
###Code
middle_debugger
###Output
_____no_output_____
###Markdown
The most suspicious line is:
###Code
# ignore
location = middle_debugger.rank()[0]
(func_name, lineno) = location
lines, first_lineno = inspect.getsourcelines(middle)
print(lineno, end="")
print_content(lines[lineno - first_lineno], '.py')
###Output
715 [34mreturn[39;49;00m y
###Markdown
with a suspiciousness of:
###Code
# ignore
middle_debugger.suspiciousness(location)
###Output
_____no_output_____
###Markdown
Random Code Mutations Our third step in automatic code repair is to _randomly mutate the code_. Specifically, we want to randomly _delete_, _insert_, and _replace_ statements in the program to be repaired. However, simply synthesizing code _from scratch_ is unlikely to yield anything meaningful – the number of combinations is simply far too high. Already for a three-character identifier name, we have more than 200,000 combinations:
###Code
import string
string.ascii_letters
len(string.ascii_letters + '_') * \
len(string.ascii_letters + '_' + string.digits) * \
len(string.ascii_letters + '_' + string.digits)
###Output
_____no_output_____
###Markdown
Hence, we do _not_ synthesize code from scratch, but instead _reuse_ elements from the program to be fixed, hypothesizing that "a program that contains an error in one area likely implements the correct behavior elsewhere" \cite{LeGoues2012}. This insight has been dubbed the *plastic surgery hypothesis*: content of new code can often be assembled out of fragments of code that already exist in the code base \cite{Barr2014}. For our "plastic surgery", we do not operate on a _textual_ representation of the program, but rather on a _structural_ representation, which by construction allows us to avoid lexical and syntactical errors in the first place.This structural representation is the _abstract syntax tree_ (AST), which we already have seen in various chapters, such as the [chapter on delta debugging](DeltaDebugger.ipynb), the [chapter on tracing](Tracer.ipynb), and excessively in the [chapter on slicing](Slicer.ipynb). The [official Python `ast` reference](http://docs.python.org/3/library/ast) is complete, but a bit brief; the documentation ["Green Tree Snakes - the missing Python AST docs"](https://greentreesnakes.readthedocs.io/en/latest/) provides an excellent introduction.Recapitulating, an AST is a tree representation of the program, showing a hierarchical structure of the program's elements. Here is the AST for our `middle()` function.
###Code
import ast
import astor
import inspect
from bookutils import print_content, show_ast
def middle_tree() -> ast.AST:
return ast.parse(inspect.getsource(middle))
show_ast(middle_tree())
###Output
_____no_output_____
###Markdown
You see that it consists of one function definition (`FunctionDef`) with three `arguments` and two statements – one `If` and one `Return`. Each `If` subtree has three branches – one for the condition (`test`), one for the body to be executed if the condition is true (`body`), and one for the `else` case (`orelse`). The `body` and `orelse` branches again are lists of statements. An AST can also be shown as text, which is more compact, yet reveals more information. `ast.dump()` gives not only the class names of elements, but also how they are constructed – actually, the whole expression can be used to construct an AST.
###Code
print(ast.dump(middle_tree()))
###Output
Module(body=[FunctionDef(name='middle', args=arguments(args=[arg(arg='x', annotation=None), arg(arg='y', annotation=None), arg(arg='z', annotation=None)], vararg=None, kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[]), body=[If(test=Compare(left=Name(id='y', ctx=Load()), ops=[Lt()], comparators=[Name(id='z', ctx=Load())]), body=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Lt()], comparators=[Name(id='y', ctx=Load())]), body=[Return(value=Name(id='y', ctx=Load()))], orelse=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Lt()], comparators=[Name(id='z', ctx=Load())]), body=[Return(value=Name(id='y', ctx=Load()))], orelse=[])])], orelse=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Gt()], comparators=[Name(id='y', ctx=Load())]), body=[Return(value=Name(id='y', ctx=Load()))], orelse=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Gt()], comparators=[Name(id='z', ctx=Load())]), body=[Return(value=Name(id='x', ctx=Load()))], orelse=[])])]), Return(value=Name(id='z', ctx=Load()))], decorator_list=[], returns=None)])
###Markdown
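As a quick sanity check (a sketch, assuming a reasonably recent Python 3), we can evaluate this dump with the `ast` names in scope and obtain an equivalent tree:

```python
# Evaluate the constructor expression printed above; the `ast` classes
# (Module, FunctionDef, If, ...) must be in scope for this to work.
namespace = {name: getattr(ast, name) for name in dir(ast) if not name.startswith('_')}
rebuilt_tree = eval(ast.dump(middle_tree()), namespace)

# The rebuilt tree is structurally identical to the original one.
assert ast.dump(rebuilt_tree) == ast.dump(middle_tree())
```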
This is the path to the first `return` statement:
###Code
ast.dump(middle_tree().body[0].body[0].body[0].body[0]) # type: ignore
###Output
_____no_output_____
###Markdown
Picking Statements For our mutation operators, we want to use statements from the program itself. Hence, we need a means to find those very statements. The `StatementVisitor` class iterates through an AST, adding all statements it finds in function definitions to its `statements` list. To do so, it subclasses the Python `ast` `NodeVisitor` class, described in the [official Python `ast` reference](http://docs.python.org/3/library/ast).
###Code
from ast import NodeVisitor
# ignore
from typing import Any, Callable, Optional, Type, Tuple
from typing import Dict, Union, Set, List, cast
class StatementVisitor(NodeVisitor):
"""Visit all statements within function defs in an AST"""
def __init__(self) -> None:
self.statements: List[Tuple[ast.AST, str]] = []
self.func_name = ""
self.statements_seen: Set[Tuple[ast.AST, str]] = set()
super().__init__()
def add_statements(self, node: ast.AST, attr: str) -> None:
elems: List[ast.AST] = getattr(node, attr, [])
if not isinstance(elems, list):
elems = [elems] # type: ignore
for elem in elems:
stmt = (elem, self.func_name)
if stmt in self.statements_seen:
continue
self.statements.append(stmt)
self.statements_seen.add(stmt)
def visit_node(self, node: ast.AST) -> None:
# Any node other than the ones listed below
self.add_statements(node, 'body')
self.add_statements(node, 'orelse')
def visit_Module(self, node: ast.Module) -> None:
# Module children are defs, classes and globals - don't add
super().generic_visit(node)
def visit_ClassDef(self, node: ast.ClassDef) -> None:
# Class children are defs and globals - don't add
super().generic_visit(node)
def generic_visit(self, node: ast.AST) -> None:
self.visit_node(node)
super().generic_visit(node)
def visit_FunctionDef(self,
node: Union[ast.FunctionDef, ast.AsyncFunctionDef]) -> None:
if not self.func_name:
self.func_name = node.name
self.visit_node(node)
super().generic_visit(node)
self.func_name = ""
def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None:
return self.visit_FunctionDef(node)
###Output
_____no_output_____
###Markdown
The function `all_statements()` returns all statements in the given AST `tree`. If an `ast` class `tp` is given, it only returns instances of that class.
###Code
def all_statements_and_functions(tree: ast.AST,
tp: Optional[Type] = None) -> \
List[Tuple[ast.AST, str]]:
"""
Return a list of pairs (`statement`, `function`) for all statements in `tree`.
If `tp` is given, return only statements of that class.
"""
visitor = StatementVisitor()
visitor.visit(tree)
statements = visitor.statements
if tp is not None:
statements = [s for s in statements if isinstance(s[0], tp)]
return statements
def all_statements(tree: ast.AST, tp: Optional[Type] = None) -> List[ast.AST]:
"""
Return a list of all statements in `tree`.
If `tp` is given, return only statements of that class.
"""
return [stmt for stmt, func_name in all_statements_and_functions(tree, tp)]
###Output
_____no_output_____
###Markdown
Here are all the `return` statements in `middle()`:
###Code
all_statements(middle_tree(), ast.Return)
all_statements_and_functions(middle_tree(), ast.If)
###Output
_____no_output_____
###Markdown
We can randomly pick an element:
###Code
import random
random_node = random.choice(all_statements(middle_tree()))
astor.to_source(random_node)
###Output
_____no_output_____
###Markdown
Mutating StatementsThe main part in mutation, however, is to actually mutate the code of the program under test. To this end, we introduce a `StatementMutator` class – a subclass of `NodeTransformer`, described in the [official Python `ast` reference](http://docs.python.org/3/library/ast). The constructor provides various keyword arguments to configure the mutator.
###Code
from ast import NodeTransformer
import copy
class StatementMutator(NodeTransformer):
"""Mutate statements in an AST for automated repair."""
def __init__(self,
suspiciousness_func:
Optional[Callable[[Tuple[Callable, int]], float]] = None,
source: Optional[List[ast.AST]] = None,
log: bool = False) -> None:
"""
Constructor.
`suspiciousness_func` is a function that takes a location
(function, line_number) and returns a suspiciousness value
between 0 and 1.0. If not given, all locations get the same
suspiciousness of 1.0.
`source` is a list of statements to choose from.
"""
super().__init__()
self.log = log
if suspiciousness_func is None:
def suspiciousness_func(location: Tuple[Callable, int]) -> float:
return 1.0
assert suspiciousness_func is not None
self.suspiciousness_func: Callable = suspiciousness_func
if source is None:
source = []
self.source = source
if self.log > 1:
for i, node in enumerate(self.source):
print(f"Source for repairs #{i}:")
print_content(astor.to_source(node), '.py')
print()
print()
self.mutations = 0
###Output
_____no_output_____
###Markdown
Choosing Suspicious Statements to MutateWe start with deciding which AST nodes to mutate. The method `node_suspiciousness()` returns the suspiciousness for a given node, by invoking the suspiciousness function `suspiciousness_func` given during initialization.
###Code
import warnings
class StatementMutator(StatementMutator):
def node_suspiciousness(self, stmt: ast.AST, func_name: str) -> float:
if not hasattr(stmt, 'lineno'):
warnings.warn(f"{self.format_node(stmt)}: Expected line number")
return 0.0
suspiciousness = self.suspiciousness_func((func_name, stmt.lineno))
if suspiciousness is None: # not executed
return 0.0
return suspiciousness
def format_node(self, node: ast.AST) -> str:
...
###Output
_____no_output_____
###Markdown
The method `node_to_be_mutated()` picks a node (statement) to be mutated. It determines the suspiciousness of all statements, and invokes `random.choices()`, using the suspiciousness as weight. Unsuspicious statements (with zero weight) will not be chosen.
###Code
class StatementMutator(StatementMutator):
def node_to_be_mutated(self, tree: ast.AST) -> ast.AST:
statements = all_statements_and_functions(tree)
assert len(statements) > 0, "No statements"
weights = [self.node_suspiciousness(stmt, func_name)
for stmt, func_name in statements]
stmts = [stmt for stmt, func_name in statements]
if self.log > 1:
print("Weights:")
for i, stmt in enumerate(statements):
node, func_name = stmt
print(f"{weights[i]:.2} {self.format_node(node)}")
if sum(weights) == 0.0:
# No suspicious line
return random.choice(stmts)
else:
return random.choices(stmts, weights=weights)[0]
###Output
_____no_output_____
###Markdown
Choosing a Mutation Method The method `visit()` is invoked on all nodes. For nodes marked with a `mutate_me` attribute, it randomly chooses a mutation method (`choose_op()`) and then invokes it on the node.According to the rules of `NodeTransformer`, the mutation method can return* a new node or a list of nodes, replacing the current node;* `None`, deleting it; or* the node itself, keeping things as they are.
###Code
import re
RE_SPACE = re.compile(r'[ \t\n]+')
class StatementMutator(StatementMutator):
def choose_op(self) -> Callable:
return random.choice([self.insert, self.swap, self.delete])
def visit(self, node: ast.AST) -> ast.AST:
super().visit(node) # Visits (and transforms?) children
if not node.mutate_me: # type: ignore
return node
op = self.choose_op()
new_node = op(node)
self.mutations += 1
if self.log:
print(f"{node.lineno:4}:{op.__name__ + ':':7} "
f"{self.format_node(node)} "
f"becomes {self.format_node(new_node)}")
return new_node
###Output
_____no_output_____
###Markdown
Swapping StatementsOur first mutator is `swap()`, which replaces the current node `NODE` by a random node found in `source` (using a newly defined `choose_statement()`).As a rule of thumb, we try to avoid inserting entire subtrees with all attached statements, and instead keep only the first line of a node. If the new node has the form ```pythonif P: BODY```we thus only insert ```pythonif P: pass```since the statements in `BODY` have a later chance to get inserted. The same holds for all constructs that have a `BODY`, i.e. `while`, `for`, `try`, `with`, and more.
###Code
class StatementMutator(StatementMutator):
def choose_statement(self) -> ast.AST:
return copy.deepcopy(random.choice(self.source))
class StatementMutator(StatementMutator):
def swap(self, node: ast.AST) -> ast.AST:
"""Replace `node` with a random node from `source`"""
new_node = self.choose_statement()
if isinstance(new_node, ast.stmt):
# The source `if P: X` is added as `if P: pass`
if hasattr(new_node, 'body'):
new_node.body = [ast.Pass()] # type: ignore
if hasattr(new_node, 'orelse'):
new_node.orelse = [] # type: ignore
if hasattr(new_node, 'finalbody'):
new_node.finalbody = [] # type: ignore
# ast.copy_location(new_node, node)
return new_node
###Output
_____no_output_____
###Markdown
Inserting StatementsOur next mutator is `insert()`, which randomly chooses some node from `source` and inserts it after the current node `NODE`. (If `NODE` is a `return` statement, then we insert the new node _before_ `NODE`.)If the statement to be inserted has the form```pythonif P: BODY```we only insert the "header" of the `if`, resulting in```pythonif P: NODE```Again, this applies to all constructs that have a `BODY`, i.e., `while`, `for`, `try`, `with`, and more.
###Code
class StatementMutator(StatementMutator):
def insert(self, node: ast.AST) -> Union[ast.AST, List[ast.AST]]:
"""Insert a random node from `source` after `node`"""
new_node = self.choose_statement()
if isinstance(new_node, ast.stmt) and hasattr(new_node, 'body'):
# Inserting `if P: X` as `if P:`
new_node.body = [node] # type: ignore
if hasattr(new_node, 'orelse'):
new_node.orelse = [] # type: ignore
if hasattr(new_node, 'finalbody'):
new_node.finalbody = [] # type: ignore
# ast.copy_location(new_node, node)
return new_node
# Only insert before `return`, not after it
if isinstance(node, ast.Return):
if isinstance(new_node, ast.Return):
return new_node
else:
return [new_node, node]
return [node, new_node]
###Output
_____no_output_____
###Markdown
Deleting StatementsOur last mutator is `delete()`, which deletes the current node `NODE`. The standard case is to replace `NODE` by a `pass` statement.If the statement to be deleted has the form```pythonif P: BODY```we only delete the "header" of the `if`, resulting in```pythonBODY```Again, this applies to all constructs that have a `BODY`, i.e., `while`, `for`, `try`, `with`, and more. If the statement to be deleted has multiple branches, a random branch is chosen (e.g., the `else` branch of an `if` statement).
###Code
class StatementMutator(StatementMutator):
def delete(self, node: ast.AST) -> None:
"""Delete `node`."""
branches = [attr for attr in ['body', 'orelse', 'finalbody']
if hasattr(node, attr) and getattr(node, attr)]
if branches:
# Replace `if P: S` by `S`
branch = random.choice(branches)
new_node = getattr(node, branch)
return new_node
if isinstance(node, ast.stmt):
# Avoid empty bodies; make this a `pass` statement
new_node = ast.Pass()
ast.copy_location(new_node, node)
return new_node
return None # Just delete
from bookutils import quiz
quiz("Why are statements replaced by `pass` rather than deleted?",
[
"Because `if P: pass` is valid Python, while `if P:` is not",
"Because in Python, bodies for `if`, `while`, etc. cannot be empty",
"Because a `pass` node makes a target for future mutations",
"Because it causes the tests to pass"
], '[3 ^ n for n in range(3)]')
###Output
_____no_output_____
###Markdown
Indeed, Python's `compile()` will fail if any of the bodies is an empty list. Also, it leaves us a statement that can be evolved further. HelpersFor logging purposes, we introduce a helper function `format_node()` that returns a short string representation of the node.
###Code
class StatementMutator(StatementMutator):
NODE_MAX_LENGTH = 20
def format_node(self, node: ast.AST) -> str:
"""Return a string representation for `node`."""
if node is None:
return "None"
if isinstance(node, list):
return "; ".join(self.format_node(elem) for elem in node)
s = RE_SPACE.sub(' ', astor.to_source(node)).strip()
if len(s) > self.NODE_MAX_LENGTH - len("..."):
s = s[:self.NODE_MAX_LENGTH] + "..."
return repr(s)
###Output
_____no_output_____
###Markdown
All TogetherLet us now create the main entry point, which is `mutate()`. It picks the node to be mutated and marks it with a `mutate_me` attribute. By calling `visit()`, it then sets off the `NodeTransformer` transformation.
###Code
class StatementMutator(StatementMutator):
def mutate(self, tree: ast.AST) -> ast.AST:
"""Mutate the given AST `tree` in place. Return mutated tree."""
assert isinstance(tree, ast.AST)
tree = copy.deepcopy(tree)
if not self.source:
self.source = all_statements(tree)
for node in ast.walk(tree):
node.mutate_me = False # type: ignore
node = self.node_to_be_mutated(tree)
node.mutate_me = True # type: ignore
self.mutations = 0
tree = self.visit(tree)
if self.mutations == 0:
warnings.warn("No mutations found")
ast.fix_missing_locations(tree)
return tree
###Output
_____no_output_____
###Markdown
Here are a number of transformations applied by `StatementMutator`:
###Code
mutator = StatementMutator(log=True)
for i in range(10):
new_tree = mutator.mutate(middle_tree())
###Output
9:insert: 'return y' becomes 'return y'
8:insert: 'if x > y: return y e...' becomes 'if x < y: if x > y: ...'
12:insert: 'return z' becomes 'if y < z: return z...'
3:swap: 'if x < y: return y e...' becomes 'return x'
3:swap: 'if x < y: return y e...' becomes 'return z'
3:swap: 'if x < y: return y e...' becomes 'return x'
11:swap: 'return x' becomes 'return y'
10:insert: 'if x > z: return x...' becomes 'if x > z: return x...'; 'return z'
12:delete: 'return z' becomes 'pass'
8:swap: 'if x > y: return y e...' becomes 'if y < z: pass'
###Markdown
This is the effect of the last mutator applied on `middle`:
###Code
print_content(astor.to_source(new_tree), '.py')
###Output
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
[34mif[39;49;00m y < z:
[34mif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
[34melif[39;49;00m y < z:
[34mpass[39;49;00m
[34mreturn[39;49;00m z
###Markdown
FitnessNow that we can apply random mutations to code, let us find out how good these mutations are. Given our test suites for `middle`, we can check for a given code candidate how many of the previously passing test cases it passes, and how many of the failing test cases it passes. The more tests pass, the higher the _fitness_ of the candidate. Not all passing tests have the same value, though. We want to prevent _regressions_ – that is, having a fix that breaks a previously passing test. The values of `WEIGHT_PASSING` and `WEIGHT_FAILING` set the relative weight (or importance) of passing vs. failing tests; we see that keeping passing tests passing is far more important than fixing failing tests.
###Code
WEIGHT_PASSING = 0.99
WEIGHT_FAILING = 0.01
def middle_fitness(tree: ast.AST) -> float:
"""Compute fitness of a `middle()` candidate given in `tree`"""
original_middle = middle
try:
code = compile(tree, '<fitness>', 'exec')
except ValueError:
return 0 # Compilation error
exec(code, globals())
passing_passed = 0
failing_passed = 0
# Test how many of the passing runs pass
for x, y, z in MIDDLE_PASSING_TESTCASES:
try:
middle_test(x, y, z)
passing_passed += 1
except AssertionError:
pass
passing_ratio = passing_passed / len(MIDDLE_PASSING_TESTCASES)
# Test how many of the failing runs pass
for x, y, z in MIDDLE_FAILING_TESTCASES:
try:
middle_test(x, y, z)
failing_passed += 1
except AssertionError:
pass
failing_ratio = failing_passed / len(MIDDLE_FAILING_TESTCASES)
fitness = (WEIGHT_PASSING * passing_ratio +
WEIGHT_FAILING * failing_ratio)
globals()['middle'] = original_middle
return fitness
###Output
_____no_output_____
###Markdown
Our faulty `middle()` program has a fitness of `WEIGHT_PASSING` (99%), because it passes all the passing tests (but none of the failing ones).
###Code
middle_fitness(middle_tree())
###Output
_____no_output_____
###Markdown
Our "sort of fixed" version of `middle()` gets a much lower fitness:
###Code
middle_fitness(ast.parse("def middle(x, y, z): return x"))
###Output
_____no_output_____
###Markdown
In the [chapter on statistical debugging](StatisticalDebugger), we also defined a fixed version of `middle()`. This gets a fitness of 1.0, passing all tests. (We won't use this fixed version for automated repairs.)
###Code
from StatisticalDebugger import middle_fixed
middle_fixed_source = \
inspect.getsource(middle_fixed).replace('middle_fixed', 'middle').strip()
middle_fitness(ast.parse(middle_fixed_source))
###Output
_____no_output_____
###Markdown
PopulationWe now set up a _population_ of fix candidates to evolve over time. A higher population size will yield more candidates to check, but also need more time to test; a lower population size will yield fewer candidates, but allow for more evolution steps. We choose a population size of 40 (from \cite{LeGoues2012}).
###Code
POPULATION_SIZE = 40
middle_mutator = StatementMutator()
MIDDLE_POPULATION = [middle_tree()] + \
[middle_mutator.mutate(middle_tree()) for i in range(POPULATION_SIZE - 1)]
###Output
_____no_output_____
###Markdown
We sort the fix candidates according to their fitness. This actually runs all tests on all candidates.
###Code
MIDDLE_POPULATION.sort(key=middle_fitness, reverse=True)
###Output
_____no_output_____
###Markdown
The candidate with the highest fitness is still our original (faulty) `middle()` code:
###Code
print(astor.to_source(MIDDLE_POPULATION[0]),
middle_fitness(MIDDLE_POPULATION[0]))
###Output
def middle(x, y, z):
if y < z:
if x < y:
return y
elif x < z:
return y
elif x > y:
return y
elif x > z:
return x
return z
0.99
###Markdown
At the other end of the spectrum, the candidate with the lowest fitness has some vital functionality removed:
###Code
print(astor.to_source(MIDDLE_POPULATION[-1]),
middle_fitness(MIDDLE_POPULATION[-1]))
###Output
def middle(x, y, z):
if y < z:
if x < y:
return y
elif x < z:
return y
else:
return y
return z
0.5445
###Markdown
EvolutionTo evolve our population of candidates, we fill up the population with mutations created from the population, using a `StatementMutator` as described above to create these mutations. Then we reduce the population to its original size, keeping the fittest candidates.
###Code
def evolve_middle() -> None:
global MIDDLE_POPULATION
source = all_statements(middle_tree())
mutator = StatementMutator(source=source)
n = len(MIDDLE_POPULATION)
offspring: List[ast.AST] = []
while len(offspring) < n:
parent = random.choice(MIDDLE_POPULATION)
offspring.append(mutator.mutate(parent))
MIDDLE_POPULATION += offspring
MIDDLE_POPULATION.sort(key=middle_fitness, reverse=True)
MIDDLE_POPULATION = MIDDLE_POPULATION[:n]
###Output
_____no_output_____
###Markdown
This is what happens when evolving our population for the first time; the original source is still our best candidate.
###Code
evolve_middle()
tree = MIDDLE_POPULATION[0]
print(astor.to_source(tree), middle_fitness(tree))
# docassert
assert middle_fitness(tree) < 1.0
###Output
_____no_output_____
###Markdown
However, nothing keeps us from evolving for a few generations more...
###Code
for i in range(50):
evolve_middle()
best_middle_tree = MIDDLE_POPULATION[0]
fitness = middle_fitness(best_middle_tree)
print(f"\rIteration {i:2}: fitness = {fitness} ", end="")
if fitness >= 1.0:
break
# docassert
assert middle_fitness(best_middle_tree) >= 1.0
###Output
_____no_output_____
###Markdown
Success! We find a candidate that actually passes all tests, including the failing ones. Here is the candidate:
###Code
print_content(astor.to_source(best_middle_tree), '.py', start_line_number=1)
###Output
1 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
2 [34mif[39;49;00m y < z:
3 [34mif[39;49;00m x < y:
4 [34mif[39;49;00m x < z:
5 [34mreturn[39;49;00m y
6 [34melif[39;49;00m x < z:
7 [34mreturn[39;49;00m x
8 [34melif[39;49;00m x > y:
9 [34mreturn[39;49;00m y
10 [34melse[39;49;00m:
11 [34mif[39;49;00m x > z:
12 [34mreturn[39;49;00m x
13 [34mreturn[39;49;00m z
14 [34mreturn[39;49;00m z
###Markdown
... and yes, it passes all tests:
###Code
original_middle = middle
code = compile(best_middle_tree, '<string>', 'exec')
exec(code, globals())
for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
middle_test(x, y, z)
middle = original_middle
###Output
_____no_output_____
###Markdown
As the code is already validated by hundreds of test cases, it is very valuable for the programmer. Even if the programmer decides not to use the code as is, the location gives very strong hints on which code to examine and where to apply a fix. However, a closer look at our fix candidate shows that there is some amount of redundancy – that is, superfluous statements.
###Code
quiz("Some of the lines in our fix candidate are redundant. "
"Which are these?",
[
"Line 3: `if x < y:`",
"Line 4: `if x < z:`",
"Line 5: `return y`",
"Line 13: `return z`"
], '[eval(chr(100 - x)) for x in [48, 50]]')
###Output
_____no_output_____
###Markdown
Simplifying As demonstrated in the chapter on [reducing failure-inducing inputs](DeltaDebugger.ipynb), we can use delta debugging on code to get rid of these superfluous statements. The trick for simplification is to have the test function (`test_middle_lines()`) declare a fitness of 1.0 as a "failure". Delta debugging will then simplify the input as long as the "failure" (and hence the maximum fitness obtained) persists.
###Code
from DeltaDebugger import DeltaDebugger
middle_lines = astor.to_source(best_middle_tree).strip().split('\n')
def test_middle_lines(lines: List[str]) -> None:
source = "\n".join(lines)
tree = ast.parse(source)
assert middle_fitness(tree) < 1.0 # "Fail" only while fitness is 1.0
with DeltaDebugger() as dd:
test_middle_lines(middle_lines)
reduced_lines = dd.min_args()['lines']
reduced_source = "\n".join(reduced_lines)
repaired_source = astor.to_source(ast.parse(reduced_source)) # normalize
print_content(repaired_source, '.py')
# docassert
assert len(reduced_lines) < len(middle_lines)
###Output
_____no_output_____
###Markdown
Success! Delta Debugging has eliminated the superfluous statements. We can present the difference to the original as a patch:
###Code
original_source = astor.to_source(ast.parse(middle_source)) # normalize
from ChangeDebugger import diff, print_patch # minor dependency
for patch in diff(original_source, repaired_source):
print_patch(patch)
###Output
@@ -[34m87[39;49;00m,[34m37[39;49;00m +[34m87[39;49;00m,[34m37[39;49;00m @@
x < z:
- [34mreturn[39;49;00m y
+ [34mreturn[39;49;00m x
[34melif[39;49;00m
###Markdown
We can present this patch to the programmer, who will then immediately know what to fix in the `middle()` code.

Crossover

So far, we have only applied one kind of genetic operator – mutation. There is a second one, though, also inspired by natural selection. The *crossover* operation mutates two strands of genes, as illustrated in the following picture. We have two parents (red and blue), each as a sequence of genes. To create "crossed" children, we pick a _crossover point_ and exchange the strands at this very point:

We implement a `CrossoverOperator` class that implements such an operation on two randomly chosen statement lists of two programs. It is used as

```python
crossover = CrossoverOperator()
crossover.crossover(tree_p1, tree_p2)
```

where `tree_p1` and `tree_p2` are two ASTs that are changed in place.

Excursion: Implementing Crossover

Crossing Statement Lists

Applied on programs, a crossover mutation takes two parents and "crosses" a list of statements. As an example, if our "parents" `p1()` and `p2()` are defined as follows:
###Code
def p1(): # type: ignore
a = 1
b = 2
c = 3
def p2(): # type: ignore
x = 1
y = 2
z = 3
###Output
_____no_output_____
###Markdown
Then a crossover operation would produce one child with a body

```python
a = 1
y = 2
z = 3
```

and another child with a body

```python
x = 1
b = 2
c = 3
```

We can easily implement this in a `CrossoverOperator` class in a method `cross_bodies()`.
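Before turning to ASTs, the same idea can be sketched on plain lists of statements: splitting both parents at their midpoints and exchanging the tails yields exactly the children above. This is an illustration only, not part of the `CrossoverOperator` code:

```python
# Crossover on plain lists, as a warm-up for the AST version below
parent_1 = ['a = 1', 'b = 2', 'c = 3']
parent_2 = ['x = 1', 'y = 2', 'z = 3']

point_1, point_2 = len(parent_1) // 2, len(parent_2) // 2
child_1 = parent_1[:point_1] + parent_2[point_2:]   # ['a = 1', 'y = 2', 'z = 3']
child_2 = parent_2[:point_2] + parent_1[point_1:]   # ['x = 1', 'b = 2', 'c = 3']
```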
###Code
class CrossoverOperator:
"""A class for performing statement crossover of Python programs"""
def __init__(self, log: bool = False):
"""Constructor. If `log` is set, turn on logging."""
self.log = log
def cross_bodies(self, body_1: List[ast.AST], body_2: List[ast.AST]) -> \
Tuple[List[ast.AST], List[ast.AST]]:
"""Crossover the statement lists `body_1` x `body_2`. Return new lists."""
assert isinstance(body_1, list)
assert isinstance(body_2, list)
crossover_point_1 = len(body_1) // 2
crossover_point_2 = len(body_2) // 2
return (body_1[:crossover_point_1] + body_2[crossover_point_2:],
body_2[:crossover_point_2] + body_1[crossover_point_1:])
###Output
_____no_output_____
###Markdown
Here's the `CrossoverOperatorMutator` applied on `p1` and `p2`:
###Code
tree_p1: ast.Module = ast.parse(inspect.getsource(p1))
tree_p2: ast.Module = ast.parse(inspect.getsource(p2))
body_p1 = tree_p1.body[0].body # type: ignore
body_p2 = tree_p2.body[0].body # type: ignore
body_p1
crosser = CrossoverOperator()
tree_p1.body[0].body, tree_p2.body[0].body = crosser.cross_bodies(body_p1, body_p2) # type: ignore
print_content(astor.to_source(tree_p1), '.py')
print_content(astor.to_source(tree_p2), '.py')
###Output
[34mdef[39;49;00m [32mp2[39;49;00m():
x = [34m1[39;49;00m
b = [34m2[39;49;00m
c = [34m3[39;49;00m
###Markdown
Applying Crossover on ProgramsApplying the crossover operation on arbitrary programs is a bit more complex, though. We first have to _find_ lists of statements that we actually can cross over. The `can_cross()` method returns True if we have a list of statements that we can cross. Python modules and classes are excluded, because changing the ordering of definitions will not have much impact on the program functionality, other than introducing errors due to dependencies.
###Code
class CrossoverOperator(CrossoverOperator):
# In modules and class defs, the ordering of elements does not matter (much)
SKIP_LIST = {ast.Module, ast.ClassDef}
def can_cross(self, tree: ast.AST, body_attr: str = 'body') -> bool:
if any(isinstance(tree, cls) for cls in self.SKIP_LIST):
return False
body = getattr(tree, body_attr, [])
        return len(body) >= 2
###Output
_____no_output_____
###Markdown
Here comes our method `crossover_attr()` which searches for crossover possibilities. It takes two ASTs `t1` and `t2` and an attribute (typically `'body'`) and retrieves the attribute lists $l_1$ (from `t1.`) and $l_2$ (from `t2.`).If $l_1$ and $l_2$ can be crossed, it crosses them, and is done. Otherwise* If there is a pair of elements $e_1 \in l_1$ and $e_2 \in l_2$ that has the same name – say, functions of the same name –, it applies itself to $e_1$ and $e_2$.* Otherwise, it creates random pairs of elements $e_1 \in l_1$ and $e_2 \in l_2$ and applies itself on these very pairs.`crossover_attr()` changes `t1` and `t2` in place and returns True if a crossover was found; it returns False otherwise.
###Code
class CrossoverOperator(CrossoverOperator):
def crossover_attr(self, t1: ast.AST, t2: ast.AST, body_attr: str) -> bool:
"""
Crossover the bodies `body_attr` of two trees `t1` and `t2`.
Return True if successful.
"""
assert isinstance(t1, ast.AST)
assert isinstance(t2, ast.AST)
assert isinstance(body_attr, str)
if not getattr(t1, body_attr, None) or not getattr(t2, body_attr, None):
return False
if self.crossover_branches(t1, t2):
return True
if self.log > 1:
print(f"Checking {t1}.{body_attr} x {t2}.{body_attr}")
body_1 = getattr(t1, body_attr)
body_2 = getattr(t2, body_attr)
# If both trees have the attribute, we can cross their bodies
if self.can_cross(t1, body_attr) and self.can_cross(t2, body_attr):
if self.log:
print(f"Crossing {t1}.{body_attr} x {t2}.{body_attr}")
new_body_1, new_body_2 = self.cross_bodies(body_1, body_2)
setattr(t1, body_attr, new_body_1)
setattr(t2, body_attr, new_body_2)
return True
# Strategy 1: Find matches in class/function of same name
for child_1 in body_1:
if hasattr(child_1, 'name'):
for child_2 in body_2:
if (hasattr(child_2, 'name') and
child_1.name == child_2.name):
if self.crossover_attr(child_1, child_2, body_attr):
return True
# Strategy 2: Find matches anywhere
for child_1 in random.sample(body_1, len(body_1)):
for child_2 in random.sample(body_2, len(body_2)):
if self.crossover_attr(child_1, child_2, body_attr):
return True
return False
###Output
_____no_output_____
###Markdown
We have a special case for `if` nodes, where we can cross their body and `else` branches. (In Python, `for` and `while` also have `else` branches, but swapping these with loop bodies is likely to create havoc.)
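Before the implementation, here is the effect of such a branch swap on two tiny `if` statements, as a standalone sketch (the statement names are made up):

```python
import ast
import astor

m1 = ast.parse("if p:\n    a()\nelse:\n    b()")
m2 = ast.parse("if q:\n    c()\nelse:\n    d()")
if_1, if_2 = m1.body[0], m2.body[0]

# Swap the `body` and `else` branches across the two `if` statements
if_1.body, if_1.orelse, if_2.body, if_2.orelse = \
    if_2.orelse, if_2.body, if_1.orelse, if_1.body

print(astor.to_source(m1))   # if p: d() ... else: c() ...
print(astor.to_source(m2))   # if q: b() ... else: a() ...
```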
###Code
class CrossoverOperator(CrossoverOperator):
def crossover_branches(self, t1: ast.AST, t2: ast.AST) -> bool:
"""Special case:
`t1` = `if P: S1 else: S2` x `t2` = `if P': S1' else: S2'`
becomes
`t1` = `if P: S2' else: S1'` and `t2` = `if P': S2 else: S1`
Returns True if successful.
"""
assert isinstance(t1, ast.AST)
assert isinstance(t2, ast.AST)
if (hasattr(t1, 'body') and hasattr(t1, 'orelse') and
hasattr(t2, 'body') and hasattr(t2, 'orelse')):
t1 = cast(ast.If, t1) # keep mypy happy
t2 = cast(ast.If, t2)
if self.log:
print(f"Crossing branches {t1} x {t2}")
t1.body, t1.orelse, t2.body, t2.orelse = \
t2.orelse, t2.body, t1.orelse, t1.body
return True
return False
###Output
_____no_output_____
###Markdown
The method `crossover()` is the main entry point. It checks for the special `if` case as described above; if not, it searches for possible crossover points. It raises `CrossoverError` if not successful.
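These three attributes cover the statement lists of Python's compound statements. A quick check shows which node types carry which of them (an illustration, not part of the class):

```python
import ast

for src in ["if p:\n    pass\nelse:\n    pass",
            "try:\n    pass\nfinally:\n    pass"]:
    node = ast.parse(src).body[0]
    print(type(node).__name__,
          [attr for attr in ['body', 'orelse', 'finalbody'] if hasattr(node, attr)])
# If ['body', 'orelse']
# Try ['body', 'orelse', 'finalbody']
```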
###Code
class CrossoverOperator(CrossoverOperator):
def crossover(self, t1: ast.AST, t2: ast.AST) -> Tuple[ast.AST, ast.AST]:
"""Do a crossover of ASTs `t1` and `t2`.
Raises `CrossoverError` if no crossover is found."""
assert isinstance(t1, ast.AST)
assert isinstance(t2, ast.AST)
for body_attr in ['body', 'orelse', 'finalbody']:
if self.crossover_attr(t1, t2, body_attr):
return t1, t2
raise CrossoverError("No crossover found")
class CrossoverError(ValueError):
pass
###Output
_____no_output_____
###Markdown
End of Excursion

Crossover in Action

Let us put our `CrossoverOperator` in action. Here is a test case for crossover, involving more deeply nested structures:
###Code
def p1(): # type: ignore
if True:
print(1)
print(2)
print(3)
def p2(): # type: ignore
if True:
print(a)
print(b)
else:
print(c)
print(d)
###Output
_____no_output_____
###Markdown
We invoke the `crossover()` method with two ASTs from `p1` and `p2`:
###Code
crossover = CrossoverOperator()
tree_p1 = ast.parse(inspect.getsource(p1))
tree_p2 = ast.parse(inspect.getsource(p2))
crossover.crossover(tree_p1, tree_p2);
###Output
_____no_output_____
###Markdown
Here is the crossed offspring, mixing statement lists of `p1` and `p2`:
###Code
print_content(astor.to_source(tree_p1), '.py')
print_content(astor.to_source(tree_p2), '.py')
###Output
[34mdef[39;49;00m [32mp2[39;49;00m():
[34mif[39;49;00m [34mTrue[39;49;00m:
[34melse[39;49;00m:
[36mprint[39;49;00m([34m1[39;49;00m)
[36mprint[39;49;00m([34m2[39;49;00m)
[36mprint[39;49;00m([34m3[39;49;00m)
###Markdown
Here is our special case for `if` nodes in action, crossing our `middle()` tree with `p2`.
###Code
middle_t1, middle_t2 = crossover.crossover(middle_tree(),
ast.parse(inspect.getsource(p2)))
###Output
_____no_output_____
###Markdown
We see how the resulting offspring encompasses elements of both sources:
###Code
print_content(astor.to_source(middle_t1), '.py')
print_content(astor.to_source(middle_t2), '.py')
###Output
[34mdef[39;49;00m [32mp2[39;49;00m():
[34mif[39;49;00m [34mTrue[39;49;00m:
[34mif[39;49;00m x > y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > z:
[34mreturn[39;49;00m x
[34melif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
###Markdown
A Repairer Class

So far, we have applied all our techniques on the `middle()` program only. Let us now create a `Repairer` class that applies automatic program repair on arbitrary Python programs. The idea is that you can apply it on some statistical debugger, for which you have gathered passing and failing test cases, and then invoke its `repair()` method to find a "best" fix candidate:

```python
debugger = OchiaiDebugger()
with debugger:
    ...
with debugger:
    ...
repairer = Repairer(debugger)
repairer.repair()
```

Excursion: Implementing Repairer

The main argument to the `Repairer` constructor is the `debugger` to get information from. On top of that, it also allows customizing the classes used for mutation, crossover, and reduction. Setting `targets` defines the set of functions to repair; setting `sources` defines the set of sources to take repairs from. The constructor then sets up the environment for running tests and repairing, as described below.
###Code
from StackInspector import StackInspector # minor dependency
class Repairer(StackInspector):
"""A class for automatic repair of Python programs"""
def __init__(self, debugger: RankingDebugger, *,
targets: Optional[List[Any]] = None,
sources: Optional[List[Any]] = None,
log: Union[bool, int] = False,
mutator_class: Type = StatementMutator,
crossover_class: Type = CrossoverOperator,
reducer_class: Type = DeltaDebugger,
globals: Optional[Dict[str, Any]] = None):
"""Constructor.
`debugger`: a `RankingDebugger` to take tests and coverage from.
`targets`: a list of functions/modules to be repaired.
(default: the covered functions in `debugger`, except tests)
`sources`: a list of functions/modules to take repairs from.
(default: same as `targets`)
`globals`: if given, a `globals()` dict for executing targets
(default: `globals()` of caller)"""
assert isinstance(debugger, RankingDebugger)
self.debugger = debugger
self.log = log
if targets is None:
targets = self.default_functions()
if not targets:
raise ValueError("No targets to repair")
if sources is None:
sources = self.default_functions()
if not sources:
raise ValueError("No sources to take repairs from")
if self.debugger.function() is None:
raise ValueError("Multiple entry points observed")
self.target_tree: ast.AST = self.parse(targets)
self.source_tree: ast.AST = self.parse(sources)
self.log_tree("Target code to be repaired:", self.target_tree)
if ast.dump(self.target_tree) != ast.dump(self.source_tree):
self.log_tree("Source code to take repairs from:",
self.source_tree)
self.fitness_cache: Dict[str, float] = {}
self.mutator: StatementMutator = \
mutator_class(
source=all_statements(self.source_tree),
suspiciousness_func=self.debugger.suspiciousness,
log=(self.log >= 3))
self.crossover: CrossoverOperator = crossover_class(log=(self.log >= 3))
self.reducer: DeltaDebugger = reducer_class(log=(self.log >= 3))
if globals is None:
globals = self.caller_globals() # see below
self.globals = globals
###Output
_____no_output_____
###Markdown
When we access or execute functions, we do so in the caller's environment, not ours. The `caller_globals()` method from `StackInspector` acts as a replacement for `globals()`.

Helper Functions

The constructor uses a number of helper functions to create its environment.
###Code
class Repairer(Repairer):
def getsource(self, item: Union[str, Any]) -> str:
"""Get the source for `item`. Can also be a string."""
if isinstance(item, str):
item = self.globals[item]
return inspect.getsource(item)
class Repairer(Repairer):
def default_functions(self) -> List[Callable]:
"""Return the set of functions to be repaired.
Functions whose names start or end in `test` are excluded."""
def is_test(name: str) -> bool:
return name.startswith('test') or name.endswith('test')
return [func for func in self.debugger.covered_functions()
if not is_test(func.__name__)]
class Repairer(Repairer):
def log_tree(self, description: str, tree: Any) -> None:
"""Print out `tree` as source code prefixed by `description`."""
if self.log:
print(description)
print_content(astor.to_source(tree), '.py')
print()
print()
class Repairer(Repairer):
def parse(self, items: List[Any]) -> ast.AST:
"""Read in a list of items into a single tree"""
tree = ast.parse("")
for item in items:
if isinstance(item, str):
item = self.globals[item]
item_lines, item_first_lineno = inspect.getsourcelines(item)
try:
item_tree = ast.parse("".join(item_lines))
except IndentationError:
# inner function or likewise
warnings.warn(f"Can't parse {item.__name__}")
continue
ast.increment_lineno(item_tree, item_first_lineno - 1)
tree.body += item_tree.body
return tree
###Output
_____no_output_____
###Markdown
Running Tests

Now that we have set up the environment for `Repairer`, we can implement one step of automatic repair after the other. The method `run_test_set()` runs the given `test_set` (`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`), returning the number of passed tests. If `validate` is set, it checks whether the outcomes are as expected.
###Code
class Repairer(Repairer):
def run_test_set(self, test_set: str, validate: bool = False) -> int:
"""
Run given `test_set`
(`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`).
If `validate` is set, check expectations.
Return number of passed tests.
"""
passed = 0
collectors = self.debugger.collectors[test_set]
function = self.debugger.function()
assert function is not None
# FIXME: function may have been redefined
for c in collectors:
if self.log >= 4:
print(f"Testing {c.id()}...", end="")
try:
function(**c.args())
except Exception as err:
if self.log >= 4:
print(f"failed ({err.__class__.__name__})")
if validate and test_set == self.debugger.PASS:
raise err.__class__(
f"{c.id()} should have passed, but failed")
continue
passed += 1
if self.log >= 4:
print("passed")
if validate and test_set == self.debugger.FAIL:
raise FailureNotReproducedError(
f"{c.id()} should have failed, but passed")
return passed
class FailureNotReproducedError(ValueError):
pass
###Output
_____no_output_____
###Markdown
Here is how we use `run_tests_set()`:
###Code
repairer = Repairer(middle_debugger)
assert repairer.run_test_set(middle_debugger.PASS) == \
len(MIDDLE_PASSING_TESTCASES)
assert repairer.run_test_set(middle_debugger.FAIL) == 0
###Output
_____no_output_____
###Markdown
The method `run_tests()` runs passing and failing tests, weighing the passed test cases to obtain the overall fitness.
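For intuition, here is the weighting arithmetic on a small, hypothetical example. The numbers and weights below are illustrative only; the chapter's actual `WEIGHT_PASSING` and `WEIGHT_FAILING` constants are defined further above.

```python
# Illustrative weights: favor preserving the originally passing tests
weight_passing, weight_failing = 0.99, 0.01

passing_ratio = 95 / 100   # 95 of 100 originally passing tests still pass
failing_ratio = 1 / 2      # 1 of 2 originally failing tests now passes

fitness = weight_passing * passing_ratio + weight_failing * failing_ratio
print(fitness)   # 0.9455
```

With such weights (which sum to 1.0), only a candidate that passes all passing and all failing tests reaches a fitness of 1.0.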
###Code
class Repairer(Repairer):
def weight(self, test_set: str) -> float:
"""
Return the weight of `test_set`
(`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`).
"""
return {
self.debugger.PASS: WEIGHT_PASSING,
self.debugger.FAIL: WEIGHT_FAILING
}[test_set]
def run_tests(self, validate: bool = False) -> float:
"""Run passing and failing tests, returning weighted fitness."""
fitness = 0.0
for test_set in [self.debugger.PASS, self.debugger.FAIL]:
passed = self.run_test_set(test_set, validate=validate)
ratio = passed / len(self.debugger.collectors[test_set])
fitness += self.weight(test_set) * ratio
return fitness
###Output
_____no_output_____
###Markdown
The method `validate()` ensures the observed tests can be adequately reproduced.
###Code
class Repairer(Repairer):
def validate(self) -> None:
fitness = self.run_tests(validate=True)
assert fitness == self.weight(self.debugger.PASS)
repairer = Repairer(middle_debugger)
repairer.validate()
###Output
_____no_output_____
###Markdown
(Re)defining Functions

Our `run_tests()` methods above do not yet redefine the function to be repaired. This is done by the `fitness()` function, which compiles and defines the given repair candidate `tree` before testing it. It caches and returns the fitness.
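Before the full implementation, here is the core idiom in isolation: compile a candidate AST, execute it in a namespace to (re)define the function it contains, and use `ast.dump()` as a cache key. This is a standalone sketch with a made-up candidate:

```python
import ast

candidate = ast.parse("def middle(x, y, z):\n    return y\n")

# Compile the AST and execute it, defining `middle` in a private namespace
code = compile(candidate, filename='<candidate>', mode='exec')
namespace: dict = {}
exec(code, namespace)
print(namespace['middle'](2, 1, 3))           # 1

# The textual dump of the AST can serve as a cache key for its fitness
fitness_cache = {ast.dump(candidate): 0.5}
print(ast.dump(candidate) in fitness_cache)   # True
```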
###Code
class Repairer(Repairer):
def fitness(self, tree: ast.AST) -> float:
"""Test `tree`, returning its fitness"""
key = cast(str, ast.dump(tree))
if key in self.fitness_cache:
return self.fitness_cache[key]
# Save defs
original_defs: Dict[str, Any] = {}
for name in self.toplevel_defs(tree):
if name in self.globals:
original_defs[name] = self.globals[name]
else:
warnings.warn(f"Couldn't find definition of {repr(name)}")
assert original_defs, f"Couldn't find any definition"
if self.log >= 3:
print("Repair candidate:")
print_content(astor.to_source(tree), '.py')
print()
# Create new definition
try:
code = compile(tree, '<Repairer>', 'exec')
except ValueError: # Compilation error
code = None
if code is None:
if self.log >= 3:
print(f"Fitness = 0.0 (compilation error)")
fitness = 0.0
return fitness
# Execute new code, defining new functions in `self.globals`
exec(code, self.globals)
# Set new definitions in the namespace (`__globals__`)
# of the function we will be calling.
function = self.debugger.function()
assert function is not None
assert hasattr(function, '__globals__')
for name in original_defs:
function.__globals__[name] = self.globals[name] # type: ignore
fitness = self.run_tests(validate=False)
# Restore definitions
for name in original_defs:
function.__globals__[name] = original_defs[name] # type: ignore
self.globals[name] = original_defs[name]
if self.log >= 3:
print(f"Fitness = {fitness}")
self.fitness_cache[key] = fitness
return fitness
###Output
_____no_output_____
###Markdown
The helper function `toplevel_defs()` helps save and restore the environment before and after redefining the function under repair.
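To see why we need these names, here is the save / redefine / restore pattern in isolation. This is a standalone sketch; `greet` is a made-up example function:

```python
def greet():
    return "original"

saved = {'greet': globals()['greet']}                      # save the current definition
exec("def greet():\n    return 'candidate'", globals())    # install the candidate
candidate_result = greet()                                 # test the candidate
globals().update(saved)                                    # restore the original
print(candidate_result, greet())                           # candidate original
```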
###Code
class Repairer(Repairer):
def toplevel_defs(self, tree: ast.AST) -> List[str]:
"""Return a list of names of defined functions and classes in `tree`"""
visitor = DefinitionVisitor()
visitor.visit(tree)
assert hasattr(visitor, 'definitions')
return visitor.definitions
class DefinitionVisitor(NodeVisitor):
def __init__(self) -> None:
self.definitions: List[str] = []
def add_definition(self, node: Union[ast.ClassDef,
ast.FunctionDef,
ast.AsyncFunctionDef]) -> None:
self.definitions.append(node.name)
def visit_FunctionDef(self, node: ast.FunctionDef) -> None:
self.add_definition(node)
def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None:
self.add_definition(node)
def visit_ClassDef(self, node: ast.ClassDef) -> None:
self.add_definition(node)
###Output
_____no_output_____
###Markdown
Here's an example for `fitness()`:
###Code
repairer = Repairer(middle_debugger, log=1)
good_fitness = repairer.fitness(middle_tree())
good_fitness
# docassert
assert good_fitness >= 0.99, "fitness() failed"
bad_middle_tree = ast.parse("def middle(x, y, z): return x")
bad_fitness = repairer.fitness(bad_middle_tree)
bad_fitness
# docassert
assert bad_fitness < 0.5, "fitness() failed"
###Output
_____no_output_____
###Markdown
Repairing

Now for the actual `repair()` method, which creates a `population` and then evolves it until the fitness is 1.0 or the given number of iterations is spent.
###Code
import traceback
class Repairer(Repairer):
def initial_population(self, size: int) -> List[ast.AST]:
"""Return an initial population of size `size`"""
return [self.target_tree] + \
[self.mutator.mutate(copy.deepcopy(self.target_tree))
for i in range(size - 1)]
def repair(self, population_size: int = POPULATION_SIZE, iterations: int = 100) -> \
Tuple[ast.AST, float]:
"""
Repair the function we collected test runs from.
Use a population size of `population_size` and
at most `iterations` iterations.
Returns a pair (`ast`, `fitness`) where
`ast` is the AST of the repaired function, and
`fitness` is its fitness (between 0 and 1.0)
"""
self.validate()
population = self.initial_population(population_size)
last_key = ast.dump(self.target_tree)
for iteration in range(iterations):
population = self.evolve(population)
best_tree = population[0]
fitness = self.fitness(best_tree)
if self.log:
print(f"Evolving population: "
f"iteration{iteration:4}/{iterations} "
f"fitness = {fitness:.5} \r", end="")
if self.log >= 2:
best_key = ast.dump(best_tree)
if best_key != last_key:
print()
print()
self.log_tree(f"New best code (fitness = {fitness}):",
best_tree)
last_key = best_key
if fitness >= 1.0:
break
if self.log:
print()
if self.log and self.log < 2:
self.log_tree(f"Best code (fitness = {fitness}):", best_tree)
best_tree = self.reduce(best_tree)
fitness = self.fitness(best_tree)
self.log_tree(f"Reduced code (fitness = {fitness}):", best_tree)
return best_tree, fitness
###Output
_____no_output_____
###Markdown
Evolving

The evolution of our population takes place in the `evolve()` method. In contrast to the `evolve_middle()` function above, we use crossover to create the offspring, which we still mutate afterwards.
###Code
class Repairer(Repairer):
def evolve(self, population: List[ast.AST]) -> List[ast.AST]:
"""Evolve the candidate population by mutating and crossover."""
n = len(population)
# Create offspring as crossover of parents
offspring: List[ast.AST] = []
while len(offspring) < n:
parent_1 = copy.deepcopy(random.choice(population))
parent_2 = copy.deepcopy(random.choice(population))
try:
self.crossover.crossover(parent_1, parent_2)
except CrossoverError:
pass # Just keep parents
offspring += [parent_1, parent_2]
# Mutate offspring
offspring = [self.mutator.mutate(tree) for tree in offspring]
# Add it to population
population += offspring
# Keep the fitter part of the population
population.sort(key=self.fitness_key, reverse=True)
population = population[:n]
return population
###Output
_____no_output_____
###Markdown
A second difference is that we sort not only by fitness, but also by tree size – with equal fitness, a smaller tree will be favored. This helps keep fixes and patches small.
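Python's tuple ordering gives us this for free: sorting in descending order by `(fitness, -size)` ranks higher fitness first and, among equal fitness, smaller trees first. A tiny illustration with made-up candidates:

```python
# (name, fitness, number of AST nodes) -- hypothetical candidates
candidates = [("big", 0.9, 120), ("small", 0.9, 80), ("best", 1.0, 150)]

ranked = sorted(candidates, key=lambda c: (c[1], -c[2]), reverse=True)
print([name for name, _, _ in ranked])   # ['best', 'small', 'big']
```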
###Code
class Repairer(Repairer):
def fitness_key(self, tree: ast.AST) -> Tuple[float, int]:
"""Key to be used for sorting the population"""
tree_size = len([node for node in ast.walk(tree)])
return (self.fitness(tree), -tree_size)
###Output
_____no_output_____
###Markdown
Simplifying

The last step in repairing is simplifying the code. As demonstrated in the chapter on [reducing failure-inducing inputs](DeltaDebugger.ipynb), we can use delta debugging on code to get rid of superfluous statements. To this end, we convert the tree to lines, run delta debugging on them, and then convert it back to a tree.
###Code
class Repairer(Repairer):
def reduce(self, tree: ast.AST) -> ast.AST:
"""Simplify `tree` using delta debugging."""
original_fitness = self.fitness(tree)
source_lines = astor.to_source(tree).split('\n')
with self.reducer:
self.test_reduce(source_lines, original_fitness)
reduced_lines = self.reducer.min_args()['source_lines']
reduced_source = "\n".join(reduced_lines)
return ast.parse(reduced_source)
###Output
_____no_output_____
###Markdown
As discussed above, we simplify the code by having the test function (`test_reduce()`) declare reaching the maximum fitness obtained so far as a "failure". Delta debugging will then simplify the input as long as the "failure" (and hence the maximum fitness obtained) persists.
###Code
class Repairer(Repairer):
def test_reduce(self, source_lines: List[str], original_fitness: float) -> None:
"""Test function for delta debugging."""
try:
source = "\n".join(source_lines)
tree = ast.parse(source)
fitness = self.fitness(tree)
assert fitness < original_fitness
except AssertionError:
raise
except SyntaxError:
raise
except IndentationError:
raise
except Exception:
# traceback.print_exc() # Uncomment to see internal errors
raise
###Output
_____no_output_____
###Markdown
End of Excursion

Repairer in Action

Let us go and apply `Repairer` in practice. We initialize it with `middle_debugger`, which has (still) collected the passing and failing runs for `middle_test()`. We also set `log` for some diagnostics along the way.
###Code
repairer = Repairer(middle_debugger, log=True)
###Output
Target code to be repaired:
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
[34mif[39;49;00m y < z:
[34mif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > z:
[34mreturn[39;49;00m x
[34mreturn[39;49;00m z
###Markdown
We now invoke `repair()` to evolve our population. After a few iterations, we find a best tree with perfect fitness.
###Code
best_tree, fitness = repairer.repair()
print_content(astor.to_source(best_tree), '.py')
fitness
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
Again, we have a perfect solution. Here, we did not even need to simplify the code in the last iteration, as our `fitness_key()` function favors smaller implementations.

Removing HTML Markup

Let us apply `Repairer` on our other ongoing example, namely `remove_html_markup()`.
###Code
def remove_html_markup(s): # type: ignore
tag = False
quote = False
out = ""
for c in s:
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif c == '"' or c == "'" and tag:
quote = not quote
elif not tag:
out = out + c
return out
def remove_html_markup_tree() -> ast.AST:
return ast.parse(inspect.getsource(remove_html_markup))
###Output
_____no_output_____
###Markdown
To run `Repairer` on `remove_html_markup()`, we need a test and a test suite. `remove_html_markup_test()` raises an exception if applying `remove_html_markup()` on the given `html` string does not yield the `plain` string.
###Code
def remove_html_markup_test(html: str, plain: str) -> None:
outcome = remove_html_markup(html)
assert outcome == plain, \
f"Got {repr(outcome)}, expected {repr(plain)}"
###Output
_____no_output_____
###Markdown
Now for the test suite. We use a simple fuzzing scheme to create dozens of passing and failing test cases in `REMOVE_HTML_PASSING_TESTCASES` and `REMOVE_HTML_FAILING_TESTCASES`, respectively.

Excursion: Creating HTML Test Cases
###Code
def random_string(length: int = 5, start: int = ord(' '), end: int = ord('~')) -> str:
return "".join(chr(random.randrange(start, end + 1)) for i in range(length))
random_string()
def random_id(length: int = 2) -> str:
return random_string(start=ord('a'), end=ord('z'))
random_id()
def random_plain() -> str:
return random_string().replace('<', '').replace('>', '')
def random_string_noquotes() -> str:
return random_string().replace('"', '').replace("'", '')
def random_html(depth: int = 0) -> Tuple[str, str]:
prefix = random_plain()
tag = random_id()
if depth > 0:
html, plain = random_html(depth - 1)
else:
html = plain = random_plain()
attr = random_id()
value = '"' + random_string_noquotes() + '"'
postfix = random_plain()
return f'{prefix}<{tag} {attr}={value}>{html}</{tag}>{postfix}', \
prefix + plain + postfix
random_html()
def remove_html_testcase(expected: bool = True) -> Tuple[str, str]:
while True:
html, plain = random_html()
outcome = (remove_html_markup(html) == plain)
if outcome == expected:
return html, plain
REMOVE_HTML_TESTS = 100
REMOVE_HTML_PASSING_TESTCASES = \
[remove_html_testcase(True) for i in range(REMOVE_HTML_TESTS)]
REMOVE_HTML_FAILING_TESTCASES = \
[remove_html_testcase(False) for i in range(REMOVE_HTML_TESTS)]
###Output
_____no_output_____
###Markdown
End of Excursion Here is a passing test case:
###Code
REMOVE_HTML_PASSING_TESTCASES[0]
html, plain = REMOVE_HTML_PASSING_TESTCASES[0]
remove_html_markup_test(html, plain)
###Output
_____no_output_____
###Markdown
Here is a failing test case (containing a double quote in the plain text):
###Code
REMOVE_HTML_FAILING_TESTCASES[0]
with ExpectError():
html, plain = REMOVE_HTML_FAILING_TESTCASES[0]
remove_html_markup_test(html, plain)
###Output
Traceback (most recent call last):
File "<ipython-input-1-bfe5da826454>", line 3, in <module>
remove_html_markup_test(html, plain)
File "<ipython-input-1-cc247e8e35aa>", line 4, in remove_html_markup_test
f"Got {repr(outcome)}, expected {repr(plain)}"
AssertionError: Got '3AGe7!%H</qcguk>6azh_', expected '3AGe7"!%H6azh_' (expected)
###Markdown
We run our tests, collecting the outcomes in `html_debugger`.
###Code
html_debugger = OchiaiDebugger()
for html, plain in (REMOVE_HTML_PASSING_TESTCASES +
REMOVE_HTML_FAILING_TESTCASES):
with html_debugger:
remove_html_markup_test(html, plain)
###Output
_____no_output_____
###Markdown
The suspiciousness distribution will not be of much help here – pretty much all lines in `remove_html_markup()` have the same suspiciousness.
###Code
html_debugger
###Output
_____no_output_____
###Markdown
Let us create our repairer and run it.
###Code
html_repairer = Repairer(html_debugger, log=True)
best_tree, fitness = html_repairer.repair(iterations=20)
# docassert
assert fitness < 1.0
###Output
_____no_output_____
###Markdown
We see that the "best" code is still our original code, with no changes. And we can set `iterations` to 50, 100, 200... – our `Repairer` won't be able to repair it.
###Code
quiz("Why couldn't `Repairer()` repair `remove_html_markup()`?",
[
"The population is too small!",
"The suspiciousness is too evenly distributed!",
"We need more test cases!",
"We need more iterations!",
"There is no statement in the source with a correct condition!",
"The population is too big!",
], '5242880 >> 20')
###Output
_____no_output_____
###Markdown
You can explore all of the hypotheses above by changing the appropriate parameters, but you won't be able to change the outcome. The problem is that, unlike `middle()`, there is no statement (or combination thereof) in `remove_html_markup()` that could be used to make the failure go away. For this, we need to mutate another aspect of the code, which we will explore in the next section.

Mutating Conditions

The `Repairer` class is very configurable. The individual steps in automated repair can all be replaced by providing your own classes in the keyword arguments of its `__init__()` constructor:

* To change fault localization, pass a different `debugger` that is a subclass of `RankingDebugger`.
* To change the mutation operator, set `mutator_class` to a subclass of `StatementMutator`.
* To change the crossover operator, set `crossover_class` to a subclass of `CrossoverOperator`.
* To change the reduction algorithm, set `reducer_class` to a subclass of `Reducer`.

In this section, we will explore how to extend the mutation operator such that it can mutate _conditions_ of control constructs such as `if`, `while`, or `for`. To this end, we introduce a new class `ConditionMutator` subclassing `StatementMutator`.

Collecting Conditions

Let us start with a few simple supporting functions. The function `all_conditions()` retrieves all control conditions from an AST.
###Code
def all_conditions(trees: Union[ast.AST, List[ast.AST]],
tp: Optional[Type] = None) -> List[ast.expr]:
"""
Return all conditions from the AST (or AST list) `trees`.
If `tp` is given, return only elements of that type.
"""
if not isinstance(trees, list):
assert isinstance(trees, ast.AST)
trees = [trees]
visitor = ConditionVisitor()
for tree in trees:
visitor.visit(tree)
conditions = visitor.conditions
if tp is not None:
conditions = [c for c in conditions if isinstance(c, tp)]
return conditions
###Output
_____no_output_____
###Markdown
`all_conditions()` uses a `ConditionVisitor` class to walk the tree and collect the conditions:
###Code
class ConditionVisitor(NodeVisitor):
def __init__(self) -> None:
self.conditions: List[ast.expr] = []
self.conditions_seen: Set[str] = set()
super().__init__()
def add_conditions(self, node: ast.AST, attr: str) -> None:
elems = getattr(node, attr, [])
if not isinstance(elems, list):
elems = [elems]
elems = cast(List[ast.expr], elems)
for elem in elems:
elem_str = astor.to_source(elem)
if elem_str not in self.conditions_seen:
self.conditions.append(elem)
self.conditions_seen.add(elem_str)
def visit_BoolOp(self, node: ast.BoolOp) -> ast.AST:
self.add_conditions(node, 'values')
return super().generic_visit(node)
def visit_UnaryOp(self, node: ast.UnaryOp) -> ast.AST:
if isinstance(node.op, ast.Not):
self.add_conditions(node, 'operand')
return super().generic_visit(node)
def generic_visit(self, node: ast.AST) -> ast.AST:
if hasattr(node, 'test'):
self.add_conditions(node, 'test')
return super().generic_visit(node)
###Output
_____no_output_____
###Markdown
Here are all the conditions in `remove_html_markup()`. This is some material to construct new conditions from.
###Code
[astor.to_source(cond).strip()
for cond in all_conditions(remove_html_markup_tree())]
###Output
_____no_output_____
###Markdown
Mutating Conditions

Here comes our `ConditionMutator` class. We subclass from `StatementMutator` and set an attribute `self.conditions` containing all the conditions in the source. The method `choose_condition()` randomly picks a condition.
###Code
class ConditionMutator(StatementMutator):
"""Mutate conditions in an AST"""
def __init__(self, *args: Any, **kwargs: Any) -> None:
"""Constructor. Arguments are as with `StatementMutator` constructor."""
super().__init__(*args, **kwargs)
self.conditions = all_conditions(self.source)
if self.log:
print("Found conditions",
[astor.to_source(cond).strip()
for cond in self.conditions])
def choose_condition(self) -> ast.expr:
"""Return a random condition from source."""
return copy.deepcopy(random.choice(self.conditions))
###Output
_____no_output_____
###Markdown
The actual mutation takes place in the `swap()` method. If the node to be replaced has a `test` attribute (i.e. a controlling predicate), then we pick a random condition `cond` from the source and randomly choose from:

* **set**: We change `test` to `cond`.
* **not**: We invert `test`.
* **and**: We replace `test` by `cond and test`.
* **or**: We replace `test` by `cond or test`.

Over time, this might lead to operators propagating across the population.
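To make the four choices concrete, here is a hand-worked sketch of what they would do to an existing condition `tag` when the randomly picked source condition is `not quote`. The names are illustrative; in the actual mutator, both picks are random:

```python
test = "tag"          # existing controlling condition (hypothetical)
cond = "not quote"    # condition picked from the source (hypothetical)

mutations = {
    'set': cond,                      # replace the condition entirely
    'not': f"not ({test})",           # negate the existing condition
    'and': f"({cond}) and ({test})",  # require both conditions
    'or':  f"({cond}) or ({test})",   # accept either condition
}
for choice, new_test in mutations.items():
    print(f"{choice:>3}: if {new_test}: ...")
```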
###Code
class ConditionMutator(ConditionMutator):
def choose_bool_op(self) -> str:
return random.choice(['set', 'not', 'and', 'or'])
def swap(self, node: ast.AST) -> ast.AST:
"""Replace `node` condition by a condition from `source`"""
if not hasattr(node, 'test'):
return super().swap(node)
node = cast(ast.If, node)
cond = self.choose_condition()
new_test = None
choice = self.choose_bool_op()
if choice == 'set':
new_test = cond
elif choice == 'not':
new_test = ast.UnaryOp(op=ast.Not(), operand=node.test)
elif choice == 'and':
new_test = ast.BoolOp(op=ast.And(), values=[cond, node.test])
elif choice == 'or':
new_test = ast.BoolOp(op=ast.Or(), values=[cond, node.test])
else:
raise ValueError("Unknown boolean operand")
if new_test:
# ast.copy_location(new_test, node)
node.test = new_test
return node
###Output
_____no_output_____
###Markdown
We can use the mutator just like `StatementMutator`, except that some of the mutations will also include new conditions:
###Code
mutator = ConditionMutator(source=all_statements(remove_html_markup_tree()),
log=True)
for i in range(10):
new_tree = mutator.mutate(remove_html_markup_tree())
###Output
9:insert: "if c == '>' and not ..." becomes "if c == '>' and not ..."; 'quote = not quote'
3:insert: 'quote = False' becomes 'quote = False'; 'out = out + c'
8:insert: 'tag = True' becomes 'if c == \'"\' or c == ...'
12:insert: 'quote = not quote' becomes 'quote = not quote'; 'tag = True'
10:delete: 'tag = False' becomes 'pass'
12:insert: 'quote = not quote' becomes "if c == '>' and not ..."
3:insert: 'quote = False' becomes 'quote = False'; "out = ''"
14:swap: 'out = out + c' becomes 'quote = False'
12:insert: 'quote = not quote' becomes 'for c in s: quote = ...'
3:delete: 'quote = False' becomes 'pass'
###Markdown
Let us put our new mutator to action, again in a `Repairer()`. To activate it, all we need to do is to pass it as `mutator_class` keyword argument.
###Code
condition_repairer = Repairer(html_debugger,
mutator_class=ConditionMutator,
log=2)
###Output
Target code to be repaired:
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote:
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote:
tag = [34mFalse[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag:
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mreturn[39;49;00m out
###Markdown
We might need more iterations for this one. Let us see...
###Code
best_tree, fitness = condition_repairer.repair(iterations=200)
repaired_source = astor.to_source(best_tree)
print_content(repaired_source, '.py')
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
Success again! We have automatically repaired `remove_html_markup()` – the resulting code passes all tests, including those that were previously failing. Again, we can present the fix as a patch:
###Code
original_source = astor.to_source(remove_html_markup_tree())
for patch in diff(original_source, repaired_source):
print_patch(patch)
###Output
@@ -[34m206[39;49;00m,[34m51[39;49;00m +[34m206[39;49;00m,[34m39[39;49;00m @@
lse
- [34melif[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag:
+ [34melif[39;49;00m tag [35mand[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m:
###Markdown
However, a closer look at the patch may raise some doubts.
###Code
quiz("Is this actually the best solution?",
[
"Yes, sure, of course. Why?",
"Err - what happened to single quotes?"
], 1 << 1)
###Output
_____no_output_____
###Markdown
Indeed – our solution does not seem to handle single quotes anymore. Why is that so?
###Code
quiz("Why aren't single quotes handled in the solution?",
[
"Because they're not important. "
"I mean, y'know, who uses 'em anyway?",
"Because they are not part of our tests? "
"Let me look up how they are constructed..."
], 1 << 1)
###Output
_____no_output_____
###Markdown
Correct! Our test cases do not include single quotes – at least not in the interior of HTML tags – and thus, automatic repair did not care to preserve their handling. How can we fix this? An easy way is to include an appropriate test case in our set – a test case that passes with the original `remove_html_markup()`, yet fails with the "repaired" `remove_html_markup()` shown above.
###Code
with html_debugger:
remove_html_markup_test("<foo quote='>abc'>me</foo>", "me")
###Output
_____no_output_____
###Markdown
Let us repeat the repair with the extended test set:
###Code
best_tree, fitness = condition_repairer.repair(iterations=200)
###Output
Evolving population: iteration 64/200 fitness = 0.99
New best code (fitness = 0.99):
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote:
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote:
tag = [34mFalse[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m:
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mreturn[39;49;00m out
Evolving population: iteration 116/200 fitness = 0.99
New best code (fitness = 0.99):
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m:
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote:
tag = [34mFalse[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m:
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mreturn[39;49;00m out
Evolving population: iteration 139/200 fitness = 1.0
New best code (fitness = 1.0):
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m:
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote:
tag = [34mFalse[39;49;00m
[34melif[39;49;00m tag [35mand[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mreturn[39;49;00m out
Reduced code (fitness = 1.0):
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m:
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote:
tag = [34mFalse[39;49;00m
[34melif[39;49;00m tag [35mand[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mreturn[39;49;00m out
###Markdown
Here is the final tree:
###Code
print_content(astor.to_source(best_tree), '.py')
###Output
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m:
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote:
tag = [34mFalse[39;49;00m
[34melif[39;49;00m tag [35mand[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mreturn[39;49;00m out
###Markdown
And here is its fitness:
###Code
fitness
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
The revised candidate now passes _all_ tests (including the tricky quote test we added last). Its condition now properly checks for `tag` _and_ both quotes. (The `tag` inside the parentheses is still redundant, but so be it.) From this example, we can learn a few lessons about the possibilities and risks of automated repair:

* First, automatic repair is highly dependent on the quality of the checking tests. The risk is that the repair may overspecialize towards the test.
* Second, when based on "plastic surgery", automated repair is highly dependent on the sources that program fragments are chosen from. If there is a hint of a solution somewhere in the code, there is a chance that automated repair will catch it up.
* Third, automatic repair is a deeply heuristic approach. Its behavior will vary widely with any change to the parameters (and the underlying random number generators).
* Fourth, automatic repair can take a long time. The examples we have in this chapter take less than a minute to compute, and neither Python nor our implementation is exactly fast. But as the search space grows, automated repair will take much longer.

On the other hand, even an incomplete automated repair candidate can be much better than nothing at all – it may provide all the essential ingredients (such as the location or the involved variables) for a successful fix. When users of automated repair techniques are aware of its limitations and its assumptions, there is lots of potential in automated repair. Enjoy!

Limitations

The `Repairer` class is tested on our example programs, but not much more. Things that do not work include:

* Functions with inner functions are not repaired.

Synopsis

This chapter provides tools and techniques for automated repair of program code. The `Repairer` class takes a `RankingDebugger` debugger as input (such as `OchiaiDebugger` from the [chapter on statistical debugging](StatisticalDebugger.ipynb)). A typical setup looks like this:

```python
from debuggingbook.StatisticalDebugger import OchiaiDebugger

debugger = OchiaiDebugger()
for inputs in TESTCASES:
    with debugger:
        test_foo(inputs)
...
repairer = Repairer(debugger)
```

Here, `test_foo()` is a function that raises an exception if the tested function `foo()` fails. If `foo()` passes, `test_foo()` should not raise an exception. The `repair()` method of a `Repairer` searches for a repair of the code covered in the debugger (except for methods whose name starts or ends in `test`, such that `foo()`, not `test_foo()` is repaired). `repair()` returns the best fix candidate as a pair `(tree, fitness)` where `tree` is a [Python abstract syntax tree](http://docs.python.org/3/library/ast) (AST) of the fix candidate, and `fitness` is the fitness of the candidate (a value between 0 and 1). A `fitness` of 1.0 means that the candidate passed all tests. A typical usage looks like this:

```python
import astor

tree, fitness = repairer.repair()
print(astor.to_source(tree), fitness)
```

Here is a complete example for the `middle()` program. This is the original source code of `middle()`:
###Code
# ignore
print_content(middle_source, '.py')
###Output
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
[34mif[39;49;00m y < z:
[34mif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
[34melse[39;49;00m:
[34mif[39;49;00m x > y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > z:
[34mreturn[39;49;00m x
[34mreturn[39;49;00m z
###Markdown
We set up a function `middle_test()` that tests it. The `middle_debugger` collects testcases and outcomes:
###Code
middle_debugger = OchiaiDebugger()
for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
with middle_debugger:
middle_test(x, y, z)
###Output
_____no_output_____
###Markdown
The repairer is instantiated with the debugger used (`middle_debugger`):
###Code
middle_repairer = Repairer(middle_debugger)
###Output
_____no_output_____
###Markdown
The `repair()` method of the repairer attempts to repair the function invoked by the test (`middle()`).
###Code
tree, fitness = middle_repairer.repair()
###Output
_____no_output_____
###Markdown
The returned AST `tree` can be output via `astor.to_source()`:
###Code
print(astor.to_source(tree))
###Output
def middle(x, y, z):
if y < z:
if x < z:
if x < y:
return y
else:
return x
elif x > y:
return y
elif x > z:
return x
return z
###Markdown
The `fitness` value shows how well the repaired program fits the tests. A fitness value of 1.0 shows that the repaired program satisfies all tests.
###Code
fitness
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
Hence, the above program indeed is a perfect repair in the sense that all previously failing tests now pass – our repair was successful. Here are the classes defined in this chapter. A `Repairer` repairs a program, using a `StatementMutator` and a `CrossoverOperator` to evolve a population of candidates.
###Code
# ignore
from ClassDiagram import display_class_hierarchy
# ignore
display_class_hierarchy([Repairer, ConditionMutator, CrossoverOperator],
abstract_classes=[
NodeVisitor,
NodeTransformer
],
public_methods=[
Repairer.__init__,
Repairer.repair,
StatementMutator.__init__,
StatementMutator.mutate,
ConditionMutator.__init__,
CrossoverOperator.__init__,
CrossoverOperator.crossover,
],
project='debuggingbook')
###Output
_____no_output_____
###Markdown
Lessons Learned

* Automated repair based on genetic optimization uses five ingredients:
  1. A _test suite_ to determine passing and failing tests
  2. _Defect localization_ (typically obtained from [statistical debugging](StatisticalDebugger.ipynb) with the test suite) to determine potential locations to be fixed
  3. _Random code mutations_ and _crossover operations_ to create and evolve a population of fix candidates
  4. A _fitness function_ and a _selection strategy_ to determine the part of the population that should be evolved further
  5. A _reducer_ such as [delta debugging](DeltaDebugger.ipynb) to simplify the final candidate with the highest fitness.
* The result of automated repair is a _fix candidate_ with the highest fitness for the given tests.
* A _fix candidate_ is not guaranteed to be correct or optimal, but gives important hints on how to fix the program.
* All of the above ingredients offer plenty of settings and alternatives to experiment with.

Background

The seminal work in automated repair is [GenProg](https://squareslab.github.io/genprog-code/) \cite{LeGoues2012}, which heavily inspired our `Repairer` implementation. Major differences between GenProg and `Repairer` include:

* GenProg includes its own defect localization (which is also dynamically updated), whereas `Repairer` builds on earlier statistical debugging.
* GenProg can apply multiple mutations on programs (or none at all), whereas `Repairer` applies exactly one mutation.
* The `StatementMutator` used by `Repairer` includes various special cases for program structures (`if`, `for`, `while`...), whereas GenProg operates on statements only.
* GenProg has been tested on large production programs.

While GenProg is _the_ seminal work in the area (and arguably the most important software engineering research contribution of the 2010s), there have been a number of important extensions of automated repair. These include:

* *AutoFix* \cite{Pei2014} leverages _program contracts_ (pre- and postconditions) to generate tests and assertions automatically. Not only do such [assertions](Assertions.ipynb) help in fault localization, they also allow for much better validation of fix candidates.
* *SemFix* \cite{Nguyen2013} and its successor *[Angelix](http://angelix.io)* \cite{Mechtaev2016} introduce automated program repair based on _symbolic analysis_ rather than genetic optimization. This allows them to leverage program semantics, which GenProg does not consider.

To learn more about automated program repair, see [program-repair.org](http://program-repair.org), the community page dedicated to research in program repair.

Exercises

Exercise 1: Automated Repair Parameters

Automated repair is influenced by a large number of design choices – the size of the population, the number of iterations, the genetic optimization strategy, and more. How do changes to these design choices affect its effectiveness?

* Consider the constants defined in this chapter (such as `POPULATION_SIZE` or `WEIGHT_PASSING` vs. `WEIGHT_FAILING`). How do changes affect the effectiveness of automated repair?
* As an effectiveness metric, consider the number of iterations it takes to produce a fix candidate.
* Since genetic optimization is a random algorithm, you need to determine effectiveness averages over a large number of runs (say, 100).
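As a starting point for such experiments, one could count fitness evaluations as a proxy for repair effort. The following helper is a hypothetical sketch, not part of the chapter's code:

```python
class CountingRepairer(Repairer):
    """Repairer that counts fitness() calls (cached or not) as an effort proxy."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.fitness_evaluations = 0

    def fitness(self, tree):
        self.fitness_evaluations += 1
        return super().fitness(tree)

# Usage sketch (may take a while to run):
# efforts = []
# for seed in range(10):
#     random.seed(seed)
#     repairer = CountingRepairer(middle_debugger)
#     repairer.repair()
#     efforts.append(repairer.fitness_evaluations)
```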
Exercise 2: Elitism

[_Elitism_](https://en.wikipedia.org/wiki/Genetic_algorithm#Elitism) (also known as _elitist selection_) is a variant of genetic selection in which a small fraction of the fittest candidates of the last population are included unchanged in the offspring.

* Implement elitist selection by subclassing the `evolve()` method. Experiment with various fractions (5%, 10%, 25%) of "elites" and see how this improves results.

Exercise 3: Evolving Values

Following the steps of `ConditionMutator`, implement a `ValueMutator` class that replaces one constant value by another one found in the source (say, `0` by `1` or `True` by `False`).

For validation, consider the following failure in the `square_root()` function from the [chapter on assertions](Assertions.ipynb):
###Code
from Assertions import square_root # minor dependency
with ExpectError():
square_root_of_zero = square_root(0)
###Output
Traceback (most recent call last):
File "<ipython-input-1-751aff5e3a1c>", line 2, in <module>
square_root_of_zero = square_root(0)
File "Assertions.ipynb", line 61, in square_root
guess = (approx + x / approx) / 2
ZeroDivisionError: float division by zero (expected)
###Markdown
Can your `ValueMutator` automatically fix this failure?

**Solution.** Your solution will be effective if it also includes named constants such as `None`.
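One possible sketch of such a `ValueMutator`, following the structure of `ConditionMutator`, could look as follows. This is an illustrative assumption, not the chapter's reference solution; it collects all constants (including `True`, `False`, and `None`) from the source statements and swaps one of them into the mutated statement:

```python
class ValueMutator(StatementMutator):
    """Mutate constant values in an AST (illustrative sketch)"""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Collect all constants occurring anywhere in the source statements
        self.values = [node.value
                       for stmt in self.source
                       for node in ast.walk(stmt)
                       if isinstance(node, ast.Constant)]

    def swap(self, node):
        """Replace one constant in `node` by a constant from the source"""
        constants = [n for n in ast.walk(node) if isinstance(n, ast.Constant)]
        if not constants or not self.values:
            return super().swap(node)
        target = random.choice(constants)
        target.value = random.choice(self.values)
        return node
```

Passed as `mutator_class` to a `Repairer`, such a mutator could then discover a fix like the one below.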
###Code
import math
def square_root_fixed(x): # type: ignore
assert x >= 0 # precondition
approx = 0 # <-- FIX: Change `None` to 0
guess = x / 2
while approx != guess:
approx = guess
guess = (approx + x / approx) / 2
assert math.isclose(approx * approx, x)
return approx
square_root_fixed(0)
###Output
_____no_output_____
###Markdown
Repairing Code Automatically

So far, we have discussed how to track failures and how to locate defects in code. Let us now discuss how to _repair_ defects – that is, to correct the code such that the failure no longer occurs. We will discuss how to _repair code automatically_ – by systematically searching through possible fixes and evolving the most promising candidates.
###Code
from bookutils import YouTubeVideo
YouTubeVideo("UJTf7cW0idI")
###Output
_____no_output_____
###Markdown
**Prerequisites*** Re-read the [introduction to debugging](Intro_Debugging.ipynb), notably on how to properly fix code.* We make use of automatic fault localization, as discussed in the [chapter on statistical debugging](StatisticalDebugger.ipynb).* We make extensive use of code transformations, as discussed in the [chapter on tracing executions](Tracer.ipynb).* We make use of [delta debugging](DeltaDebugger.ipynb).
###Code
import bookutils
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.Repairer import ```and then make use of the following features.This chapter provides tools and techniques for automated repair of program code. The `Repairer` class takes a `RankingDebugger` debugger as input (such as `OchiaiDebugger` from the [chapter on statistical debugging](StatisticalDebugger.ipynb). A typical setup looks like this:```pythonfrom debuggingbook.StatisticalDebugger import OchiaiDebuggerdebugger = OchiaiDebugger()for inputs in TESTCASES: with debugger: test_foo(inputs)...repairer = Repairer(debugger)```Here, `test_foo()` is a function that raises an exception if the tested function `foo()` fails. If `foo()` passes, `test_foo()` should not raise an exception.The `repair()` method of a `Repairer` searches for a repair of the code covered in the debugger (except for methods whose name starts or ends in `test`, such that `foo()`, not `test_foo()` is repaired). `repair()` returns the best fix candidate as a pair `(tree, fitness)` where `tree` is a [Python abstract syntax tree](http://docs.python.org/3/library/ast) (AST) of the fix candidate, and `fitness` is the fitness of the candidate (a value between 0 and 1). A `fitness` of 1.0 means that the candidate passed all tests. A typical usage looks like this:```pythontree, fitness = repairer.repair()print(ast.unparse(tree), fitness)```Here is a complete example for the `middle()` program. This is the original source code of `middle()`:```pythondef middle(x, y, z): type: ignore if y < z: if x < y: return y elif x < z: return y else: if x > y: return y elif x > z: return x return z```We set up a function `middle_test()` that tests it. The `middle_debugger` collects testcases and outcomes:```python>>> middle_debugger = OchiaiDebugger()>>> for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:>>> with middle_debugger:>>> middle_test(x, y, z)```The repairer is instantiated with the debugger used (`middle_debugger`):```python>>> middle_repairer = Repairer(middle_debugger)```The `repair()` method of the repairer attempts to repair the function invoked by the test (`middle()`).```python>>> tree, fitness = middle_repairer.repair()```The returned AST `tree` can be output via `ast.unparse()`:```python>>> print(ast.unparse(tree))def middle(x, y, z): if y < z: if x < y: return y elif x < z: return x elif x > y: return y elif x > z: return x return z```The `fitness` value shows how well the repaired program fits the tests. A fitness value of 1.0 shows that the repaired program satisfies all tests.```python>>> fitness1.0```Hence, the above program indeed is a perfect repair in the sense that all previously failing tests now pass – our repair was successful.Here are the classes defined in this chapter. A `Repairer` repairs a program, using a `StatementMutator` and a `CrossoverOperator` to evolve a population of candidates. Automatic Code RepairsSo far, we have discussed how to locate defects in code, how to track failures back to the defects that caused them, and how to systematically determine failure conditions. Let us now address the last step in debugging – namely, how to _automatically fix code_.Already in the [introduction to debugging](Intro_Debugging.ipynb), we have discussed how to fix code manually. Notably, we have established that a _diagnosis_ (which induces a fix) should show _causality_ (i.e., how the defect causes the failure) and _incorrectness_ (how the defect is wrong). 
Is it possible to obtain such a diagnosis automatically? In this chapter, we introduce a technique of _automatic code repair_ – that is, for a given failure, automatically determine a fix that makes the failure go away. To do so, we randomly (but systematically) _mutate_ the program code – that is, insert, change, and delete fragments – until we find a change that actually causes the failing test to pass. If this sounds like an audacious idea, that is because it is. But not only is _automated program repair_ one of the hottest topics of software research in the last decade, it is also being increasingly deployed in industry. At Facebook, for instance, every failing test report comes with an automatically generated _repair suggestion_ – a suggestion that already has been validated to work. Programmers can apply the suggestion as is or use it as a basis for their own fixes.

The middle() Function

Let us introduce our ongoing example. In the [chapter on statistical debugging](StatisticalDebugger.ipynb), we have introduced the `middle()` function – a function that returns the "middle" of three numbers `x`, `y`, and `z`:
###Code
from StatisticalDebugger import middle
# ignore
from bookutils import print_content
# ignore
import inspect
# ignore
_, first_lineno = inspect.getsourcelines(middle)
middle_source = inspect.getsource(middle)
print_content(middle_source, '.py', start_line_number=first_lineno)
###Output
708 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
709 [34mif[39;49;00m y < z:
710 [34mif[39;49;00m x < y:
711 [34mreturn[39;49;00m y
712 [34melif[39;49;00m x < z:
713 [34mreturn[39;49;00m y
714 [34melse[39;49;00m:
715 [34mif[39;49;00m x > y:
716 [34mreturn[39;49;00m y
717 [34melif[39;49;00m x > z:
718 [34mreturn[39;49;00m x
719 [34mreturn[39;49;00m z
###Markdown
In most cases, `middle()` just runs fine:
###Code
middle(4, 5, 6)
###Output
_____no_output_____
###Markdown
In some other cases, though, it does not work correctly:
###Code
middle(2, 1, 3)
###Output
_____no_output_____
###Markdown
Validated Repairs Now, if we only want a repair that fixes this one given failure, this would be very easy. All we have to do is to replace the entire body by a single statement:
###Code
def middle_sort_of_fixed(x, y, z): # type: ignore
return x
###Output
_____no_output_____
###Markdown
You will concur that the failure no longer occurs:
###Code
middle_sort_of_fixed(2, 1, 3)
###Output
_____no_output_____
###Markdown
But this, of course, is not the aim of automatic fixes, nor of fixes in general: We want our fixes not only to make the given failure go away, but we also want the resulting code to be _correct_ (which, of course, is a lot harder). Automatic repair techniques therefore assume the existence of a _test suite_ that can check whether an implementation satisfies its requirements. Better yet, one can use the test suite to gradually check _how close_ one is to perfection: A piece of code that satisfies 99% of all tests is better than one that satisfies ~33% of all tests, as `middle_sort_of_fixed()` would do (assuming the test suite evenly checks the input space).

Genetic Optimization

The common approach for automatic repair follows the principle of _genetic optimization_. Roughly speaking, genetic optimization is a _metaheuristic_ inspired by the process of _natural selection_. The idea is to _evolve_ a selection of _candidate solutions_ towards a maximum _fitness_:

1. Have a selection of _candidates_.
2. Determine the _fitness_ of each candidate.
3. Retain those candidates with the _highest fitness_.
4. Create new candidates from the retained candidates, by applying genetic operations:
    * _Mutation_ mutates some aspect of a candidate.
    * _Crossover_ creates new candidates combining features of two candidates.
5. Repeat until an optimal solution is found.

Applied for automated program repair, this means the following steps:

1. Have a _test suite_ with both failing and passing tests that helps asserting correctness of possible solutions.
2. With the test suite, use [fault localization](StatisticalDebugger.ipynb) to determine potential code locations to be fixed.
3. Systematically _mutate_ the code (by adding, changing, or deleting code) and _cross_ code to create possible fix candidates.
4. Identify the _fittest_ fix candidates – that is, those that satisfy the most tests.
5. _Evolve_ the fittest candidates until a perfect fix is found, or until time resources are depleted.

Let us illustrate these steps in the following sections.

A Test Suite

In automated repair, the larger and the more thorough the test suite, the higher the quality of the resulting fix (if any). Hence, if we want to repair `middle()` automatically, we need a good test suite – with good inputs, but also with good checks. Note that running the test suite commonly takes the most time of automated repair, so a large test suite also comes with extra cost. Let us first focus on achieving high-quality repairs. Hence, we will use the extensive test suites introduced in the [chapter on statistical debugging](StatisticalDebugger.ipynb):
###Code
from StatisticalDebugger import MIDDLE_PASSING_TESTCASES, MIDDLE_FAILING_TESTCASES
###Output
_____no_output_____
###Markdown
The `middle_test()` function fails whenever `middle()` returns an incorrect result:
###Code
def middle_test(x: int, y: int, z: int) -> None:
m = middle(x, y, z)
assert m == sorted([x, y, z])[1]
from ExpectError import ExpectError
with ExpectError():
middle_test(2, 1, 3)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_52227/3661663124.py", line 2, in <module>
middle_test(2, 1, 3)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_52227/40742806.py", line 3, in middle_test
assert m == sorted([x, y, z])[1]
AssertionError (expected)
###Markdown
Locating the Defect Our next step is to find potential defect locations – that is, those locations in the code our mutations should focus upon. Since we already do have two test suites, we can make use of [statistical debugging](StatisticalDebugger.ipynb) to identify likely faulty locations. Our `OchiaiDebugger` ranks individual code lines by how frequently they are executed in failing runs (and not in passing runs).
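As a reminder of how this ranking works, here is a small sketch of the Ochiai metric that gives the debugger its name. This is not the debugger's internal code, and the counts below are made up for illustration:

```python
import math

def ochiai(failed_executing: int, passed_executing: int, total_failed: int) -> float:
    """Suspiciousness of a line: how strongly its execution correlates with failing runs."""
    denominator = math.sqrt(total_failed * (failed_executing + passed_executing))
    return failed_executing / denominator if denominator > 0 else 0.0

# A line executed by all 10 failing runs and no passing run is maximally suspicious:
print(ochiai(failed_executing=10, passed_executing=0, total_failed=10))   # 1.0
# A line also executed by many passing runs is far less suspicious:
print(ochiai(failed_executing=10, passed_executing=90, total_failed=10))  # ~0.32
```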
###Code
from StatisticalDebugger import OchiaiDebugger, RankingDebugger
middle_debugger = OchiaiDebugger()
for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
with middle_debugger:
middle_test(x, y, z)
###Output
_____no_output_____
###Markdown
We see that the upper half of the `middle()` code is definitely more suspicious:
###Code
middle_debugger
###Output
_____no_output_____
###Markdown
The most suspicious line is:
###Code
# ignore
location = middle_debugger.rank()[0]
(func_name, lineno) = location
lines, first_lineno = inspect.getsourcelines(middle)
print(lineno, end="")
print_content(lines[lineno - first_lineno], '.py')
###Output
713 [34mreturn[39;49;00m y
###Markdown
with a suspiciousness of:
###Code
# ignore
middle_debugger.suspiciousness(location)
###Output
_____no_output_____
###Markdown
Random Code Mutations Our third step in automatic code repair is to _randomly mutate the code_. Specifically, we want to randomly _delete_, _insert_, and _replace_ statements in the program to be repaired. However, simply synthesizing code _from scratch_ is unlikely to yield anything meaningful – the number of combinations is simply far too high. Already for a three-character identifier name, we have more than 200,000 combinations:
###Code
import string
string.ascii_letters
len(string.ascii_letters + '_') * \
len(string.ascii_letters + '_' + string.digits) * \
len(string.ascii_letters + '_' + string.digits)
###Output
_____no_output_____
###Markdown
Hence, we do _not_ synthesize code from scratch, but instead _reuse_ elements from the program to be fixed, hypothesizing that "a program that contains an error in one area likely implements the correct behavior elsewhere" \cite{LeGoues2012}. This insight has been dubbed the *plastic surgery hypothesis*: content of new code can often be assembled out of fragments of code that already exist in the code base \cite{Barr2014}.

For our "plastic surgery", we do not operate on a _textual_ representation of the program, but rather on a _structural_ representation, which by construction allows us to avoid lexical and syntactical errors in the first place.

This structural representation is the _abstract syntax tree_ (AST), which we already have seen in various chapters, such as the [chapter on delta debugging](DeltaDebugger.ipynb), the [chapter on tracing](Tracer.ipynb), and extensively in the [chapter on slicing](Slicer.ipynb). The [official Python `ast` reference](http://docs.python.org/3/library/ast) is complete, but a bit brief; the documentation ["Green Tree Snakes - the missing Python AST docs"](https://greentreesnakes.readthedocs.io/en/latest/) provides an excellent introduction.

Recapitulating, an AST is a tree representation of the program, showing a hierarchical structure of the program's elements. Here is the AST for our `middle()` function.
###Code
import ast
import inspect
from bookutils import print_content, show_ast
def middle_tree() -> ast.AST:
return ast.parse(inspect.getsource(middle))
show_ast(middle_tree())
###Output
_____no_output_____
###Markdown
You see that it consists of one function definition (`FunctionDef`) with three `arguments` and two statements – one `If` and one `Return`. Each `If` subtree has three branches – one for the condition (`test`), one for the body to be executed if the condition is true (`body`), and one for the `else` case (`orelse`). The `body` and `orelse` branches again are lists of statements. An AST can also be shown as text, which is more compact, yet reveals more information. `ast.dump()` gives not only the class names of elements, but also how they are constructed – actually, the whole expression can be used to construct an AST.
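Before printing the full dump below, here is a tiny sketch that navigates the tree directly, using the branch attributes just described (the index chain assumes the tree shape shown above):

```python
# Illustrative only: access the outer `If` node of middle() and its branches
outer_if = middle_tree().body[0].body[0]         # FunctionDef -> first statement (an If)
print(ast.unparse(outer_if.test))                # the condition: y < z
print(len(outer_if.body), len(outer_if.orelse))  # one nested If in each branch
```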
###Code
print(ast.dump(middle_tree()))
###Output
Module(body=[FunctionDef(name='middle', args=arguments(posonlyargs=[], args=[arg(arg='x'), arg(arg='y'), arg(arg='z')], kwonlyargs=[], kw_defaults=[], defaults=[]), body=[If(test=Compare(left=Name(id='y', ctx=Load()), ops=[Lt()], comparators=[Name(id='z', ctx=Load())]), body=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Lt()], comparators=[Name(id='y', ctx=Load())]), body=[Return(value=Name(id='y', ctx=Load()))], orelse=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Lt()], comparators=[Name(id='z', ctx=Load())]), body=[Return(value=Name(id='y', ctx=Load()))], orelse=[])])], orelse=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Gt()], comparators=[Name(id='y', ctx=Load())]), body=[Return(value=Name(id='y', ctx=Load()))], orelse=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Gt()], comparators=[Name(id='z', ctx=Load())]), body=[Return(value=Name(id='x', ctx=Load()))], orelse=[])])]), Return(value=Name(id='z', ctx=Load()))], decorator_list=[])], type_ignores=[])
###Markdown
This is the path to the first `return` statement:
###Code
ast.dump(middle_tree().body[0].body[0].body[0].body[0]) # type: ignore
###Output
_____no_output_____
###Markdown
Picking Statements For our mutation operators, we want to use statements from the program itself. Hence, we need a means to find those very statements. The `StatementVisitor` class iterates through an AST, adding all statements it finds in function definitions to its `statements` list. To do so, it subclasses the Python `ast` `NodeVisitor` class, described in the [official Python `ast` reference](http://docs.python.org/3/library/ast).
###Code
from ast import NodeVisitor
# ignore
from typing import Any, Callable, Optional, Type, Tuple
from typing import Dict, Union, Set, List, cast
class StatementVisitor(NodeVisitor):
"""Visit all statements within function defs in an AST"""
def __init__(self) -> None:
self.statements: List[Tuple[ast.AST, str]] = []
self.func_name = ""
self.statements_seen: Set[Tuple[ast.AST, str]] = set()
super().__init__()
def add_statements(self, node: ast.AST, attr: str) -> None:
elems: List[ast.AST] = getattr(node, attr, [])
if not isinstance(elems, list):
elems = [elems] # type: ignore
for elem in elems:
stmt = (elem, self.func_name)
if stmt in self.statements_seen:
continue
self.statements.append(stmt)
self.statements_seen.add(stmt)
def visit_node(self, node: ast.AST) -> None:
# Any node other than the ones listed below
self.add_statements(node, 'body')
self.add_statements(node, 'orelse')
def visit_Module(self, node: ast.Module) -> None:
# Module children are defs, classes and globals - don't add
super().generic_visit(node)
def visit_ClassDef(self, node: ast.ClassDef) -> None:
# Class children are defs and globals - don't add
super().generic_visit(node)
def generic_visit(self, node: ast.AST) -> None:
self.visit_node(node)
super().generic_visit(node)
def visit_FunctionDef(self,
node: Union[ast.FunctionDef, ast.AsyncFunctionDef]) -> None:
if not self.func_name:
self.func_name = node.name
self.visit_node(node)
super().generic_visit(node)
self.func_name = ""
def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None:
return self.visit_FunctionDef(node)
###Output
_____no_output_____
###Markdown
The function `all_statements()` returns all statements in the given AST `tree`. If an `ast` class `tp` is given, it only returns instances of that class.
###Code
def all_statements_and_functions(tree: ast.AST,
tp: Optional[Type] = None) -> \
List[Tuple[ast.AST, str]]:
"""
Return a list of pairs (`statement`, `function`) for all statements in `tree`.
If `tp` is given, return only statements of that class.
"""
visitor = StatementVisitor()
visitor.visit(tree)
statements = visitor.statements
if tp is not None:
statements = [s for s in statements if isinstance(s[0], tp)]
return statements
def all_statements(tree: ast.AST, tp: Optional[Type] = None) -> List[ast.AST]:
"""
Return a list of all statements in `tree`.
If `tp` is given, return only statements of that class.
"""
return [stmt for stmt, func_name in all_statements_and_functions(tree, tp)]
###Output
_____no_output_____
###Markdown
Here are all the `return` statements in `middle()`:
###Code
all_statements(middle_tree(), ast.Return)
all_statements_and_functions(middle_tree(), ast.If)
###Output
_____no_output_____
###Markdown
We can randomly pick an element:
###Code
import random
random_node = random.choice(all_statements(middle_tree()))
ast.unparse(random_node)
###Output
_____no_output_____
###Markdown
Mutating StatementsThe main part in mutation, however, is to actually mutate the code of the program under test. To this end, we introduce a `StatementMutator` class – a subclass of `NodeTransformer`, described in the [official Python `ast` reference](http://docs.python.org/3/library/ast). The constructor provides various keyword arguments to configure the mutator.
###Code
from ast import NodeTransformer
import copy
class StatementMutator(NodeTransformer):
"""Mutate statements in an AST for automated repair."""
def __init__(self,
suspiciousness_func:
Optional[Callable[[Tuple[Callable, int]], float]] = None,
source: Optional[List[ast.AST]] = None,
log: bool = False) -> None:
"""
Constructor.
`suspiciousness_func` is a function that takes a location
(function, line_number) and returns a suspiciousness value
between 0 and 1.0. If not given, all locations get the same
suspiciousness of 1.0.
`source` is a list of statements to choose from.
"""
super().__init__()
self.log = log
if suspiciousness_func is None:
def suspiciousness_func(location: Tuple[Callable, int]) -> float:
return 1.0
assert suspiciousness_func is not None
self.suspiciousness_func: Callable = suspiciousness_func
if source is None:
source = []
self.source = source
if self.log > 1:
for i, node in enumerate(self.source):
print(f"Source for repairs #{i}:")
print_content(ast.unparse(node), '.py')
print()
print()
self.mutations = 0
###Output
_____no_output_____
###Markdown
Choosing Suspicious Statements to MutateWe start with deciding which AST nodes to mutate. The method `node_suspiciousness()` returns the suspiciousness for a given node, by invoking the suspiciousness function `suspiciousness_func` given during initialization.
###Code
import warnings
class StatementMutator(StatementMutator):
def node_suspiciousness(self, stmt: ast.AST, func_name: str) -> float:
if not hasattr(stmt, 'lineno'):
warnings.warn(f"{self.format_node(stmt)}: Expected line number")
return 0.0
suspiciousness = self.suspiciousness_func((func_name, stmt.lineno))
if suspiciousness is None: # not executed
return 0.0
return suspiciousness
def format_node(self, node: ast.AST) -> str:
...
###Output
_____no_output_____
###Markdown
The method `node_to_be_mutated()` picks a node (statement) to be mutated. It determines the suspiciousness of all statements, and invokes `random.choices()`, using the suspiciousness as weight. Unsuspicious statements (with zero weight) will not be chosen.
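To illustrate the effect of the weights, here is a tiny standalone example; the statement names and suspiciousness values are made up:

```python
import random

stmts = ['stmt_a', 'stmt_b', 'stmt_c']
weights = [0.1, 0.9, 0.0]   # suspiciousness values; 'stmt_c' was never executed in failing runs
picks = [random.choices(stmts, weights=weights)[0] for _ in range(1000)]
print(picks.count('stmt_a'), picks.count('stmt_b'), picks.count('stmt_c'))  # roughly 100 / 900 / 0
```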
###Code
class StatementMutator(StatementMutator):
def node_to_be_mutated(self, tree: ast.AST) -> ast.AST:
statements = all_statements_and_functions(tree)
assert len(statements) > 0, "No statements"
weights = [self.node_suspiciousness(stmt, func_name)
for stmt, func_name in statements]
stmts = [stmt for stmt, func_name in statements]
if self.log > 1:
print("Weights:")
for i, stmt in enumerate(statements):
node, func_name = stmt
print(f"{weights[i]:.2} {self.format_node(node)}")
if sum(weights) == 0.0:
# No suspicious line
return random.choice(stmts)
else:
return random.choices(stmts, weights=weights)[0]
###Output
_____no_output_____
###Markdown
Choosing a Mutation Method

The method `visit()` is invoked on all nodes. For nodes marked with a `mutate_me` attribute, it randomly chooses a mutation method (`choose_op()`) and then invokes it on the node.

According to the rules of `NodeTransformer`, the mutation method can return

* a new node or a list of nodes, replacing the current node;
* `None`, deleting it; or
* the node itself, keeping things as they are.
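These conventions are easy to try out in isolation. Here is a tiny, self-contained `NodeTransformer` (not part of the repair code) that replaces every `return` statement with `pass` by returning a new node:

```python
import ast

class ReturnToPass(ast.NodeTransformer):
    """Toy transformer: returning a new node from visit_Return() replaces the old one."""
    def visit_Return(self, node: ast.Return) -> ast.AST:
        return ast.copy_location(ast.Pass(), node)   # returning None here would delete the node

demo_tree = ast.parse("def f(x):\n    return x")
demo_tree = ast.fix_missing_locations(ReturnToPass().visit(demo_tree))
print(ast.unparse(demo_tree))   # def f(x): pass
```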
###Code
import re
RE_SPACE = re.compile(r'[ \t\n]+')
class StatementMutator(StatementMutator):
def choose_op(self) -> Callable:
return random.choice([self.insert, self.swap, self.delete])
def visit(self, node: ast.AST) -> ast.AST:
super().visit(node) # Visits (and transforms?) children
if not node.mutate_me: # type: ignore
return node
op = self.choose_op()
new_node = op(node)
self.mutations += 1
if self.log:
print(f"{node.lineno:4}:{op.__name__ + ':':7} "
f"{self.format_node(node)} "
f"becomes {self.format_node(new_node)}")
return new_node
###Output
_____no_output_____
###Markdown
Swapping Statements

Our first mutator is `swap()`, which replaces the current node `NODE` by a random node found in `source` (using a newly defined `choose_statement()`).

As a rule of thumb, we try to avoid inserting entire subtrees with all attached statements; and try to respect only the first line of a node. If the new node has the form

```python
if P:
    BODY
```

we thus only insert

```python
if P:
    pass
```

since the statements in `BODY` have a later chance to get inserted. The same holds for all constructs that have a `BODY`, i.e. `while`, `for`, `try`, `with`, and more.
###Code
class StatementMutator(StatementMutator):
def choose_statement(self) -> ast.AST:
return copy.deepcopy(random.choice(self.source))
class StatementMutator(StatementMutator):
def swap(self, node: ast.AST) -> ast.AST:
"""Replace `node` with a random node from `source`"""
new_node = self.choose_statement()
if isinstance(new_node, ast.stmt):
# The source `if P: X` is added as `if P: pass`
if hasattr(new_node, 'body'):
new_node.body = [ast.Pass()] # type: ignore
if hasattr(new_node, 'orelse'):
new_node.orelse = [] # type: ignore
if hasattr(new_node, 'finalbody'):
new_node.finalbody = [] # type: ignore
# ast.copy_location(new_node, node)
return new_node
###Output
_____no_output_____
###Markdown
Inserting Statements

Our next mutator is `insert()`, which randomly chooses some node from `source` and inserts it after the current node `NODE`. (If `NODE` is a `return` statement, then we insert the new node _before_ `NODE`.)

If the statement to be inserted has the form

```python
if P:
    BODY
```

we only insert the "header" of the `if`, resulting in

```python
if P:
    NODE
```

Again, this applies to all constructs that have a `BODY`, i.e., `while`, `for`, `try`, `with`, and more.
###Code
class StatementMutator(StatementMutator):
def insert(self, node: ast.AST) -> Union[ast.AST, List[ast.AST]]:
"""Insert a random node from `source` after `node`"""
new_node = self.choose_statement()
if isinstance(new_node, ast.stmt) and hasattr(new_node, 'body'):
# Inserting `if P: X` as `if P:`
new_node.body = [node] # type: ignore
if hasattr(new_node, 'orelse'):
new_node.orelse = [] # type: ignore
if hasattr(new_node, 'finalbody'):
new_node.finalbody = [] # type: ignore
# ast.copy_location(new_node, node)
return new_node
# Only insert before `return`, not after it
if isinstance(node, ast.Return):
if isinstance(new_node, ast.Return):
return new_node
else:
return [new_node, node]
return [node, new_node]
###Output
_____no_output_____
###Markdown
Deleting Statements

Our last mutator is `delete()`, which deletes the current node `NODE`. The standard case is to replace `NODE` by a `pass` statement.

If the statement to be deleted has the form

```python
if P:
    BODY
```

we only delete the "header" of the `if`, resulting in

```python
BODY
```

Again, this applies to all constructs that have a `BODY`, i.e., `while`, `for`, `try`, `with`, and more. If the statement to be deleted has multiple branches, a random branch is chosen (e.g., the `else` branch of an `if` statement).
###Code
class StatementMutator(StatementMutator):
def delete(self, node: ast.AST) -> None:
"""Delete `node`."""
branches = [attr for attr in ['body', 'orelse', 'finalbody']
if hasattr(node, attr) and getattr(node, attr)]
if branches:
# Replace `if P: S` by `S`
branch = random.choice(branches)
new_node = getattr(node, branch)
return new_node
if isinstance(node, ast.stmt):
# Avoid empty bodies; make this a `pass` statement
new_node = ast.Pass()
ast.copy_location(new_node, node)
return new_node
return None # Just delete
from bookutils import quiz
quiz("Why are statements replaced by `pass` rather than deleted?",
[
"Because `if P: pass` is valid Python, while `if P:` is not",
"Because in Python, bodies for `if`, `while`, etc. cannot be empty",
"Because a `pass` node makes a target for future mutations",
"Because it causes the tests to pass"
], '[3 ^ n for n in range(3)]')
###Output
_____no_output_____
###Markdown
Indeed, Python's `compile()` will fail if any of the bodies is an empty list. Also, it leaves us with a statement that can be evolved further.

Helpers

For logging purposes, we introduce a helper function `format_node()` that returns a short string representation of the node.
###Code
class StatementMutator(StatementMutator):
NODE_MAX_LENGTH = 20
def format_node(self, node: ast.AST) -> str:
"""Return a string representation for `node`."""
if node is None:
return "None"
if isinstance(node, list):
return "; ".join(self.format_node(elem) for elem in node)
s = RE_SPACE.sub(' ', ast.unparse(node)).strip()
if len(s) > self.NODE_MAX_LENGTH - len("..."):
s = s[:self.NODE_MAX_LENGTH] + "..."
return repr(s)
###Output
_____no_output_____
###Markdown
All TogetherLet us now create the main entry point, which is `mutate()`. It picks the node to be mutated and marks it with a `mutate_me` attribute. By calling `visit()`, it then sets off the `NodeTransformer` transformation.
###Code
class StatementMutator(StatementMutator):
def mutate(self, tree: ast.AST) -> ast.AST:
"""Mutate the given AST `tree` in place. Return mutated tree."""
assert isinstance(tree, ast.AST)
tree = copy.deepcopy(tree)
if not self.source:
self.source = all_statements(tree)
for node in ast.walk(tree):
node.mutate_me = False # type: ignore
node = self.node_to_be_mutated(tree)
node.mutate_me = True # type: ignore
self.mutations = 0
tree = self.visit(tree)
if self.mutations == 0:
warnings.warn("No mutations found")
ast.fix_missing_locations(tree)
return tree
###Output
_____no_output_____
###Markdown
Here are a number of transformations applied by `StatementMutator`:
###Code
mutator = StatementMutator(log=True)
for i in range(10):
new_tree = mutator.mutate(middle_tree())
###Output
9:insert: 'return y' becomes 'return y'
8:insert: 'if x > y: return y e...' becomes 'if x < y: if x > y: ...'
12:insert: 'return z' becomes 'if y < z: return z...'
3:swap: 'if x < y: return y e...' becomes 'return x'
3:swap: 'if x < y: return y e...' becomes 'return z'
3:swap: 'if x < y: return y e...' becomes 'return x'
11:swap: 'return x' becomes 'return y'
10:insert: 'if x > z: return x...' becomes 'if x > z: return x...'; 'return z'
12:delete: 'return z' becomes 'pass'
8:swap: 'if x > y: return y e...' becomes 'if y < z: pass'
###Markdown
This is the effect of the last mutator applied on `middle`:
###Code
print_content(ast.unparse(new_tree), '.py')
###Output
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
[34mif[39;49;00m y < z:
[34mif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
[34melif[39;49;00m y < z:
[34mpass[39;49;00m
[34mreturn[39;49;00m z
###Markdown
Fitness

Now that we can apply random mutations to code, let us find out how good these mutations are. Given our test suites for `middle`, we can check for a given code candidate how many of the previously passing test cases it passes, and how many of the failing test cases it passes. The more tests pass, the higher the _fitness_ of the candidate.

Not all passing tests have the same value, though. We want to prevent _regressions_ – that is, having a fix that breaks a previously passing test. The values of `WEIGHT_PASSING` and `WEIGHT_FAILING` set the relative weight (or importance) of passing vs. failing tests; we see that keeping passing tests passing is far more important than fixing failing tests.
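A quick back-of-the-envelope calculation (with illustrative ratios) shows the effect of this weighting:

```python
# Candidate 1: keeps all passing tests green, fixes none of the failing ones
print(0.99 * 1.0 + 0.01 * 0.0)   # 0.99
# Candidate 2: fixes all failing tests, but breaks half of the passing ones
print(0.99 * 0.5 + 0.01 * 1.0)   # 0.505 -- regressions are punished far more heavily
```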
###Code
WEIGHT_PASSING = 0.99
WEIGHT_FAILING = 0.01
def middle_fitness(tree: ast.AST) -> float:
"""Compute fitness of a `middle()` candidate given in `tree`"""
original_middle = middle
try:
code = compile(tree, '<fitness>', 'exec')
except ValueError:
return 0 # Compilation error
exec(code, globals())
passing_passed = 0
failing_passed = 0
# Test how many of the passing runs pass
for x, y, z in MIDDLE_PASSING_TESTCASES:
try:
middle_test(x, y, z)
passing_passed += 1
except AssertionError:
pass
passing_ratio = passing_passed / len(MIDDLE_PASSING_TESTCASES)
# Test how many of the failing runs pass
for x, y, z in MIDDLE_FAILING_TESTCASES:
try:
middle_test(x, y, z)
failing_passed += 1
except AssertionError:
pass
failing_ratio = failing_passed / len(MIDDLE_FAILING_TESTCASES)
fitness = (WEIGHT_PASSING * passing_ratio +
WEIGHT_FAILING * failing_ratio)
globals()['middle'] = original_middle
return fitness
###Output
_____no_output_____
###Markdown
Our faulty `middle()` program has a fitness of `WEIGHT_PASSING` (99%), because it passes all the passing tests (but none of the failing ones).
###Code
middle_fitness(middle_tree())
###Output
_____no_output_____
###Markdown
Our "sort of fixed" version of `middle()` gets a much lower fitness:
###Code
middle_fitness(ast.parse("def middle(x, y, z): return x"))
###Output
_____no_output_____
###Markdown
In the [chapter on statistical debugging](StatisticalDebugger.ipynb), we also defined a fixed version of `middle()`. This gets a fitness of 1.0, passing all tests. (We won't use this fixed version for automated repairs.)
###Code
from StatisticalDebugger import middle_fixed
middle_fixed_source = \
inspect.getsource(middle_fixed).replace('middle_fixed', 'middle').strip()
middle_fitness(ast.parse(middle_fixed_source))
###Output
_____no_output_____
###Markdown
Population

We now set up a _population_ of fix candidates to evolve over time. A higher population size will yield more candidates to check, but also needs more time to test; a lower population size will yield fewer candidates, but allows for more evolution steps. We choose a population size of 40 (from \cite{LeGoues2012}).
###Code
POPULATION_SIZE = 40
middle_mutator = StatementMutator()
MIDDLE_POPULATION = [middle_tree()] + \
[middle_mutator.mutate(middle_tree()) for i in range(POPULATION_SIZE - 1)]
###Output
_____no_output_____
###Markdown
We sort the fix candidates according to their fitness. This actually runs all tests on all candidates.
###Code
MIDDLE_POPULATION.sort(key=middle_fitness, reverse=True)
###Output
_____no_output_____
###Markdown
The candidate with the highest fitness is still our original (faulty) `middle()` code:
###Code
print(ast.unparse(MIDDLE_POPULATION[0]),
middle_fitness(MIDDLE_POPULATION[0]))
###Output
def middle(x, y, z):
if y < z:
if x < y:
return y
elif x < z:
return y
elif x > y:
return y
elif x > z:
return x
return z 0.99
###Markdown
At the other end of the spectrum, the candidate with the lowest fitness has some vital functionality removed:
###Code
print(ast.unparse(MIDDLE_POPULATION[-1]),
middle_fitness(MIDDLE_POPULATION[-1]))
###Output
def middle(x, y, z):
if y < z:
if x < y:
return y
elif x < z:
return y
else:
return y
return z 0.5445
###Markdown
EvolutionTo evolve our population of candidates, we fill up the population with mutations created from the population, using a `StatementMutator` as described above to create these mutations. Then we reduce the population to its original size, keeping the fittest candidates.
###Code
def evolve_middle() -> None:
global MIDDLE_POPULATION
source = all_statements(middle_tree())
mutator = StatementMutator(source=source)
n = len(MIDDLE_POPULATION)
offspring: List[ast.AST] = []
while len(offspring) < n:
parent = random.choice(MIDDLE_POPULATION)
offspring.append(mutator.mutate(parent))
MIDDLE_POPULATION += offspring
MIDDLE_POPULATION.sort(key=middle_fitness, reverse=True)
MIDDLE_POPULATION = MIDDLE_POPULATION[:n]
###Output
_____no_output_____
###Markdown
This is what happens when evolving our population for the first time; the original source is still our best candidate.
###Code
evolve_middle()
tree = MIDDLE_POPULATION[0]
print(ast.unparse(tree), middle_fitness(tree))
# docassert
assert middle_fitness(tree) < 1.0
###Output
_____no_output_____
###Markdown
However, nothing keeps us from evolving for a few generations more...
###Code
for i in range(50):
evolve_middle()
best_middle_tree = MIDDLE_POPULATION[0]
fitness = middle_fitness(best_middle_tree)
print(f"\rIteration {i:2}: fitness = {fitness} ", end="")
if fitness >= 1.0:
break
# docassert
assert middle_fitness(best_middle_tree) >= 1.0
###Output
_____no_output_____
###Markdown
Success! We find a candidate that actually passes all tests, including the failing ones. Here is the candidate:
###Code
print_content(ast.unparse(best_middle_tree), '.py', start_line_number=1)
###Output
1 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
2 [34mif[39;49;00m y < z:
3 [34mif[39;49;00m x < y:
4 [34mif[39;49;00m x < z:
5 [34mreturn[39;49;00m y
6 [34melif[39;49;00m x < z:
7 [34mreturn[39;49;00m x
8 [34melif[39;49;00m x > y:
9 [34mreturn[39;49;00m y
10 [34melse[39;49;00m:
11 [34mif[39;49;00m x > z:
12 [34mreturn[39;49;00m x
13 [34mreturn[39;49;00m z
14 [34mreturn[39;49;00m z
###Markdown
... and yes, it passes all tests:
###Code
original_middle = middle
code = compile(best_middle_tree, '<string>', 'exec')
exec(code, globals())
for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
middle_test(x, y, z)
middle = original_middle
###Output
_____no_output_____
###Markdown
As the code is already validated by hundreds of test cases, it is very valuable for the programmer. Even if the programmer decides not to use the code as is, the location gives very strong hints on which code to examine and where to apply a fix. However, a closer look at our fix candidate shows that there is some amount of redundancy – that is, superfluous statements.
###Code
quiz("Some of the lines in our fix candidate are redundant. "
"Which are these?",
[
"Line 3: `if x < y:`",
"Line 4: `if x < z:`",
"Line 5: `return y`",
"Line 13: `return z`"
], '[eval(chr(100 - x)) for x in [48, 50]]')
###Output
_____no_output_____
###Markdown
Simplifying As demonstrated in the chapter on [reducing failure-inducing inputs](DeltaDebugger.ipynb), we can use delta debugging on code to get rid of these superfluous statements. The trick for simplification is to have the test function (`test_middle_lines()`) declare a fitness of 1.0 as a "failure". Delta debugging will then simplify the input as long as the "failure" (and hence the maximum fitness obtained) persists.
###Code
from DeltaDebugger import DeltaDebugger
middle_lines = ast.unparse(best_middle_tree).strip().split('\n')
def test_middle_lines(lines: List[str]) -> None:
source = "\n".join(lines)
tree = ast.parse(source)
assert middle_fitness(tree) < 1.0 # "Fail" only while fitness is 1.0
with DeltaDebugger() as dd:
test_middle_lines(middle_lines)
reduced_lines = dd.min_args()['lines']
reduced_source = "\n".join(reduced_lines)
repaired_source = ast.unparse(ast.parse(reduced_source)) # normalize
print_content(repaired_source, '.py')
# docassert
assert len(reduced_lines) < len(middle_lines)
###Output
_____no_output_____
###Markdown
Success! Delta Debugging has eliminated the superfluous statements. We can present the difference to the original as a patch:
###Code
original_source = ast.unparse(ast.parse(middle_source)) # normalize
from ChangeDebugger import diff, print_patch # minor dependency
for patch in diff(original_source, repaired_source):
print_patch(patch)
###Output
@@ -[34m87[39;49;00m,[34m37[39;49;00m +[34m87[39;49;00m,[34m37[39;49;00m @@
x < z:
- [34mreturn[39;49;00m y
+ [34mreturn[39;49;00m x
[34melif[39;49;00m
###Markdown
We can present this patch to the programmer, who will then immediately know what to fix in the `middle()` code.

Crossover

So far, we have only applied one kind of genetic operator – mutation. There is a second one, though, also inspired by natural selection. The *crossover* operation mutates two strands of genes: we have two parents (red and blue), each as a sequence of genes. To create "crossed" children, we pick a _crossover point_ and exchange the strands at this very point.

We implement a `CrossoverOperator` class that implements such an operation on two randomly chosen statement lists of two programs. It is used as

```python
crossover = CrossoverOperator()
crossover.crossover(tree_p1, tree_p2)
```

where `tree_p1` and `tree_p2` are two ASTs that are changed in place.

Excursion: Implementing Crossover

Crossing Statement Lists

Applied on programs, a crossover mutation takes two parents and "crosses" a list of statements; a minimal sketch on plain lists follows right below. As an example, if our "parents" `p1()` and `p2()` are defined as follows:
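As promised, a quick plain-list sketch of a one-point crossover before we turn to the two example functions. The values and the crossover point below are chosen purely for illustration:

```python
parent_1 = ['r1', 'r2', 'r3', 'r4']   # "red" genes
parent_2 = ['b1', 'b2', 'b3', 'b4']   # "blue" genes
point = 2                             # crossover point
child_1 = parent_1[:point] + parent_2[point:]
child_2 = parent_2[:point] + parent_1[point:]
print(child_1)   # ['r1', 'r2', 'b3', 'b4']
print(child_2)   # ['b1', 'b2', 'r3', 'r4']
```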
###Code
def p1(): # type: ignore
a = 1
b = 2
c = 3
def p2(): # type: ignore
x = 1
y = 2
z = 3
###Output
_____no_output_____
###Markdown
Then a crossover operation would produce one child with a body

```python
a = 1
y = 2
z = 3
```

and another child with a body

```python
x = 1
b = 2
c = 3
```

We can easily implement this in a `CrossoverOperator` class in a method `cross_bodies()`.
###Code
class CrossoverOperator:
"""A class for performing statement crossover of Python programs"""
def __init__(self, log: bool = False):
"""Constructor. If `log` is set, turn on logging."""
self.log = log
def cross_bodies(self, body_1: List[ast.AST], body_2: List[ast.AST]) -> \
Tuple[List[ast.AST], List[ast.AST]]:
"""Crossover the statement lists `body_1` x `body_2`. Return new lists."""
assert isinstance(body_1, list)
assert isinstance(body_2, list)
crossover_point_1 = len(body_1) // 2
crossover_point_2 = len(body_2) // 2
return (body_1[:crossover_point_1] + body_2[crossover_point_2:],
body_2[:crossover_point_2] + body_1[crossover_point_1:])
###Output
_____no_output_____
###Markdown
Here's the `CrossoverOperatorMutator` applied on `p1` and `p2`:
###Code
tree_p1: ast.Module = ast.parse(inspect.getsource(p1))
tree_p2: ast.Module = ast.parse(inspect.getsource(p2))
body_p1 = tree_p1.body[0].body # type: ignore
body_p2 = tree_p2.body[0].body # type: ignore
body_p1
crosser = CrossoverOperator()
tree_p1.body[0].body, tree_p2.body[0].body = crosser.cross_bodies(body_p1, body_p2) # type: ignore
print_content(ast.unparse(tree_p1), '.py')
print_content(ast.unparse(tree_p2), '.py')
###Output
[34mdef[39;49;00m [32mp2[39;49;00m():
x = [34m1[39;49;00m
b = [34m2[39;49;00m
c = [34m3[39;49;00m
###Markdown
Applying Crossover on ProgramsApplying the crossover operation on arbitrary programs is a bit more complex, though. We first have to _find_ lists of statements that we actually can cross over. The `can_cross()` method returns True if we have a list of statements that we can cross. Python modules and classes are excluded, because changing the ordering of definitions will not have much impact on the program functionality, other than introducing errors due to dependencies.
###Code
class CrossoverOperator(CrossoverOperator):
# In modules and class defs, the ordering of elements does not matter (much)
SKIP_LIST = {ast.Module, ast.ClassDef}
def can_cross(self, tree: ast.AST, body_attr: str = 'body') -> bool:
if any(isinstance(tree, cls) for cls in self.SKIP_LIST):
return False
body = getattr(tree, body_attr, [])
return body and len(body) >= 2
###Output
_____no_output_____
###Markdown
Here comes our method `crossover_attr()` which searches for crossover possibilities. It takes two ASTs `t1` and `t2` and an attribute (typically `'body'`) and retrieves the attribute lists $l_1$ (from `t1.<attr>`) and $l_2$ (from `t2.<attr>`).

If $l_1$ and $l_2$ can be crossed, it crosses them, and is done. Otherwise

* If there is a pair of elements $e_1 \in l_1$ and $e_2 \in l_2$ that has the same name – say, functions of the same name –, it applies itself to $e_1$ and $e_2$.
* Otherwise, it creates random pairs of elements $e_1 \in l_1$ and $e_2 \in l_2$ and applies itself on these very pairs.

`crossover_attr()` changes `t1` and `t2` in place and returns True if a crossover was found; it returns False otherwise.
###Code
class CrossoverOperator(CrossoverOperator):
def crossover_attr(self, t1: ast.AST, t2: ast.AST, body_attr: str) -> bool:
"""
Crossover the bodies `body_attr` of two trees `t1` and `t2`.
Return True if successful.
"""
assert isinstance(t1, ast.AST)
assert isinstance(t2, ast.AST)
assert isinstance(body_attr, str)
if not getattr(t1, body_attr, None) or not getattr(t2, body_attr, None):
return False
if self.crossover_branches(t1, t2):
return True
if self.log > 1:
print(f"Checking {t1}.{body_attr} x {t2}.{body_attr}")
body_1 = getattr(t1, body_attr)
body_2 = getattr(t2, body_attr)
# If both trees have the attribute, we can cross their bodies
if self.can_cross(t1, body_attr) and self.can_cross(t2, body_attr):
if self.log:
print(f"Crossing {t1}.{body_attr} x {t2}.{body_attr}")
new_body_1, new_body_2 = self.cross_bodies(body_1, body_2)
setattr(t1, body_attr, new_body_1)
setattr(t2, body_attr, new_body_2)
return True
# Strategy 1: Find matches in class/function of same name
for child_1 in body_1:
if hasattr(child_1, 'name'):
for child_2 in body_2:
if (hasattr(child_2, 'name') and
child_1.name == child_2.name):
if self.crossover_attr(child_1, child_2, body_attr):
return True
# Strategy 2: Find matches anywhere
for child_1 in random.sample(body_1, len(body_1)):
for child_2 in random.sample(body_2, len(body_2)):
if self.crossover_attr(child_1, child_2, body_attr):
return True
return False
###Output
_____no_output_____
###Markdown
We have a special case for `if` nodes, where we can cross their body and `else` branches. (In Python, `for` and `while` also have `else` branches, but swapping these with loop bodies is likely to create havoc.)
###Code
class CrossoverOperator(CrossoverOperator):
def crossover_branches(self, t1: ast.AST, t2: ast.AST) -> bool:
"""Special case:
`t1` = `if P: S1 else: S2` x `t2` = `if P': S1' else: S2'`
becomes
`t1` = `if P: S2' else: S1'` and `t2` = `if P': S2 else: S1`
Returns True if successful.
"""
assert isinstance(t1, ast.AST)
assert isinstance(t2, ast.AST)
if (hasattr(t1, 'body') and hasattr(t1, 'orelse') and
hasattr(t2, 'body') and hasattr(t2, 'orelse')):
t1 = cast(ast.If, t1) # keep mypy happy
t2 = cast(ast.If, t2)
if self.log:
print(f"Crossing branches {t1} x {t2}")
t1.body, t1.orelse, t2.body, t2.orelse = \
t2.orelse, t2.body, t1.orelse, t1.body
return True
return False
###Output
_____no_output_____
###Markdown
The method `crossover()` is the main entry point. It checks for the special `if` case as described above; if not, it searches for possible crossover points. It raises `CrossoverError` if not successful.
###Code
class CrossoverOperator(CrossoverOperator):
def crossover(self, t1: ast.AST, t2: ast.AST) -> Tuple[ast.AST, ast.AST]:
"""Do a crossover of ASTs `t1` and `t2`.
Raises `CrossoverError` if no crossover is found."""
assert isinstance(t1, ast.AST)
assert isinstance(t2, ast.AST)
for body_attr in ['body', 'orelse', 'finalbody']:
if self.crossover_attr(t1, t2, body_attr):
return t1, t2
raise CrossoverError("No crossover found")
class CrossoverError(ValueError):
pass
###Output
_____no_output_____
###Markdown
End of Excursion Crossover in Action Let us put our `CrossoverOperator` in action. Here is a test case for crossover, involving more deeply nested structures:
###Code
def p1(): # type: ignore
if True:
print(1)
print(2)
print(3)
def p2(): # type: ignore
if True:
print(a)
print(b)
else:
print(c)
print(d)
###Output
_____no_output_____
###Markdown
We invoke the `crossover()` method with two ASTs from `p1` and `p2`:
###Code
crossover = CrossoverOperator()
tree_p1 = ast.parse(inspect.getsource(p1))
tree_p2 = ast.parse(inspect.getsource(p2))
crossover.crossover(tree_p1, tree_p2);
###Output
_____no_output_____
###Markdown
Here is the crossed offspring, mixing statement lists of `p1` and `p2`:
###Code
print_content(ast.unparse(tree_p1), '.py')
print_content(ast.unparse(tree_p2), '.py')
###Output
[34mdef[39;49;00m [32mp2[39;49;00m():
[34mif[39;49;00m [34mTrue[39;49;00m:
[34melse[39;49;00m:
[36mprint[39;49;00m([34m1[39;49;00m)
[36mprint[39;49;00m([34m2[39;49;00m)
[36mprint[39;49;00m([34m3[39;49;00m)
###Markdown
Here is our special case for `if` nodes in action, crossing our `middle()` tree with `p2`.
###Code
middle_t1, middle_t2 = crossover.crossover(middle_tree(),
ast.parse(inspect.getsource(p2)))
###Output
_____no_output_____
###Markdown
We see how the resulting offspring encompasses elements of both sources:
###Code
print_content(ast.unparse(middle_t1), '.py')
print_content(ast.unparse(middle_t2), '.py')
###Output
[34mdef[39;49;00m [32mp2[39;49;00m():
[34mif[39;49;00m [34mTrue[39;49;00m:
[34mif[39;49;00m x > y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > z:
[34mreturn[39;49;00m x
[34melif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
###Markdown
A Repairer Class

So far, we have applied all our techniques on the `middle()` program only. Let us now create a `Repairer` class that applies automatic program repair on arbitrary Python programs. The idea is that you can apply it on some statistical debugger, for which you have gathered passing and failing test cases, and then invoke its `repair()` method to find a "best" fix candidate:

```python
debugger = OchiaiDebugger()
with debugger:
    <passing test>
with debugger:
    <failing test>
...
repairer = Repairer(debugger)
repairer.repair()
```

Excursion: Implementing Repairer

The main argument to the `Repairer` constructor is the `debugger` to get information from. On top of that, it also allows customizing the classes used for mutation, crossover, and reduction. Setting `targets` allows defining the set of functions to repair; setting `sources` allows setting the set of sources to take repairs from. The constructor then sets up the environment for running tests and repairing, as described below.
###Code
from StackInspector import StackInspector # minor dependency
class Repairer(StackInspector):
"""A class for automatic repair of Python programs"""
def __init__(self, debugger: RankingDebugger, *,
targets: Optional[List[Any]] = None,
sources: Optional[List[Any]] = None,
log: Union[bool, int] = False,
mutator_class: Type = StatementMutator,
crossover_class: Type = CrossoverOperator,
reducer_class: Type = DeltaDebugger,
globals: Optional[Dict[str, Any]] = None):
"""Constructor.
`debugger`: a `RankingDebugger` to take tests and coverage from.
`targets`: a list of functions/modules to be repaired.
(default: the covered functions in `debugger`, except tests)
`sources`: a list of functions/modules to take repairs from.
(default: same as `targets`)
`globals`: if given, a `globals()` dict for executing targets
(default: `globals()` of caller)"""
assert isinstance(debugger, RankingDebugger)
self.debugger = debugger
self.log = log
if targets is None:
targets = self.default_functions()
if not targets:
raise ValueError("No targets to repair")
if sources is None:
sources = self.default_functions()
if not sources:
raise ValueError("No sources to take repairs from")
if self.debugger.function() is None:
raise ValueError("Multiple entry points observed")
self.target_tree: ast.AST = self.parse(targets)
self.source_tree: ast.AST = self.parse(sources)
self.log_tree("Target code to be repaired:", self.target_tree)
if ast.dump(self.target_tree) != ast.dump(self.source_tree):
self.log_tree("Source code to take repairs from:",
self.source_tree)
self.fitness_cache: Dict[str, float] = {}
self.mutator: StatementMutator = \
mutator_class(
source=all_statements(self.source_tree),
suspiciousness_func=self.debugger.suspiciousness,
log=(self.log >= 3))
self.crossover: CrossoverOperator = crossover_class(log=(self.log >= 3))
self.reducer: DeltaDebugger = reducer_class(log=(self.log >= 3))
if globals is None:
globals = self.caller_globals() # see below
self.globals = globals
###Output
_____no_output_____
###Markdown
When we access or execute functions, we do so in the caller's environment, not ours. The `caller_globals()` method from `StackInspector` acts as a replacement for `globals()`.

Helper Functions

The constructor uses a number of helper functions to create its environment.
###Code
class Repairer(Repairer):
def getsource(self, item: Union[str, Any]) -> str:
"""Get the source for `item`. Can also be a string."""
if isinstance(item, str):
item = self.globals[item]
return inspect.getsource(item)
class Repairer(Repairer):
def default_functions(self) -> List[Callable]:
"""Return the set of functions to be repaired.
Functions whose names start or end in `test` are excluded."""
def is_test(name: str) -> bool:
return name.startswith('test') or name.endswith('test')
return [func for func in self.debugger.covered_functions()
if not is_test(func.__name__)]
class Repairer(Repairer):
def log_tree(self, description: str, tree: Any) -> None:
"""Print out `tree` as source code prefixed by `description`."""
if self.log:
print(description)
print_content(ast.unparse(tree), '.py')
print()
print()
class Repairer(Repairer):
def parse(self, items: List[Any]) -> ast.AST:
"""Read in a list of items into a single tree"""
tree = ast.parse("")
for item in items:
if isinstance(item, str):
item = self.globals[item]
item_lines, item_first_lineno = inspect.getsourcelines(item)
try:
item_tree = ast.parse("".join(item_lines))
except IndentationError:
# inner function or likewise
warnings.warn(f"Can't parse {item.__name__}")
continue
ast.increment_lineno(item_tree, item_first_lineno - 1)
tree.body += item_tree.body
return tree
###Output
_____no_output_____
###Markdown
Running TestsNow that we have set the environment for `Repairer`, we can implement one step of automatic repair after the other. The method `run_test_set()` runs the given `test_set` (`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`), returning the number of passed tests. If `validate` is set, it checks whether the outcomes are as expected.
###Code
class Repairer(Repairer):
def run_test_set(self, test_set: str, validate: bool = False) -> int:
"""
Run given `test_set`
(`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`).
If `validate` is set, check expectations.
Return number of passed tests.
"""
passed = 0
collectors = self.debugger.collectors[test_set]
function = self.debugger.function()
assert function is not None
# FIXME: function may have been redefined
for c in collectors:
if self.log >= 4:
print(f"Testing {c.id()}...", end="")
try:
function(**c.args())
except Exception as err:
if self.log >= 4:
print(f"failed ({err.__class__.__name__})")
if validate and test_set == self.debugger.PASS:
raise err.__class__(
f"{c.id()} should have passed, but failed")
continue
passed += 1
if self.log >= 4:
print("passed")
if validate and test_set == self.debugger.FAIL:
raise FailureNotReproducedError(
f"{c.id()} should have failed, but passed")
return passed
class FailureNotReproducedError(ValueError):
pass
###Output
_____no_output_____
###Markdown
Here is how we use `run_test_set()`:
###Code
repairer = Repairer(middle_debugger)
assert repairer.run_test_set(middle_debugger.PASS) == \
len(MIDDLE_PASSING_TESTCASES)
assert repairer.run_test_set(middle_debugger.FAIL) == 0
###Output
_____no_output_____
###Markdown
The method `run_tests()` runs passing and failing tests, weighing the passed test cases to obtain the overall fitness.
###Code
class Repairer(Repairer):
def weight(self, test_set: str) -> float:
"""
Return the weight of `test_set`
(`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`).
"""
return {
self.debugger.PASS: WEIGHT_PASSING,
self.debugger.FAIL: WEIGHT_FAILING
}[test_set]
def run_tests(self, validate: bool = False) -> float:
"""Run passing and failing tests, returning weighted fitness."""
fitness = 0.0
for test_set in [self.debugger.PASS, self.debugger.FAIL]:
passed = self.run_test_set(test_set, validate=validate)
ratio = passed / len(self.debugger.collectors[test_set])
fitness += self.weight(test_set) * ratio
return fitness
###Output
_____no_output_____
###Markdown
The method `validate()` ensures the observed tests can be adequately reproduced.
###Code
class Repairer(Repairer):
def validate(self) -> None:
fitness = self.run_tests(validate=True)
assert fitness == self.weight(self.debugger.PASS)
repairer = Repairer(middle_debugger)
repairer.validate()
###Output
_____no_output_____
###Markdown
(Re)defining FunctionsOur `run_tests()` methods above do not yet redefine the function to be repaired. This is done by the `fitness()` function, which compiles and defines the given repair candidate `tree` before testing it. It caches and returns the fitness.
###Code
class Repairer(Repairer):
def fitness(self, tree: ast.AST) -> float:
"""Test `tree`, returning its fitness"""
key = cast(str, ast.dump(tree))
if key in self.fitness_cache:
return self.fitness_cache[key]
# Save defs
original_defs: Dict[str, Any] = {}
for name in self.toplevel_defs(tree):
if name in self.globals:
original_defs[name] = self.globals[name]
else:
warnings.warn(f"Couldn't find definition of {repr(name)}")
assert original_defs, f"Couldn't find any definition"
if self.log >= 3:
print("Repair candidate:")
print_content(ast.unparse(tree), '.py')
print()
# Create new definition
try:
code = compile(tree, '<Repairer>', 'exec')
except ValueError: # Compilation error
code = None
if code is None:
if self.log >= 3:
print(f"Fitness = 0.0 (compilation error)")
fitness = 0.0
return fitness
# Execute new code, defining new functions in `self.globals`
exec(code, self.globals)
# Set new definitions in the namespace (`__globals__`)
# of the function we will be calling.
function = self.debugger.function()
assert function is not None
assert hasattr(function, '__globals__')
for name in original_defs:
function.__globals__[name] = self.globals[name] # type: ignore
fitness = self.run_tests(validate=False)
# Restore definitions
for name in original_defs:
function.__globals__[name] = original_defs[name] # type: ignore
self.globals[name] = original_defs[name]
if self.log >= 3:
print(f"Fitness = {fitness}")
self.fitness_cache[key] = fitness
return fitness
###Output
_____no_output_____
###Markdown
The helper function `toplevel_defs()` helps save and restore the environment before and after redefining the function under repair.
###Code
class Repairer(Repairer):
def toplevel_defs(self, tree: ast.AST) -> List[str]:
"""Return a list of names of defined functions and classes in `tree`"""
visitor = DefinitionVisitor()
visitor.visit(tree)
assert hasattr(visitor, 'definitions')
return visitor.definitions
class DefinitionVisitor(NodeVisitor):
def __init__(self) -> None:
self.definitions: List[str] = []
def add_definition(self, node: Union[ast.ClassDef,
ast.FunctionDef,
ast.AsyncFunctionDef]) -> None:
self.definitions.append(node.name)
def visit_FunctionDef(self, node: ast.FunctionDef) -> None:
self.add_definition(node)
def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None:
self.add_definition(node)
def visit_ClassDef(self, node: ast.ClassDef) -> None:
self.add_definition(node)
###Output
_____no_output_____
###Markdown
Here's an example for `fitness()`:
###Code
repairer = Repairer(middle_debugger, log=1)
good_fitness = repairer.fitness(middle_tree())
good_fitness
# docassert
assert good_fitness >= 0.99, "fitness() failed"
bad_middle_tree = ast.parse("def middle(x, y, z): return x")
bad_fitness = repairer.fitness(bad_middle_tree)
bad_fitness
# docassert
assert bad_fitness < 0.5, "fitness() failed"
###Output
_____no_output_____
###Markdown
RepairingNow for the actual `repair()` method, which creates a `population` and then evolves it until the fitness is 1.0 or the given number of iterations is spent.
###Code
import traceback
class Repairer(Repairer):
def initial_population(self, size: int) -> List[ast.AST]:
"""Return an initial population of size `size`"""
return [self.target_tree] + \
[self.mutator.mutate(copy.deepcopy(self.target_tree))
for i in range(size - 1)]
def repair(self, population_size: int = POPULATION_SIZE, iterations: int = 100) -> \
Tuple[ast.AST, float]:
"""
Repair the function we collected test runs from.
Use a population size of `population_size` and
at most `iterations` iterations.
Returns a pair (`ast`, `fitness`) where
`ast` is the AST of the repaired function, and
`fitness` is its fitness (between 0 and 1.0)
"""
self.validate()
population = self.initial_population(population_size)
last_key = ast.dump(self.target_tree)
for iteration in range(iterations):
population = self.evolve(population)
best_tree = population[0]
fitness = self.fitness(best_tree)
if self.log:
print(f"Evolving population: "
f"iteration{iteration:4}/{iterations} "
f"fitness = {fitness:.5} \r", end="")
if self.log >= 2:
best_key = ast.dump(best_tree)
if best_key != last_key:
print()
print()
self.log_tree(f"New best code (fitness = {fitness}):",
best_tree)
last_key = best_key
if fitness >= 1.0:
break
if self.log:
print()
if self.log and self.log < 2:
self.log_tree(f"Best code (fitness = {fitness}):", best_tree)
best_tree = self.reduce(best_tree)
fitness = self.fitness(best_tree)
self.log_tree(f"Reduced code (fitness = {fitness}):", best_tree)
return best_tree, fitness
###Output
_____no_output_____
###Markdown
EvolvingThe evolution of our population takes place in the `evolve()` method. In contrast to the `evolve_middle()` function, above, we use crossover to create the offspring, which we still mutate afterwards.
###Code
class Repairer(Repairer):
def evolve(self, population: List[ast.AST]) -> List[ast.AST]:
"""Evolve the candidate population by mutating and crossover."""
n = len(population)
# Create offspring as crossover of parents
offspring: List[ast.AST] = []
while len(offspring) < n:
parent_1 = copy.deepcopy(random.choice(population))
parent_2 = copy.deepcopy(random.choice(population))
try:
self.crossover.crossover(parent_1, parent_2)
except CrossoverError:
pass # Just keep parents
offspring += [parent_1, parent_2]
# Mutate offspring
offspring = [self.mutator.mutate(tree) for tree in offspring]
# Add it to population
population += offspring
# Keep the fitter part of the population
population.sort(key=self.fitness_key, reverse=True)
population = population[:n]
return population
###Output
_____no_output_____
###Markdown
A second difference is that we not only sort by fitness, but also by tree size – with equal fitness, a smaller tree thus will be favored. This helps keep fixes and patches small.
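The effect of such a composite key is easy to see on made-up numbers; the candidate names, fitness values, and sizes below are purely illustrative:

```python
# (name, fitness, number of AST nodes)
candidates = [("big_fix", 0.99, 120), ("small_fix", 0.99, 40), ("broken_fix", 0.50, 10)]
ranked = sorted(candidates, key=lambda c: (c[1], -c[2]), reverse=True)
print([name for name, _, _ in ranked])   # ['small_fix', 'big_fix', 'broken_fix']
```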
###Code
class Repairer(Repairer):
def fitness_key(self, tree: ast.AST) -> Tuple[float, int]:
"""Key to be used for sorting the population"""
tree_size = len([node for node in ast.walk(tree)])
return (self.fitness(tree), -tree_size)
###Output
_____no_output_____
###Markdown
SimplifyingThe last step in repairing is simplifying the code. As demonstrated in the chapter on [reducing failure-inducing inputs](DeltaDebugger.ipynb), we can use delta debugging on code to get rid of superfluous statements. To this end, we convert the tree to lines, run delta debugging on them, and then convert it back to a tree.
###Code
class Repairer(Repairer):
def reduce(self, tree: ast.AST) -> ast.AST:
"""Simplify `tree` using delta debugging."""
original_fitness = self.fitness(tree)
source_lines = ast.unparse(tree).split('\n')
with self.reducer:
self.test_reduce(source_lines, original_fitness)
reduced_lines = self.reducer.min_args()['source_lines']
reduced_source = "\n".join(reduced_lines)
return ast.parse(reduced_source)
###Output
_____no_output_____
###Markdown
As discussed above, we simplify the code by having the test function (`test_reduce()`) declare reaching the maximum fitness obtained so far as a "failure". Delta debugging will then simplify the input as long as the "failure" (and hence the maximum fitness obtained) persists.
###Code
class Repairer(Repairer):
def test_reduce(self, source_lines: List[str], original_fitness: float) -> None:
"""Test function for delta debugging."""
try:
source = "\n".join(source_lines)
tree = ast.parse(source)
fitness = self.fitness(tree)
assert fitness < original_fitness
except AssertionError:
raise
except SyntaxError:
raise
except IndentationError:
raise
except Exception:
# traceback.print_exc() # Uncomment to see internal errors
raise
###Output
_____no_output_____
###Markdown
End of Excursion. Repairer in Action: Let us go and apply `Repairer` in practice. We initialize it with `middle_debugger`, which has (still) collected the passing and failing runs for `middle_test()`. We also set `log` for some diagnostics along the way.
###Code
repairer = Repairer(middle_debugger, log=True)
###Output
Target code to be repaired:
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
[34mif[39;49;00m y < z:
[34mif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > z:
[34mreturn[39;49;00m x
[34mreturn[39;49;00m z
###Markdown
We now invoke `repair()` to evolve our population. After a few iterations, we find a best tree with perfect fitness.
###Code
best_tree, fitness = repairer.repair()
print_content(ast.unparse(best_tree), '.py')
fitness
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
Again, we have a perfect solution. Here, we did not even need to simplify the code in the last iteration, as our `fitness_key()` function favors smaller implementations. Removing HTML Markup: Let us apply `Repairer` on our other ongoing example, namely `remove_html_markup()`.
###Code
def remove_html_markup(s): # type: ignore
tag = False
quote = False
out = ""
for c in s:
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif c == '"' or c == "'" and tag:
quote = not quote
elif not tag:
out = out + c
return out
def remove_html_markup_tree() -> ast.AST:
return ast.parse(inspect.getsource(remove_html_markup))
###Output
_____no_output_____
###Markdown
To run `Repairer` on `remove_html_markup()`, we need a test and a test suite. `remove_html_markup_test()` raises an exception if applying `remove_html_markup()` on the given `html` string does not yield the `plain` string.
###Code
def remove_html_markup_test(html: str, plain: str) -> None:
outcome = remove_html_markup(html)
assert outcome == plain, \
f"Got {repr(outcome)}, expected {repr(plain)}"
###Output
_____no_output_____
###Markdown
Now for the test suite. We use a simple fuzzing scheme to create dozens of passing and failing test cases in `REMOVE_HTML_PASSING_TESTCASES` and `REMOVE_HTML_FAILING_TESTCASES`, respectively. Excursion: Creating HTML Test Cases
###Code
def random_string(length: int = 5, start: int = ord(' '), end: int = ord('~')) -> str:
return "".join(chr(random.randrange(start, end + 1)) for i in range(length))
random_string()
def random_id(length: int = 2) -> str:
return random_string(start=ord('a'), end=ord('z'))
random_id()
def random_plain() -> str:
return random_string().replace('<', '').replace('>', '')
def random_string_noquotes() -> str:
return random_string().replace('"', '').replace("'", '')
def random_html(depth: int = 0) -> Tuple[str, str]:
prefix = random_plain()
tag = random_id()
if depth > 0:
html, plain = random_html(depth - 1)
else:
html = plain = random_plain()
attr = random_id()
value = '"' + random_string_noquotes() + '"'
postfix = random_plain()
return f'{prefix}<{tag} {attr}={value}>{html}</{tag}>{postfix}', \
prefix + plain + postfix
random_html()
def remove_html_testcase(expected: bool = True) -> Tuple[str, str]:
while True:
html, plain = random_html()
outcome = (remove_html_markup(html) == plain)
if outcome == expected:
return html, plain
REMOVE_HTML_TESTS = 100
REMOVE_HTML_PASSING_TESTCASES = \
[remove_html_testcase(True) for i in range(REMOVE_HTML_TESTS)]
REMOVE_HTML_FAILING_TESTCASES = \
[remove_html_testcase(False) for i in range(REMOVE_HTML_TESTS)]
###Output
_____no_output_____
###Markdown
End of Excursion Here is a passing test case:
###Code
REMOVE_HTML_PASSING_TESTCASES[0]
html, plain = REMOVE_HTML_PASSING_TESTCASES[0]
remove_html_markup_test(html, plain)
###Output
_____no_output_____
###Markdown
Here is a failing test case (containing a double quote in the plain text)
###Code
REMOVE_HTML_FAILING_TESTCASES[0]
with ExpectError():
html, plain = REMOVE_HTML_FAILING_TESTCASES[0]
remove_html_markup_test(html, plain)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_52227/2578453007.py", line 3, in <module>
remove_html_markup_test(html, plain)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_52227/700130947.py", line 3, in remove_html_markup_test
assert outcome == plain, \
AssertionError: Got '3AGe7!%H</qcguk>6azh_', expected '3AGe7"!%H6azh_' (expected)
###Markdown
We run our tests, collecting the outcomes in `html_debugger`.
###Code
html_debugger = OchiaiDebugger()
for html, plain in (REMOVE_HTML_PASSING_TESTCASES +
REMOVE_HTML_FAILING_TESTCASES):
with html_debugger:
remove_html_markup_test(html, plain)
###Output
_____no_output_____
###Markdown
The suspiciousness distribution will not be of much help here – pretty much all lines in `remove_html_markup()` have the same suspiciousness.
###Code
html_debugger
###Output
_____no_output_____
###Markdown
Let us create our repairer and run it.
###Code
html_repairer = Repairer(html_debugger, log=True)
best_tree, fitness = html_repairer.repair(iterations=20)
# docassert
assert fitness < 1.0
###Output
_____no_output_____
###Markdown
We see that the "best" code is still our original code, with no changes. And we can set `iterations` to 50, 100, 200... – our `Repairer` won't be able to repair it.
###Code
quiz("Why couldn't `Repairer()` repair `remove_html_markup()`?",
[
"The population is too small!",
"The suspiciousness is too evenly distributed!",
"We need more test cases!",
"We need more iterations!",
"There is no statement in the source with a correct condition!",
"The population is too big!",
], '5242880 >> 20')
###Output
_____no_output_____
###Markdown
You can explore all of the hypotheses above by changing the appropriate parameters, but you won't be able to change the outcome. The problem is that, unlike `middle()`, there is no statement (or combination thereof) in `remove_html_markup()` that could be used to make the failure go away. For this, we need to mutate another aspect of the code, which we will explore in the next section. Mutating Conditions: The `Repairer` class is very configurable. The individual steps in automated repair can all be replaced by providing your own classes in the keyword arguments of its `__init__()` constructor:
* To change fault localization, pass a different `debugger` that is a subclass of `RankingDebugger`.
* To change the mutation operator, set `mutator_class` to a subclass of `StatementMutator`.
* To change the crossover operator, set `crossover_class` to a subclass of `CrossoverOperator`.
* To change the reduction algorithm, set `reducer_class` to a subclass of `Reducer`.
In this section, we will explore how to extend the mutation operator such that it can mutate _conditions_ for control constructs such as `if`, `while`, or `for`. To this end, we introduce a new class `ConditionMutator` subclassing `StatementMutator`. Collecting Conditions: Let us start with a few simple supporting functions. The function `all_conditions()` retrieves all control conditions from an AST.
###Code
def all_conditions(trees: Union[ast.AST, List[ast.AST]],
tp: Optional[Type] = None) -> List[ast.expr]:
"""
Return all conditions from the AST (or AST list) `trees`.
If `tp` is given, return only elements of that type.
"""
if not isinstance(trees, list):
assert isinstance(trees, ast.AST)
trees = [trees]
visitor = ConditionVisitor()
for tree in trees:
visitor.visit(tree)
conditions = visitor.conditions
if tp is not None:
conditions = [c for c in conditions if isinstance(c, tp)]
return conditions
###Output
_____no_output_____
###Markdown
`all_conditions()` uses a `ConditionVisitor` class to walk the tree and collect the conditions:
###Code
class ConditionVisitor(NodeVisitor):
def __init__(self) -> None:
self.conditions: List[ast.expr] = []
self.conditions_seen: Set[str] = set()
super().__init__()
def add_conditions(self, node: ast.AST, attr: str) -> None:
elems = getattr(node, attr, [])
if not isinstance(elems, list):
elems = [elems]
elems = cast(List[ast.expr], elems)
for elem in elems:
elem_str = ast.unparse(elem)
if elem_str not in self.conditions_seen:
self.conditions.append(elem)
self.conditions_seen.add(elem_str)
def visit_BoolOp(self, node: ast.BoolOp) -> ast.AST:
self.add_conditions(node, 'values')
return super().generic_visit(node)
def visit_UnaryOp(self, node: ast.UnaryOp) -> ast.AST:
if isinstance(node.op, ast.Not):
self.add_conditions(node, 'operand')
return super().generic_visit(node)
def generic_visit(self, node: ast.AST) -> ast.AST:
if hasattr(node, 'test'):
self.add_conditions(node, 'test')
return super().generic_visit(node)
###Output
_____no_output_____
###Markdown
Here are all the conditions in `remove_html_markup()`. This is some material to construct new conditions from.
###Code
[ast.unparse(cond).strip()
for cond in all_conditions(remove_html_markup_tree())]
###Output
_____no_output_____
###Markdown
Mutating Conditions: Here comes our `ConditionMutator` class. We subclass from `StatementMutator` and set an attribute `self.conditions` containing all the conditions in the source. The method `choose_condition()` randomly picks a condition.
###Code
class ConditionMutator(StatementMutator):
"""Mutate conditions in an AST"""
def __init__(self, *args: Any, **kwargs: Any) -> None:
"""Constructor. Arguments are as with `StatementMutator` constructor."""
super().__init__(*args, **kwargs)
self.conditions = all_conditions(self.source)
if self.log:
print("Found conditions",
[ast.unparse(cond).strip()
for cond in self.conditions])
def choose_condition(self) -> ast.expr:
"""Return a random condition from source."""
return copy.deepcopy(random.choice(self.conditions))
###Output
_____no_output_____
###Markdown
The actual mutation takes place in the `swap()` method. If the node to be replaced has a `test` attribute (i.e. a controlling predicate), then we pick a random condition `cond` from the source and randomly choose from:
* **set**: We change `test` to `cond`.
* **not**: We invert `test`.
* **and**: We replace `test` by `cond and test`.
* **or**: We replace `test` by `cond or test`.
Over time, this might lead to operators propagating across the population.
###Code
class ConditionMutator(ConditionMutator):
def choose_bool_op(self) -> str:
return random.choice(['set', 'not', 'and', 'or'])
def swap(self, node: ast.AST) -> ast.AST:
"""Replace `node` condition by a condition from `source`"""
if not hasattr(node, 'test'):
return super().swap(node)
node = cast(ast.If, node)
cond = self.choose_condition()
new_test = None
choice = self.choose_bool_op()
if choice == 'set':
new_test = cond
elif choice == 'not':
new_test = ast.UnaryOp(op=ast.Not(), operand=node.test)
elif choice == 'and':
new_test = ast.BoolOp(op=ast.And(), values=[cond, node.test])
elif choice == 'or':
new_test = ast.BoolOp(op=ast.Or(), values=[cond, node.test])
else:
raise ValueError("Unknown boolean operand")
if new_test:
# ast.copy_location(new_test, node)
node.test = new_test
return node
###Output
_____no_output_____
###Markdown
We can use the mutator just like `StatementMutator`, except that some of the mutations will also include new conditions:
###Code
mutator = ConditionMutator(source=all_statements(remove_html_markup_tree()),
log=True)
for i in range(10):
new_tree = mutator.mutate(remove_html_markup_tree())
###Output
2:insert: 'tag = False' becomes 'for c in s: tag = Fa...'
10:insert: 'tag = False' becomes 'tag = False'; 'out = out + c'
8:insert: 'tag = True' becomes 'if c == \'"\' or (c ==...'
12:insert: 'quote = not quote' becomes 'quote = not quote'; 'tag = True'
10:delete: 'tag = False' becomes 'pass'
12:insert: 'quote = not quote' becomes "if c == '>' and (not..."
3:insert: 'quote = False' becomes 'quote = False'; "out = ''"
14:swap: 'out = out + c' becomes 'quote = False'
12:insert: 'quote = not quote' becomes 'for c in s: quote = ...'
3:delete: 'quote = False' becomes 'pass'
###Markdown
Let us put our new mutator to action, again in a `Repairer()`. To activate it, all we need to do is to pass it as `mutator_class` keyword argument.
###Code
condition_repairer = Repairer(html_debugger,
mutator_class=ConditionMutator,
log=2)
###Output
Target code to be repaired:
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mFalse[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mreturn[39;49;00m out
###Markdown
We might need more iterations for this one. Let us see...
###Code
best_tree, fitness = condition_repairer.repair(iterations=200)
repaired_source = ast.unparse(best_tree)
print_content(repaired_source, '.py')
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
Success again! We have automatically repaired `remove_html_markup()` – the resulting code passes all tests, including those that were previously failing. Again, we can present the fix as a patch:
###Code
original_source = ast.unparse(remove_html_markup_tree())
for patch in diff(original_source, repaired_source):
print_patch(patch)
###Output
@@ -[34m210[39;49;00m,[34m53[39;49;00m +[34m210[39;49;00m,[34m39[39;49;00m @@
lse
- [34melif[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag):
+ [34melif[39;49;00m tag [35mand[39;49;00m c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m:
###Markdown
However, looking at the patch, one may come up with doubts.
###Code
quiz("Is this actually the best solution?",
[
"Yes, sure, of course. Why?",
"Err - what happened to single quotes?"
], 1 << 1)
###Output
_____no_output_____
###Markdown
Indeed – our solution does not seem to handle single quotes anymore. Why is that so?
###Code
quiz("Why aren't single quotes handled in the solution?",
[
"Because they're not important. "
"I mean, y'know, who uses 'em anyway?",
"Because they are not part of our tests? "
"Let me look up how they are constructed..."
], 1 << 1)
###Output
_____no_output_____
###Markdown
Correct! Our test cases do not include single quotes – at least not in the interior of HTML tags – and thus, automatic repair did not care to preserve their handling. How can we fix this? An easy way is to include an appropriate test case in our set – a test case that passes with the original `remove_html_markup()`, yet fails with the "repaired" `remove_html_markup()` as shown above.
###Code
with html_debugger:
remove_html_markup_test("<foo quote='>abc'>me</foo>", "me")
###Output
_____no_output_____
###Markdown
Let us repeat the repair with the extended test set:
###Code
best_tree, fitness = condition_repairer.repair(iterations=200)
###Output
Evolving population: iteration 2/200 fitness = 1.0
New best code (fitness = 1.0):
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mFalse[39;49;00m
[34melif[39;49;00m tag [35mand[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag)):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mif[39;49;00m [35mnot[39;49;00m tag:
tag = [34mFalse[39;49;00m
[34mreturn[39;49;00m out
Reduced code (fitness = 1.0):
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mFalse[39;49;00m
[34melif[39;49;00m tag [35mand[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag)):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mif[39;49;00m [35mnot[39;49;00m tag:
[34mreturn[39;49;00m out
###Markdown
Here is the final tree:
###Code
print_content(ast.unparse(best_tree), '.py')
###Output
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s):
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m'[39;49;00m[33m'[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m ([35mnot[39;49;00m quote):
tag = [34mFalse[39;49;00m
[34melif[39;49;00m tag [35mand[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m (c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m [35mand[39;49;00m tag)):
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mif[39;49;00m [35mnot[39;49;00m tag:
[34mreturn[39;49;00m out
###Markdown
And here is its fitness:
###Code
fitness
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
The revised candidate now passes _all_ tests (including the tricky quote test we added last). Its condition now properly checks for `tag` _and_ both quotes. (The `tag` inside the parentheses is still redundant, but so be it.) From this example, we can learn a few lessons about the possibilities and risks of automated repair:
* First, automatic repair is highly dependent on the quality of the checking tests. The risk is that the repair may overspecialize towards the test.
* Second, when based on "plastic surgery", automated repair is highly dependent on the sources that program fragments are chosen from. If there is a hint of a solution somewhere in the code, there is a chance that automated repair will pick it up.
* Third, automatic repair is a deeply heuristic approach. Its behavior will vary widely with any change to the parameters (and the underlying random number generators).
* Fourth, automatic repair can take a long time. The examples we have in this chapter take less than a minute to compute, and neither Python nor our implementation is exactly fast. But as the search space grows, automated repair will take much longer.
On the other hand, even an incomplete automated repair candidate can be much better than nothing at all – it may provide all the essential ingredients (such as the location or the involved variables) for a successful fix. When users of automated repair techniques are aware of its limitations and its assumptions, there is lots of potential in automated repair. Enjoy! Limitations: The `Repairer` class is tested on our example programs, but not much more. Things that do not work include:
* Functions with inner functions are not repaired.
Synopsis: This chapter provides tools and techniques for automated repair of program code. The `Repairer` class takes a `RankingDebugger` debugger as input (such as `OchiaiDebugger` from the [chapter on statistical debugging](StatisticalDebugger.ipynb)). A typical setup looks like this:
```python
from debuggingbook.StatisticalDebugger import OchiaiDebugger

debugger = OchiaiDebugger()
for inputs in TESTCASES:
    with debugger:
        test_foo(inputs)
...
repairer = Repairer(debugger)
```
Here, `test_foo()` is a function that raises an exception if the tested function `foo()` fails. If `foo()` passes, `test_foo()` should not raise an exception. The `repair()` method of a `Repairer` searches for a repair of the code covered in the debugger (except for methods whose name starts or ends in `test`, such that `foo()`, not `test_foo()` is repaired). `repair()` returns the best fix candidate as a pair `(tree, fitness)` where `tree` is a [Python abstract syntax tree](http://docs.python.org/3/library/ast) (AST) of the fix candidate, and `fitness` is the fitness of the candidate (a value between 0 and 1). A `fitness` of 1.0 means that the candidate passed all tests. A typical usage looks like this:
```python
tree, fitness = repairer.repair()
print(ast.unparse(tree), fitness)
```
Here is a complete example for the `middle()` program. This is the original source code of `middle()`:
###Code
# ignore
print_content(middle_source, '.py')
###Output
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
[34mif[39;49;00m y < z:
[34mif[39;49;00m x < y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x < z:
[34mreturn[39;49;00m y
[34melse[39;49;00m:
[34mif[39;49;00m x > y:
[34mreturn[39;49;00m y
[34melif[39;49;00m x > z:
[34mreturn[39;49;00m x
[34mreturn[39;49;00m z
###Markdown
We set up a function `middle_test()` that tests it. The `middle_debugger` collects testcases and outcomes:
###Code
middle_debugger = OchiaiDebugger()
for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
with middle_debugger:
middle_test(x, y, z)
###Output
_____no_output_____
###Markdown
The repairer is instantiated with the debugger used (`middle_debugger`):
###Code
middle_repairer = Repairer(middle_debugger)
###Output
_____no_output_____
###Markdown
The `repair()` method of the repairer attempts to repair the function invoked by the test (`middle()`).
###Code
tree, fitness = middle_repairer.repair()
###Output
_____no_output_____
###Markdown
The returned AST `tree` can be output via `ast.unparse()`:
###Code
print(ast.unparse(tree))
###Output
def middle(x, y, z):
if y < z:
if x < y:
return y
elif x < z:
return x
elif x > y:
return y
elif x > z:
return x
return z
###Markdown
The `fitness` value shows how well the repaired program fits the tests. A fitness value of 1.0 shows that the repaired program satisfies all tests.
###Code
fitness
# docassert
assert fitness >= 1.0
###Output
_____no_output_____
###Markdown
Hence, the above program indeed is a perfect repair in the sense that all previously failing tests now pass – our repair was successful. Here are the classes defined in this chapter. A `Repairer` repairs a program, using a `StatementMutator` and a `CrossoverOperator` to evolve a population of candidates.
###Code
# ignore
from ClassDiagram import display_class_hierarchy
# ignore
display_class_hierarchy([Repairer, ConditionMutator, CrossoverOperator],
abstract_classes=[
NodeVisitor,
NodeTransformer
],
public_methods=[
Repairer.__init__,
Repairer.repair,
StatementMutator.__init__,
StatementMutator.mutate,
ConditionMutator.__init__,
CrossoverOperator.__init__,
CrossoverOperator.crossover,
],
project='debuggingbook')
###Output
_____no_output_____
###Markdown
Lessons Learned:
* Automated repair based on genetic optimization uses five ingredients:
  1. A _test suite_ to determine passing and failing tests
  2. _Defect localization_ (typically obtained from [statistical debugging](StatisticalDebugger.ipynb) with the test suite) to determine potential locations to be fixed
  3. _Random code mutations_ and _crossover operations_ to create and evolve a population of inputs
  4. A _fitness function_ and a _selection strategy_ to determine the part of the population that should be evolved further
  5. A _reducer_ such as [delta debugging](DeltaDebugger.ipynb) to simplify the final candidate with the highest fitness.
* The result of automated repair is a _fix candidate_ with the highest fitness for the given tests.
* A _fix candidate_ is not guaranteed to be correct or optimal, but gives important hints on how to fix the program.
* All of the above ingredients offer plenty of settings and alternatives to experiment with.
Background: The seminal work in automated repair is [GenProg](https://squareslab.github.io/genprog-code/) \cite{LeGoues2012}, which heavily inspired our `Repairer` implementation. Major differences between GenProg and `Repairer` include:
* GenProg includes its own defect localization (which is also dynamically updated), whereas `Repairer` builds on earlier statistical debugging.
* GenProg can apply multiple mutations on programs (or none at all), whereas `Repairer` applies exactly one mutation.
* The `StatementMutator` used by `Repairer` includes various special cases for program structures (`if`, `for`, `while`...), whereas GenProg operates on statements only.
* GenProg has been tested on large production programs.
While GenProg is _the_ seminal work in the area (and arguably the most important software engineering research contribution of the 2010s), there have been a number of important extensions of automated repair. These include:
* *AutoFix* \cite{Pei2014} leverages _program contracts_ (pre- and postconditions) to generate tests and assertions automatically. Not only do such [assertions](Assertions.ipynb) help in fault localization, they also allow for much better validation of fix candidates.
* *SemFix* \cite{Nguyen2013} and its successor *[Angelix](http://angelix.io)* \cite{Mechtaev2016} introduce automated program repair based on _symbolic analysis_ rather than genetic optimization. This allows leveraging program semantics, which GenProg does not consider.
To learn more about automated program repair, see [program-repair.org](http://program-repair.org), the community page dedicated to research in program repair. Exercises. Exercise 1: Automated Repair Parameters. Automated Repair is influenced by a large number of design choices – the size of the population, the number of iterations, the genetic optimization strategy, and more. How do changes to these design choices affect its effectiveness?
* Consider the constants defined in this chapter (such as `POPULATION_SIZE` or `WEIGHT_PASSING` vs. `WEIGHT_FAILING`). How do changes affect the effectiveness of automated repair?
* As an effectiveness metric, consider the number of iterations it takes to produce a fix candidate.
* Since genetic optimization is a random algorithm, you need to determine effectiveness averages over a large number of runs (say, 100).
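One possible scaffold for such an experiment is sketched below. It is illustrative only: it reuses the `Repairer` class and the `middle_debugger` from this chapter, and treats "reaches full fitness within a fixed iteration budget" as the effectiveness measure.
```python
# Illustrative experiment for Exercise 1: vary the population size and
# measure how often repair reaches full fitness within a fixed budget.
def success_rate(debugger, population_size: int, runs: int = 100) -> float:
    successes = 0
    for _ in range(runs):
        repairer = Repairer(debugger)
        _, fitness = repairer.repair(population_size=population_size,
                                     iterations=100)
        successes += (fitness >= 1.0)
    return successes / runs

# for size in [10, 20, 40, 80]:
#     print(size, success_rate(middle_debugger, size, runs=10))
```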
Exercise 2: Elitism. [_Elitism_](https://en.wikipedia.org/wiki/Genetic_algorithm#Elitism) (also known as _elitist selection_) is a variant of genetic selection in which a small fraction of the fittest candidates of the last population are included unchanged in the offspring.
* Implement elitist selection by subclassing the `evolve()` method (a sketch follows below). Experiment with various fractions (5%, 10%, 25%) of "elites" and see how this improves results.
Exercise 3: Evolving Values. Following the steps of `ConditionMutator`, implement a `ValueMutator` class that replaces one constant value by another one found in the source (say, `0` by `1` or `True` by `False`). For validation, consider the following failure in the `square_root()` function from the [chapter on assertions](Assertions.ipynb):
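Before that, here is the sketch for Exercise 2 promised above. It is illustrative only and assumes the `evolve()` and `fitness_key()` methods defined earlier in this chapter; the elite fraction of 10% is an arbitrary choice.
```python
class ElitistRepairer(Repairer):  # illustrative sketch for Exercise 2
    def __init__(self, *args: Any, elite_fraction: float = 0.1, **kwargs: Any) -> None:
        super().__init__(*args, **kwargs)
        self.elite_fraction = elite_fraction

    def evolve(self, population: List[ast.AST]) -> List[ast.AST]:
        """Evolve as before, but keep the fittest candidates unchanged."""
        n = len(population)
        population.sort(key=self.fitness_key, reverse=True)
        n_elites = max(1, int(n * self.elite_fraction))
        elites = [copy.deepcopy(tree) for tree in population[:n_elites]]

        offspring = super().evolve(population)  # regular crossover + mutation step
        combined = elites + offspring           # re-insert the unchanged elites
        combined.sort(key=self.fitness_key, reverse=True)
        return combined[:n]                     # keep the population size constant
```
Now to the failing `square_root(0)` call for Exercise 3: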
###Code
from Assertions import square_root # minor dependency
with ExpectError():
square_root_of_zero = square_root(0)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_52227/1107282428.py", line 2, in <module>
square_root_of_zero = square_root(0)
File "/Users/zeller/Projects/debuggingbook/notebooks/Assertions.ipynb", line 61, in square_root
guess = (approx + x / approx) / 2
ZeroDivisionError: float division by zero (expected)
###Markdown
Can your `ValueMutator` automatically fix this failure? **Solution.** Your solution will be effective if it also includes named constants such as `None`.
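A minimal sketch of such a `ValueMutator` is shown below. It is illustrative only: it mirrors `ConditionMutator` above, assumes the `StatementMutator` interface (`self.source` and the `swap()` hook), and replaces one constant (including named constants such as `None`) by another constant found in the source. It could then be passed to `Repairer` via the `mutator_class` keyword argument, just like `ConditionMutator`.
```python
class ValueMutator(StatementMutator):  # illustrative sketch for Exercise 3
    """Mutate constant values in an AST"""

    def __init__(self, *args: Any, **kwargs: Any) -> None:
        super().__init__(*args, **kwargs)
        # Collect all constants (numbers, strings, True/False/None) from the source
        self.values = [node.value
                       for tree in self.source
                       for node in ast.walk(tree)
                       if isinstance(node, ast.Constant)]

    def swap(self, node: ast.AST) -> ast.AST:
        """Replace one constant in `node` by a constant from the source"""
        constants = [n for n in ast.walk(node) if isinstance(n, ast.Constant)]
        if not self.values or not constants:
            return super().swap(node)

        target = random.choice(constants)
        target.value = random.choice(self.values)
        return node
```
With such a mutator, the repaired `square_root()` should end up equivalent to the following reference fix: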
###Code
import math
def square_root_fixed(x): # type: ignore
assert x >= 0 # precondition
approx = 0 # <-- FIX: Change `None` to 0
guess = x / 2
while approx != guess:
approx = guess
guess = (approx + x / approx) / 2
assert math.isclose(approx * approx, x)
return approx
square_root_fixed(0)
###Output
_____no_output_____ |
ES.General_All_GridSearch.ipynb | ###Markdown
* Hyperparameter tuning of all classifiers for emotional state detection
* 6-fold cross-validation with grid search
* Multiclass classification
###Code
import pandas as pd
import datetime
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from pprint import pprint
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.feature_selection import SelectFromModel,RFECV
from sklearn.model_selection import cross_validate
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score, PredefinedSplit
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn import metrics
from imblearn.over_sampling import SMOTE
from imblearn.over_sampling import SMOTENC
from imblearn.over_sampling import ADASYN
from imblearn.over_sampling import SVMSMOTE
from imblearn.combine import SMOTEENN
from imblearn.combine import SMOTETomek
pd.options.mode.chained_assignment = None
import re
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
#warnings.filterwarnings('always')
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.metrics import classification_report
from sklearn.metrics import cohen_kappa_score
from imblearn.metrics import specificity_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import balanced_accuracy_score
from sklearn.metrics import make_scorer, f1_score, roc_auc_score, precision_score, recall_score, confusion_matrix
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from xgboost import XGBClassifier
from catboost import CatBoostClassifier, Pool, cv
from sklearn.neural_network import MLPClassifier
#from pandas_ml import ConfusionMatrix
#import collections
def read_input(p): #
#Read input file of each person
filename='data/NOv_w5_emotionLabel_SelFeat_p'+str(p)+'.csv'
raw_df= pd.read_csv(filename)
print("The shape of the dataframe is ",raw_df.shape)
return raw_df
# replace NANs with -999
def prep_data(data):
return data.fillna(-999)
#drop columns
def drop_cols(data, col_list):
return data.drop(col_list, axis=1)
# normalize data with minmax
def scale_data(trn_x, tst_x):
sc= StandardScaler()
scaled_trn_x = sc.fit_transform(trn_x)
scaled_tst_x = sc.fit_transform(tst_x)
return scaled_trn_x, scaled_tst_x
# oversampling with SMOTE with 'minority' and 'not majority'
def over_sample_SMOTE(X_train, y_train):
sm=SMOTE(sampling_strategy='not majority', random_state=10) # 'minority'
X_train_ovr, y_train_ovr=sm.fit_sample(X_train, y_train)
#print(X_train_ovr.shape, y_train_ovr.shape)
return X_train_ovr, y_train_ovr
# oversampling with SMOTENC with 'minority' and 'not majority'
def over_sample_SMOTENC(X_train, y_train):
sm = SMOTENC(sampling_strategy='not majority',random_state=10)
#sm = SMOTENC(sampling_strategy='minority',random_state=10)
X_train_ovr, y_train_ovr=sm.fit_sample(X_train, y_train)
#print(X_train_ovr.shape, y_train_ovr.shape)
return X_train_ovr, y_train_ovr
# oversampling with SVMSMOTE
def over_sample_SVMSMOTE(X_train, y_train):
sm=SVMSMOTE(random_state=10)
X_train_ovr, y_train_ovr=sm.fit_sample(X_train, y_train)
#print(X_train_ovr.shape, y_train_ovr.shape)
return X_train_ovr, y_train_ovr
def merge_dataframes(p_list):
df = pd.DataFrame()
for p in p_list:
new_df = read_input(p)
df=df.append(new_df,ignore_index = True)
#drop all variables that contain all NANs
df.dropna(axis=1,how='all', inplace=True)
#reset the index
df.reset_index(drop=True, inplace=True)
#drop columns with all zeros in pandas dataframe
df=df.T[(df!=0).any()].T
#keep columns with missing values < 30%
df = df.loc[:, df.isnull().mean() < .3]
print("The shape of the merged dataframe is ",df.shape)
return df
#drop all columns that contain location information (if any)
def drop_location(df):
print(df.shape)
df = df[df.columns.drop(list(df.filter(regex='location')))]
df = df[df.columns.drop(list(df.filter(regex='latitude')))]
df = df[df.columns.drop(list(df.filter(regex='lonitude')))]
print(df.shape)
return df
def select_k_features(X_train_scaled,X_test_scaled,y_train,k):
selection = SelectKBest(mutual_info_classif, k)
X_train = selection.fit_transform(X_train_scaled,y_train)
X_test = selection.transform(X_test_scaled)
return X_train, X_test
def print_results(accu, bl_accu, prec, rec_, spec_, roc_, f1_):
print('.....................')
print("Average Accuracy: %.2f%% (%.2f)" % (np.mean(accu), np.std(accu)))
print("Average Balanced_accuracy: %.2f%% (%.2f)" % (np.mean(bl_accu),np.std(bl_accu)))
print("Average Precision: %.2f%% (%.2f)" % (np.mean(prec),np.std(prec)))
print("Average Recall: %.2f%% (%.2f)" % (np.mean(rec_),np.std(rec_)))
print("Average Specificity: %.2f%% (%.2f)" % (np.mean(spec_),np.std(spec_)))
print("Average ROC AUC: %.2f%% (%.2f)" % (np.mean(roc_),np.std(roc_)))
print("Average F1 score: %.2f%% (%.2f)" % (np.mean(f1_),np.std(f1_)))
print('..................................................')
print('\n')
pipe = Pipeline([('scaler', StandardScaler()), # MinMaxScaler()
('selector', SelectKBest(mutual_info_classif, k=90)), #
('classifier', LogisticRegression())])
search_space = [{'selector__k': [ 50, 70, 90]},
{'classifier': [LogisticRegression(solver='lbfgs')],
'classifier__C': [0.01, 0.1, 1.0],
'classifier__penalty': ['l1', 'l2', None],
'classifier__solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'],
'classifier__max_iter':[100, 150, 200],
'classifier__class_weight':[None, 'balanced']},
{'classifier': [RandomForestClassifier()],
'classifier__max_depth': [5, 10, 30, None],
'classifier__criterion':['gini','entropy'],
'classifier__bootstrap': [True],
'classifier__max_features':['log2', None],
'classifier__n_estimators': [50, 100, 200, 300, 400]},
{'classifier': [MLPClassifier(random_state=1, early_stopping=True)],
'classifier__hidden_layer_sizes' : [(50, 50, 50), (50, 100, 50), (20, 20, 20), (30, ), (50,),(100,)],
'classifier__activation' : ['tanh', 'relu', 'logistic'],
'classifier__max_iter':[50, 100, 150, 200, 300],
'classifier__solver': ['sgd', 'adam', 'lbfgs'],
'classifier__alpha': [0.0001, 0.001, 0.05]},
{'classifier': [CatBoostClassifier(random_seed=1)],
'classifier__learning_rate': [0.05, 0.1, 0.15, 0.2]},
{'classifier': [XGBClassifier(random_state=1)],
'classifier__learning_rate': [0.05, 0.1, 0.15, 0.2],
'classifier__colsample_bytree':[.5, .75, 1],
'classifier__max_depth': np.arange(3, 6, 10),
'classifier__n_estimators': [50, 100, 200, 300, 400]}]
scorers = {
'precision_score': make_scorer(precision_score, average='macro'),
'recall_score': make_scorer(recall_score, average='macro'),
    'accuracy_score': make_scorer(accuracy_score)  # accuracy_score takes no 'average' argument
}
scorer = make_scorer(f1_score, average = 'micro')
LR_pipe = Pipeline([('scaler', StandardScaler()), # MinMaxScaler()
('selector', SelectKBest(mutual_info_classif, k=90)), #
('classifier', LogisticRegression())])
LR_search_space = [{'selector__k': [ 50, 70, 90, 110]},
{'classifier': [LogisticRegression(solver='lbfgs')],
'classifier__C': [0.01, 0.1, 1.0],
'classifier__penalty': ['l1', 'l2', None],
'classifier__solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'],
'classifier__max_iter':[100, 150, 200],
'classifier__class_weight':[None, 'balanced']}]
################################################################################
RF_pipe = Pipeline([('scaler', StandardScaler()), # MinMaxScaler()
('selector', SelectKBest(mutual_info_classif, k=90)), #
('classifier', RandomForestClassifier())])
RF_search_space = [{'selector__k': [ 50, 70, 90, 110]},
{'classifier': [RandomForestClassifier()],
'classifier__max_depth': [5, 10, 30, None],
'classifier__criterion':['gini','entropy'],
'classifier__bootstrap': [True],
'classifier__max_features':['log2', None],
'classifier__n_estimators': [50, 100, 200, 300, 400]}]
################################################################################
MLP_pipe = Pipeline([('scaler', StandardScaler()), # MinMaxScaler()
('selector', SelectKBest(mutual_info_classif, k=90)), #
('classifier', MLPClassifier(random_state=1, early_stopping=True))])
MLP_search_space = [{'selector__k': [ 50, 70, 90, 110]},
{'classifier': [MLPClassifier(random_state=1, early_stopping=True)],
'classifier__hidden_layer_sizes' : [(50, 50, 50), (50, 100, 50), (20, 20, 20), (30, ), (50,),(100,)],
'classifier__activation' : ['tanh', 'relu', 'logistic'],
'classifier__max_iter':[50, 100, 150, 200, 300],
'classifier__solver': ['sgd', 'adam', 'lbfgs'],
'classifier__alpha': [0.0001, 0.001, 0.05]}]
################################################################################
CB_pipe = Pipeline([('scaler', StandardScaler()), # MinMaxScaler()
('selector', SelectKBest(mutual_info_classif, k=90)), #
('classifier', CatBoostClassifier(random_seed=1))])
CB_search_space = [{'selector__k': [ 50, 70, 90, 110]},
{'classifier': [CatBoostClassifier(random_seed=1, verbose=False)],
'classifier__learning_rate': [0.05, 0.1, 0.15, 0.2]}]
#'iterations': Integer(10, 1000),
# 'depth': Integer(1, 8),
# 'learning_rate': Real(0.01, 1.0, 'log-uniform'),
# 'random_strength': Real(1e-9, 10, 'log-uniform'),
# 'bagging_temperature': Real(0.0, 1.0),
# 'border_count': Integer(1, 255),
# 'l2_leaf_reg': Integer(2, 30),
# 'scale_pos_weight':Real(0.01, 1.0, 'uniform')
################################################################################
XGB_pipe = Pipeline([('scaler', StandardScaler()), # MinMaxScaler()
('selector', SelectKBest(mutual_info_classif, k=90)), #
('classifier', XGBClassifier(random_state=1))])
XGB_search_space = [{'selector__k': [ 50, 70, 90, 110]},
{'classifier': [XGBClassifier(random_state=1)],
'classifier__learning_rate': [0.05, 0.1, 0.15, 0.2],
'classifier__colsample_bytree':[.5, .75, 1],
'classifier__max_depth': np.arange(3, 6, 10),
'classifier__n_estimators': [50, 100, 200, 300, 400]}]
p_list=[8,10,12,13,15,20,21,25, 27, 33,35,40,46,48,49,52,54,55]
# make a predefined CV split (test_fold): 6 folds with 3 participants each
nfolds = 6
test_fold = []
for i in range(nfolds):
p_test = p_list[i*3:i*3+3]
df_test = merge_dataframes(p_test)
tst = [i] * df_test.shape[0]
test_fold= test_fold + tst
ps = PredefinedSplit(test_fold)
# df contains all persons' data in one dataset
df = merge_dataframes(p_list)
df = prep_data(df)
# remove day_of_month variable if present in data
if 'day_of_month' in df.columns:
drop_col=['day_of_month']
df=drop_cols(df, drop_col)
#drop all columns that contain location information
df = drop_location(df)
labels = list(df.columns)
labels.remove('emotion')
X = df[labels]
y = df['emotion']
def grid_search_wrapper(pipe = pipe, search_space = search_space, verbose= False,refit_score=scorer):
"""
fits a GridSearchCV classifiers using refit_score for optimization
prints classifier performance metrics
"""
#cross_validation = StratifiedKFold(n_splits=5, shuffle=True, random_state=random_state)
cross_validation = ps
grid_search = GridSearchCV(pipe, search_space, cv=cross_validation, verbose=verbose, n_jobs = -1) #scoring=scorer, refit=scorer
grid_search.fit(X, y)
return grid_search
# do grid search for best parameters
pipeline_grid_search_RF = grid_search_wrapper(pipe = RF_pipe, search_space = RF_search_space, verbose=2)
pipeline_grid_search_XGB = grid_search_wrapper(pipe = XGB_pipe, search_space = XGB_search_space, verbose=2)
pipeline_grid_search_LR = grid_search_wrapper(pipe = LR_pipe, search_space = LR_search_space, verbose=2)
pipeline_grid_search_MLP = grid_search_wrapper(pipe = MLP_pipe, search_space = MLP_search_space, verbose=2)
pipeline_grid_search_CB = grid_search_wrapper(pipe = CB_pipe, search_space = CB_search_space, verbose=False)
print(pipeline_grid_search_RF.best_estimator_)
print(pipeline_grid_search_RF.best_score_)
print(pipeline_grid_search_XGB.best_estimator_)
print(pipeline_grid_search_XGB.best_score_)
print(pipeline_grid_search_LR.best_estimator_)
print(pipeline_grid_search_LR.best_score_)
print(pipeline_grid_search_CB.best_estimator_)
print(pipeline_grid_search_CB.best_score_)
print(pipeline_grid_search_MLP.best_estimator_)
print(pipeline_grid_search_MLP.best_score_)
# best models
LR_model = LogisticRegression(C=0.01, class_weight=None, dual=False,
fit_intercept=True, intercept_scaling=1,
l1_ratio=None, max_iter=200,
multi_class='auto', n_jobs=None,
penalty='l1', random_state=None,
solver='liblinear', tol=0.0001, verbose=0,
warm_start=False)
RF_model = RandomForestClassifier(bootstrap=True, ccp_alpha=0.0,
class_weight=None, criterion='gini',
max_depth=10, max_features='log2',
max_leaf_nodes=None, max_samples=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0,
n_estimators=50, n_jobs=None,
oob_score=False, random_state=None,
verbose=0, warm_start=False)
MLP_model = MLPClassifier(activation='logistic', alpha=0.0001,
batch_size='auto', beta_1=0.9, beta_2=0.999,
early_stopping=True, epsilon=1e-08,
hidden_layer_sizes=(50, 50, 50),
learning_rate='constant',
learning_rate_init=0.001, max_fun=15000,
max_iter=50, momentum=0.9, n_iter_no_change=10,
nesterovs_momentum=True, power_t=0.5,
random_state=1, shuffle=True, solver='sgd',
tol=0.0001, validation_fraction=0.1,
verbose=False, warm_start=False)
XGB_model = XGBClassifier(base_score=0.5, booster='gbtree',
colsample_bylevel=1, colsample_bynode=1,
colsample_bytree=0.75, gamma=0,
learning_rate=0.15, max_delta_step=0,
max_depth=3, min_child_weight=1, missing=None,
n_estimators=400, n_jobs=1, nthread=None,
objective='multi:softprob', random_state=1,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1,
seed=None, silent=None, subsample=1,
verbosity=1)
CB_model = CatBoostClassifier(random_seed=1, verbose=False,learning_rate= 0.1)
best_models = {} # dictionary of best models with best parameters
best_models['Logistic Regression'] = LR_model
best_models['RandomForest Classifier'] = RF_model
best_models['MLP Classifier'] = MLP_model
best_models['XGBoost Classifier'] = XGB_model
best_models['CatBoost Classifier'] = CB_model
n_features = [90, 90, 90, 90, 90]
nfolds = 6
rnd_state=42
# this is to get all the detailed performance metrics after selecting the best model parameters
k_i = -1
for model_name, model in best_models.items():
k_i = k_i + 1
accu = []
prec = []
rec_ = []
f1_ = []
bl_accu = []
roc_ = []
spec_ = []
i = 1
for train_index, test_index in ps.split():
#print("fold", i)
i+=1
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
#scale features
X_train_scaled, X_test_scaled= scale_data(X_train, X_test)
#feature selection
X_train, X_test = select_k_features(X_train_scaled,X_test_scaled,y_train,k=n_features[k_i])
#oversample training data
#X_train_imb,y_train_imb=over_sample_SMOTE(X_train, y_train)
#X_train_imb,y_train_imb=over_sample_SMOTENC(X_train, y_train)
X_train_imb,y_train_imb=over_sample_SVMSMOTE(X_train, y_train)
# train model on imbalance-handled data
model.fit(X_train_imb, y_train_imb)
#train model on imbalance data
#model.fit(X_train, y_train)
# test model, measure class label and probability score
y_pred = model.predict(X_test)
y_scores = model.predict_proba(X_test)
#calculate metrices
accuracy = accuracy_score(y_test, y_pred)
bl_accuracy = balanced_accuracy_score(y_test, y_pred)
precision=precision_score(y_test, y_pred, average='macro',labels=np.unique(y_pred)) #'weighted', 'micro', 'micro'
recall=recall_score(y_test, y_pred, average='macro',labels=np.unique(y_pred))
#kappa=cohen_kappa_score(y_pred, y_test)
spec=specificity_score(y_test, y_pred, average='macro',labels=np.unique(y_pred))
#roc=roc_auc_score(y_test, y_scores, multi_class='ovr', average='macro')
f1=f1_score(y_test, y_pred, average='macro',labels=np.unique(y_pred))
# sometimes not all classes are present in the test set
not_present = list(set(model.classes_)-set(y_test.unique()))
# get that class
if not_present:
not_present=not_present[0] # get the element then its index
ind= list(model.classes_).index(not_present)
y_scores = np.delete(y_scores,ind,1) # delete it from the scores
            y_scores = y_scores / y_scores.sum(axis=1)[:,None] # renormalize so the class probabilities sum to 1
else:
pass
roc=roc_auc_score(y_test, y_scores, multi_class='ovr', average='macro')
ac=accuracy * 100.0
pr=precision*100
rc=recall*100
f1_p=f1*100
bl_ac=bl_accuracy*100
roc=roc*100
spec=spec*100
accu.append(ac)
prec.append(pr)
rec_.append(rc)
f1_.append(f1_p)
bl_accu.append(bl_ac)
roc_.append(roc)
spec_.append(spec)
print('Restuls for: ', model_name)
print_results(accu, bl_accu, prec, rec_, spec_, roc_, f1_)
###Output
/Users/majed_al-jefri/opt/anaconda3/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1859: UserWarning: y_pred contains classes not in y_true
warnings.warn('y_pred contains classes not in y_true')
/Users/majed_al-jefri/opt/anaconda3/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning: Recall is ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
/Users/majed_al-jefri/opt/anaconda3/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1859: UserWarning: y_pred contains classes not in y_true
warnings.warn('y_pred contains classes not in y_true')
/Users/majed_al-jefri/opt/anaconda3/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning: Recall is ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
|
test/33_art36_substancias.ipynb | ###Markdown
Art. 36: Drinking water must comply with the standard for chemical substances that pose a health risk and for cyanotoxins, as set out in Annexes 9 and 10 and in the other provisions of this Annex. § 1º Where fluorine is added (fluoridation), the recommended fluoride-ion concentrations must follow Annex XXI of Portaria de Consolidação nº 5/2017 and may not exceed the VMP (maximum permitted value) given in Annex 9 of this Annex. § 2º The VMP of each cyanotoxin listed in Annex 10 refers to the total concentration, considering both the intracellular and extracellular fractions.
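As a rough illustration of how compliance with these limits could be checked against the control data prepared below, the following sketch assumes the `'Parâmetro'` and `'Valor'` columns used later in this notebook, plus an illustrative VMP table whose numbers are placeholders rather than the official Annex 9/10 values.
```python
import pandas as pd

# Placeholder VMP limits in mg/L (illustrative only; see Annexes 9 and 10)
VMP = {"Fluoreto": 1.5, "Microcistinas": 0.001}

def vmp_for(parametro: str) -> float:
    """Return the (placeholder) VMP for a parameter name, or NaN if unknown."""
    for name, limit in VMP.items():
        if name.lower() in str(parametro).lower():
            return limit
    return float("nan")

def acima_do_vmp(df: pd.DataFrame) -> pd.DataFrame:
    """Return the samples whose measured value exceeds the VMP of their parameter."""
    out = df.copy()
    out["Valor"] = out["Valor"].astype(str).str.replace(",", ".").astype(float)
    out["VMP"] = out["Parâmetro"].apply(vmp_for)
    return out[out["Valor"] > out["VMP"]]
```
Rows returned by `acima_do_vmp()` would be the candidate Art. 36 violations for the parameters in the placeholder table.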
###Code
import os
import re
import sys
import pprint
import pandas as pd
from scipy.stats import gmean
from dateutil.relativedelta import relativedelta
from paths import *
# Parameters
cod_ibge = '3548906' # São Carlos
cod_ibge = '3526902' # Limeira
cod_ibge = '3501608' # Americana
# Read Table
df_bruta = pd.read_excel(
os.path.join(output_path, str(cod_ibge), 'dados brutos', 'controle', 'controle_semestral.xlsx')
)
# Keep only SAAs (Sistemas de Abastecimento de Água)
df = df_bruta.loc[df_bruta['Tipo Da Forma De Abastecimento'] == 'SAA']
df.info()
list(df.columns)
# Keep only the most recent year
df = df[df['Ano De Referência'] == max(df['Ano De Referência'])].copy()
###Output
_____no_output_____
###Markdown
Article 32
###Code
set(df['Parâmetro'])
for i, row in df.iterrows():
a = row['Parâmetro'].split(' - ')
print(len(a))
###Output
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
###Markdown
Junk (scratch work)
###Code
#df = df[df['Parâmetro'] == 'Escherichia coli'].copy()
df = df[df['Parâmetro'].str.contains('Cloro')].copy()
df.head()
set(df['Ponto De Monitoramento'])
df = df[df['Ponto De Monitoramento'] == 'SAÍDA DO TRATAMENTO'].copy()
df.head()
df = df[['Ano De Referência', 'Mês De Referência', 'Campo', 'Valor']].copy()
df = df.sort_values(by=['Ano De Referência', 'Mês De Referência', 'Campo']).copy()
df.head()
###Output
_____no_output_____
###Markdown
Americana had no samples at the catchment point.... {'SAÍDA DO TRATAMENTO', 'SISTEMA DE DISTRIBUIÇÃO'}
###Code
df['Valor'] = df['Valor'].astype(str).str.replace(',','.')
df['Valor'] = df['Valor'].astype(float).fillna(0.0)
df.head()
###Output
_____no_output_____ |
examples_photonqat/StateTeleportation.ipynb | ###Markdown
State teleportation
###Code
import photonqat as pq
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Photonqat
###Code
r = 2
G = pq.Gaussian(3)
G.D(0, 1 + 0.5j) # state to teleport
G.S(1, -r)
G.S(2, r)
G.BS(1, 2, np.pi/4) # 50:50 beam splitter
G.BS(0, 1, np.pi/4) # 50:50 beam splitter
G.MeasX(0)
G.MeasP(1)
G.X(2, G.Creg(0, "x", scale = np.sqrt(2)))
G.Z(2, G.Creg(1, "p", scale = np.sqrt(2)))
G.run()
G.Wigner(2) # plot
print('measured x =', G.Creg(0, "x").read())
print('measured p =', G.Creg(1, "p").read())
print('teleported mu =', G.mean(2)) # mu of qumode 0
###Output
_____no_output_____ |
notebooks/Data Preparation Large File v1.ipynb | ###Markdown
Groupby apply on a large (relational) data set. Attention: all written functions assume a data frame sorted by date!
###Code
import numpy as np
import pandas as pd

pd_JH_data=pd.read_csv('C:/Users/LATITUDE/ads_covid-19/data/processed/COVID_relational_confirmed.csv',sep=';',parse_dates=[0])
pd_JH_data=pd_JH_data.sort_values('date',ascending=True).reset_index(drop=True).copy()
pd_JH_data.head()
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data=pd_JH_data[((pd_JH_data['country']=='US')|
(pd_JH_data['country']=='Germany'))&
(pd_JH_data['date']>'2020-03-20')]
test_data.head()
test_data.groupby(['country']).agg(np.max)
# %load C:/Users/LATITUDE/ads_covid-19/src/features/build_features.py
import numpy as np
from sklearn import linear_model
reg = linear_model.LinearRegression(fit_intercept=True)
def get_doubling_time_via_regression(in_array):
y=np.array(in_array)
x=np.arange(-1,2).reshape(-1,1)
assert len(in_array)==3
reg.fit(x,y)
intercept=reg.intercept_
slope=reg.coef_
return intercept/slope
test_data.groupby(['state','country']).agg(np.max)
def rolling_reg(df_input,col='confirmed'):
''' input has to be a data frame'''
''' return is single series (mandatory for group by apply)'''
days_back=3
result=df_input[col].rolling(
window=days_back,
min_periods=days_back).apply(get_doubling_time_via_regression,raw=False)
return result
test_data[['state','country','confirmed']].groupby(['state','country']).apply(rolling_reg,'confirmed')
pd_DR_result=pd_JH_data[['state','country','confirmed']].groupby(['state','country']).apply(rolling_reg,'confirmed').reset_index()
pd_DR_result=pd_DR_result.rename(columns={'confirmed':'confirmed_DR',
'level_2':'index'})
pd_DR_result.head()
pd_JH_data=pd_JH_data.reset_index()
pd_JH_data.head()
pd_result_larg=pd.merge(pd_JH_data,pd_DR_result[['index','confirmed_DR']],on=['index'],how='left')
pd_result_larg.head()
#pd_result_larg[pd_result_larg['country']=='Germany']
###Output
_____no_output_____
###Markdown
Filtering the data with groupby apply
###Code
from scipy import signal
def savgol_filter(df_input,column='confirmed',window=5):
''' Savgol Filter which can be used in groupby apply function
it ensures that the data structure is kept'''
    degree=1
    df_result=df_input
    filter_in=df_input[column].fillna(0) # attention with the neutral element here
    result=signal.savgol_filter(np.array(filter_in),
                           window, # window size used for filtering (use the function argument, not a hard-coded 5)
                           degree) # polynomial degree
df_result[column+'_filtered']=result
return df_result
pd_filtered_result=pd_JH_data[['state','country','confirmed']].groupby(['state','country']).apply(savgol_filter).reset_index()
pd_result_larg=pd.merge(pd_result_larg,pd_filtered_result[['index','confirmed_filtered']],on=['index'],how='left')
pd_result_larg.head()
###Output
_____no_output_____
###Markdown
Filtered doubling rate
###Code
pd_filtered_doubling=pd_result_larg[['state','country','confirmed_filtered']].groupby(['state','country']).apply(rolling_reg,'confirmed_filtered').reset_index()
pd_filtered_doubling=pd_filtered_doubling.rename(columns={'confirmed_filtered':'confirmed_filtered_DR',
'level_2':'index'})
pd_filtered_doubling.tail()
pd_result_larg=pd.merge(pd_result_larg,pd_filtered_doubling[['index','confirmed_filtered_DR']],on=['index'],how='left')
pd_result_larg.tail()
mask=pd_result_larg['confirmed']>100
pd_result_larg['confirmed_filtered_DR']=pd_result_larg['confirmed_filtered_DR'].where(mask, other=np.NaN)
pd_result_larg[pd_result_larg['country']=='Germany'].tail()
pd_result_larg.to_csv('C:/Users/LATITUDE/ads_covid-19/data/processed/COVID_final_set.csv',sep=';',index=False)
pd_DR_result = pd_JH_data[['state','country','confirmed']].groupby(['state','country']).apply(rolling_reg, 'confirmed').reset_index()
pd_DR_result = pd_DR_result.rename(columns={'confirmed':'doubling_rate', 'level_2':'index'})
pd_DR_result.head()
pd_JH_data=pd_JH_data.reset_index().head()
pd_JH_data.head()
pd.merge(pd_JH_data,pd_DR_result[['index','doubling_rate']],on=['index'],how='left')
###Output
_____no_output_____ |
Lab 3 - Emotions Classification/[SVA]_Lab_3_Emotions_Classification.ipynb | ###Markdown
SOUND AND VOICE ANALYSIS. **Instructor**: Рыбин Сергей Витальевич. **Group**: 6304. **Student**: Белоусов Евгений Олегович. Emotion Classification. *Required result: unknown*
###Code
import os
import IPython
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import librosa
import librosa.display
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from tqdm.notebook import tqdm
from tensorflow.keras import losses, models, optimizers
from tensorflow.keras.activations import relu, softmax
from tensorflow.keras.callbacks import (EarlyStopping, ModelCheckpoint, TensorBoard)
from tensorflow.keras.layers import (Input, Dense, Convolution2D, BatchNormalization,
Flatten, MaxPool2D, Activation)
from tensorflow.keras.utils import Sequence
from tensorflow.keras import backend as K
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.metrics import classification_report, confusion_matrix
# from google.colab import drive
# drive.mount('/content/drive')
# Manual part of the work: directory with the audio dataset, the set of class labels, and handling of filename variants
predictions = "predictions"
directory = "./content/drive/MyDrive/Training/"
labels = ["angry",
"chilled",
"happy",
"neutral",
"sad"]
num_classes = len(labels)
# Configuration parameters for the neural network model
class Config(object):
def __init__(self,
sampling_rate=16000, audio_duration=7, n_classes=10, use_mfcc=True,
n_mfcc=20, n_folds=10, n_features=100, learning_rate=0.0001, max_epochs=50):
self.sampling_rate = sampling_rate
self.audio_duration = audio_duration
self.n_classes = n_classes
self.use_mfcc = use_mfcc
self.n_mfcc = n_mfcc
self.n_folds = n_folds
self.learning_rate = learning_rate
self.max_epochs = max_epochs
self.n_features = n_features
self.audio_length = self.sampling_rate * self.audio_duration
if self.use_mfcc:
self.dim = (self.n_mfcc, 1 + int(np.floor(self.audio_length / 512)), 1)
else:
self.dim = (self.audio_length, 1)
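# With the defaults above (sampling_rate=16000, audio_duration=7, n_mfcc=20, use_mfcc=True)
# this gives audio_length = 112000 samples and an input shape of
# dim = (20, 1 + floor(112000 / 512), 1) = (20, 219, 1) for the model below.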
# Dataframe preparation
def prepare_dataframe(directory, folder, df):
dirpath = directory + folder
files = ([f.path for f in os.scandir(dirpath) if f.is_file()])
    # Build the dataframe following the schema given in the task
    # Iterate over all audio files in the set
for path in tqdm(files[:]):
filename = os.path.splitext(os.path.basename(path).strip())[0]
label = folder
        # Add the processed audio file to the dataframe
row = pd.Series([filename, label], index = df.columns)
df = df.append(row, ignore_index=True)
return df
# Feature extraction from the set of audio files
def prepare_data(config, directory, folder, X):
dirpath = directory + folder
files = ([f.path for f in os.scandir(dirpath) if f.is_file()])
    # Set the target audio duration
input_length = config.audio_length
i = 0
    # Iterate over all audio files in the set
for path in tqdm(files[:]):
filename = os.path.splitext(os.path.basename(path).strip())[0]
data, sr = librosa.load(path, sr=config.sampling_rate)
        # Trim or pad the audio to the duration specified in the configuration
if len(data) > input_length:
max_offset = len(data) - input_length
offset = np.random.randint(max_offset)
data = data[offset:(input_length+offset)]
else:
if input_length > len(data):
max_offset = input_length - len(data)
offset = np.random.randint(max_offset)
else:
offset = 0
data = np.pad(data, (offset, input_length - len(data) - offset), "constant")
        # Extract MFCC features with librosa
data = librosa.feature.mfcc(data, sr=config.sampling_rate, n_mfcc=config.n_mfcc)
data = np.expand_dims(data, axis=-1)
X[i,] = data
i = i + 1
return X
# Convolutional neural network model
def get_2d_conv_model(config):
num_classes = config.n_classes
inp = Input(shape=(config.dim[0], config.dim[1], 1))
x = Convolution2D(32, (4,10), padding="same")(inp)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = MaxPool2D()(x)
x = Convolution2D(32, (4,10), padding="same")(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = MaxPool2D()(x)
x = Convolution2D(32, (4,10), padding="same")(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = MaxPool2D()(x)
x = Flatten()(x)
x = Dense(64)(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
out = Dense(num_classes, activation=softmax)(x)
model = models.Model(inputs=inp, outputs=out)
opt = optimizers.Adam(config.learning_rate)
model.compile(optimizer=opt, loss=losses.SparseCategoricalCrossentropy(), metrics=['acc'])
return model
# Classification confusion matrix
def plot_confusion_matrix(predictions, y):
max_test = y
max_predictions = np.argmax(predictions, axis=1)
matrix = confusion_matrix(max_test, max_predictions)
plt.figure(figsize=(12, 8))
sns.heatmap(matrix, xticklabels=labels, yticklabels=labels, annot=True,
linewidths = 0.1, fmt="d", cmap = 'YlGnBu');
plt.title("Матрица ошибок классификации", fontsize = 15)
plt.ylabel("Настоящий класс")
plt.xlabel("Предсказанный")
plt.show()
# Prepare the dataframe
df = pd.DataFrame(columns=["filename", "label"])
for label in labels:
df = prepare_dataframe(directory, label, df)
df.head(-5)
# Serialize the dataframe to save time later
df.to_pickle("./content/drive/MyDrive/SVA_lab_3_dataframe.pkl")
# Deserialize the previously saved dataframe
df = pd.read_pickle("./content/drive/MyDrive/SVA_lab_3_dataframe.pkl")
# Count the number of recordings in each class
df["label"].value_counts()
# Encode the class labels as integers
encode = LabelEncoder()
encoded_labels = encode.fit_transform(df['label'].to_numpy())
df = df.assign(label=encoded_labels)
df.head()
# Set the configuration parameters
config = Config(n_classes=num_classes, n_folds=10, n_mfcc=20)
X = np.empty(shape=(df.shape[0], config.dim[0], config.dim[1], 1))
for label in labels:
X_train = prepare_data(config, directory, label, X)
print(X_train.shape)
# Normalize the data
mean = np.mean(X_train, axis=0)
std = np.std(X_train, axis=0)
X_train = (X_train - mean)/std
X_train
# EVALUATION ON THE TEST DATA SET
files = ([f.path for f in os.scandir("./content/drive/MyDrive/Test") if f.is_file()])
# Build the dataframe following the schema given in the task
submission = pd.DataFrame(columns=["fname"])
# Iterate over all audio files in the set
for path in tqdm(files[:]):
filename = os.path.splitext(os.path.basename(path).strip())[0]
    # Add the audio file name to the dataframe
row = pd.Series([filename], index = submission.columns)
submission = submission.append(row, ignore_index=True)
submission.head()
X = np.empty(shape=(submission.shape[0], config.dim[0], config.dim[1], 1))
X_test = prepare_data(config, "./content/drive/MyDrive/", "Test", X)
print(X_test.shape)
# Normalize the data
mean = np.mean(X_test, axis=0)
std = np.std(X_test, axis=0)
X_test = (X_test - mean)/std
X_test
import shutil
if not os.path.exists(predictions):
    os.mkdir(predictions)
if os.path.exists("./content/drive/MyDrive/" + predictions):
    shutil.rmtree("./content/drive/MyDrive/" + predictions)
# For cross-validation we use StratifiedKFold, a variant of the KFold algorithm that returns
# stratified folds: each fold contains approximately the same percentage of samples of each
# target class as the complete set.
skf = StratifiedKFold(n_splits=config.n_folds)
y_train = df["label"].values
y_train = np.stack(y_train[:])
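# y_train holds the integer-encoded labels produced by LabelEncoder above, which is the
# format expected by the SparseCategoricalCrossentropy loss used in get_2d_conv_model.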
model = get_2d_conv_model(config)
i = 0
for train_split, val_split in skf.split(X_train, y_train):
K.clear_session()
    # Split the available data into training and validation subsets
X, y, X_val, y_val = X_train[train_split], y_train[train_split], X_train[val_split], y_train[val_split]
    # Callback functions for the Keras model
    # During training, save the weights of the best model for potential later use
checkpoint = ModelCheckpoint('best_%d.h5'%i, monitor='val_loss', verbose=1, save_best_only=True)
early = EarlyStopping(monitor="val_loss", mode="min", patience=5)
callbacks_list = [checkpoint, early]
print("#"*50)
print("Fold: ", i)
model = get_2d_conv_model(config)
history = model.fit(X, y, validation_data=(X_val, y_val), callbacks=callbacks_list, batch_size=256, epochs=config.max_epochs)
model.load_weights('best_%d.h5'%i)
    # Save the model predictions on the training data
print("TRAIN PREDICTIONS: ", i)
predictions = model.predict(X_train, batch_size=256)
save_train_preds_path = "./predictions/train_predictions_{:d}.npy".format(i)
np.save(save_train_preds_path, predictions)
plot_confusion_matrix(predictions, y_train)
    # Save the model predictions on the test data
print("TEST PREDICTIONS: ", i)
predictions = model.predict(X_test, batch_size=256)
save_test_preds_path = "./predictions/test_predictions_{:d}.npy".format(i)
np.save(save_test_preds_path, predictions)
    # # Create the submission file with the results
# top_3 = np.array(labels)[np.argsort(-predictions, axis=1)[:, :3]]
# predicted_labels = [' '.join(list(x)) for x in top_3]
# df_test['label'] = predicted_labels
# save_preds_path = "./predictions/predictions_{:d}.npy".format(i)
# df_test[['label']].to_csv(save_preds_path)
j = 0
for prob in predictions:
#print(prob)
#print(np.argmax(prob))
submission.loc[j,'score'] = max(prob)
prob_index = list(prob).index(max(prob))
#print(prob_index)
submission.loc[j,'label'] = prob_index
j += 1
submission_result = submission.copy()
submission_result['label'] = encode.inverse_transform(np.array(submission['label']).astype(int))
submission = submission_result
save_submission_path = "./predictions/submission_{:d}.npy".format(i)
submission.to_csv(save_submission_path.format(i), index=False)
i += 1
###Output
##################################################
Fold: 0
Epoch 1/50
3/4 [=====================>........] - ETA: 0s - loss: 1.7237 - acc: 0.1628
Epoch 00001: val_loss improved from inf to 1.59251, saving model to best_0.h5
4/4 [==============================] - 4s 757ms/step - loss: 1.7206 - acc: 0.1656 - val_loss: 1.5925 - val_acc: 0.0930
Epoch 2/50
3/4 [=====================>........] - ETA: 0s - loss: 1.4694 - acc: 0.3802
Epoch 00002: val_loss improved from 1.59251 to 1.53576, saving model to best_0.h5
4/4 [==============================] - 3s 691ms/step - loss: 1.4703 - acc: 0.3803 - val_loss: 1.5358 - val_acc: 0.3721
Epoch 3/50
3/4 [=====================>........] - ETA: 0s - loss: 1.3805 - acc: 0.3971
Epoch 00003: val_loss improved from 1.53576 to 1.49723, saving model to best_0.h5
4/4 [==============================] - 3s 682ms/step - loss: 1.3819 - acc: 0.3959 - val_loss: 1.4972 - val_acc: 0.4070
Epoch 4/50
3/4 [=====================>........] - ETA: 0s - loss: 1.3428 - acc: 0.4036
Epoch 00004: val_loss improved from 1.49723 to 1.46991, saving model to best_0.h5
4/4 [==============================] - 3s 686ms/step - loss: 1.3467 - acc: 0.4023 - val_loss: 1.4699 - val_acc: 0.4070
Epoch 5/50
3/4 [=====================>........] - ETA: 0s - loss: 1.3185 - acc: 0.4115
Epoch 00005: val_loss improved from 1.46991 to 1.45705, saving model to best_0.h5
4/4 [==============================] - 3s 698ms/step - loss: 1.3192 - acc: 0.4101 - val_loss: 1.4571 - val_acc: 0.4070
Epoch 6/50
3/4 [=====================>........] - ETA: 0s - loss: 1.2930 - acc: 0.4193
Epoch 00006: val_loss improved from 1.45705 to 1.44930, saving model to best_0.h5
4/4 [==============================] - 3s 687ms/step - loss: 1.2907 - acc: 0.4191 - val_loss: 1.4493 - val_acc: 0.4186
Epoch 7/50
3/4 [=====================>........] - ETA: 0s - loss: 1.2761 - acc: 0.4193
Epoch 00007: val_loss improved from 1.44930 to 1.44724, saving model to best_0.h5
4/4 [==============================] - 3s 680ms/step - loss: 1.2758 - acc: 0.4204 - val_loss: 1.4472 - val_acc: 0.4186
Epoch 8/50
3/4 [=====================>........] - ETA: 0s - loss: 1.2617 - acc: 0.4128
Epoch 00008: val_loss improved from 1.44724 to 1.44694, saving model to best_0.h5
4/4 [==============================] - 3s 680ms/step - loss: 1.2640 - acc: 0.4114 - val_loss: 1.4469 - val_acc: 0.4186
Epoch 9/50
3/4 [=====================>........] - ETA: 0s - loss: 1.2449 - acc: 0.4180
Epoch 00009: val_loss improved from 1.44694 to 1.44666, saving model to best_0.h5
4/4 [==============================] - 3s 679ms/step - loss: 1.2494 - acc: 0.4166 - val_loss: 1.4467 - val_acc: 0.4186
Epoch 10/50
3/4 [=====================>........] - ETA: 0s - loss: 1.2308 - acc: 0.4245
Epoch 00010: val_loss improved from 1.44666 to 1.44311, saving model to best_0.h5
4/4 [==============================] - 3s 681ms/step - loss: 1.2332 - acc: 0.4269 - val_loss: 1.4431 - val_acc: 0.4186
Epoch 11/50
3/4 [=====================>........] - ETA: 0s - loss: 1.2160 - acc: 0.4180
Epoch 00011: val_loss improved from 1.44311 to 1.43768, saving model to best_0.h5
4/4 [==============================] - 3s 687ms/step - loss: 1.2187 - acc: 0.4179 - val_loss: 1.4377 - val_acc: 0.4186
Epoch 12/50
3/4 [=====================>........] - ETA: 0s - loss: 1.2119 - acc: 0.4245
Epoch 00012: val_loss improved from 1.43768 to 1.42921, saving model to best_0.h5
4/4 [==============================] - 3s 684ms/step - loss: 1.2104 - acc: 0.4230 - val_loss: 1.4292 - val_acc: 0.4419
Epoch 13/50
3/4 [=====================>........] - ETA: 0s - loss: 1.2017 - acc: 0.4076
Epoch 00013: val_loss improved from 1.42921 to 1.41948, saving model to best_0.h5
4/4 [==============================] - 3s 680ms/step - loss: 1.2071 - acc: 0.4049 - val_loss: 1.4195 - val_acc: 0.4419
Epoch 14/50
3/4 [=====================>........] - ETA: 0s - loss: 1.2068 - acc: 0.4102
Epoch 00014: val_loss improved from 1.41948 to 1.41089, saving model to best_0.h5
4/4 [==============================] - 3s 677ms/step - loss: 1.2059 - acc: 0.4114 - val_loss: 1.4109 - val_acc: 0.4419
Epoch 15/50
3/4 [=====================>........] - ETA: 0s - loss: 1.2006 - acc: 0.4232
Epoch 00015: val_loss improved from 1.41089 to 1.40499, saving model to best_0.h5
4/4 [==============================] - 3s 681ms/step - loss: 1.2016 - acc: 0.4230 - val_loss: 1.4050 - val_acc: 0.4419
Epoch 16/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1897 - acc: 0.4167
Epoch 00016: val_loss improved from 1.40499 to 1.39534, saving model to best_0.h5
4/4 [==============================] - 3s 678ms/step - loss: 1.1883 - acc: 0.4191 - val_loss: 1.3953 - val_acc: 0.4419
Epoch 17/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1863 - acc: 0.4245
Epoch 00017: val_loss improved from 1.39534 to 1.38275, saving model to best_0.h5
4/4 [==============================] - 3s 681ms/step - loss: 1.1866 - acc: 0.4256 - val_loss: 1.3828 - val_acc: 0.4419
Epoch 18/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1852 - acc: 0.4245
Epoch 00018: val_loss improved from 1.38275 to 1.37861, saving model to best_0.h5
4/4 [==============================] - 3s 681ms/step - loss: 1.1864 - acc: 0.4230 - val_loss: 1.3786 - val_acc: 0.4419
Epoch 19/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1867 - acc: 0.4232
Epoch 00019: val_loss improved from 1.37861 to 1.37480, saving model to best_0.h5
4/4 [==============================] - 3s 682ms/step - loss: 1.1893 - acc: 0.4230 - val_loss: 1.3748 - val_acc: 0.4419
Epoch 20/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1815 - acc: 0.4245
Epoch 00020: val_loss improved from 1.37480 to 1.36905, saving model to best_0.h5
4/4 [==============================] - 3s 675ms/step - loss: 1.1810 - acc: 0.4256 - val_loss: 1.3691 - val_acc: 0.4419
Epoch 21/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1796 - acc: 0.4206
Epoch 00021: val_loss improved from 1.36905 to 1.35651, saving model to best_0.h5
4/4 [==============================] - 3s 682ms/step - loss: 1.1822 - acc: 0.4204 - val_loss: 1.3565 - val_acc: 0.4419
Epoch 22/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1722 - acc: 0.4167
Epoch 00022: val_loss improved from 1.35651 to 1.33854, saving model to best_0.h5
4/4 [==============================] - 3s 680ms/step - loss: 1.1716 - acc: 0.4179 - val_loss: 1.3385 - val_acc: 0.4419
Epoch 23/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1582 - acc: 0.4284
Epoch 00023: val_loss improved from 1.33854 to 1.33254, saving model to best_0.h5
4/4 [==============================] - 3s 683ms/step - loss: 1.1580 - acc: 0.4295 - val_loss: 1.3325 - val_acc: 0.4302
Epoch 24/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1582 - acc: 0.4245
Epoch 00024: val_loss improved from 1.33254 to 1.33048, saving model to best_0.h5
4/4 [==============================] - 3s 680ms/step - loss: 1.1561 - acc: 0.4269 - val_loss: 1.3305 - val_acc: 0.4419
Epoch 25/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1502 - acc: 0.4245
Epoch 00025: val_loss improved from 1.33048 to 1.32961, saving model to best_0.h5
4/4 [==============================] - 3s 680ms/step - loss: 1.1499 - acc: 0.4256 - val_loss: 1.3296 - val_acc: 0.4302
Epoch 26/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1474 - acc: 0.4180
Epoch 00026: val_loss improved from 1.32961 to 1.32696, saving model to best_0.h5
4/4 [==============================] - 3s 684ms/step - loss: 1.1504 - acc: 0.4179 - val_loss: 1.3270 - val_acc: 0.4419
Epoch 27/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1478 - acc: 0.4010
Epoch 00027: val_loss did not improve from 1.32696
4/4 [==============================] - 3s 665ms/step - loss: 1.1476 - acc: 0.4010 - val_loss: 1.3296 - val_acc: 0.4535
Epoch 28/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1376 - acc: 0.4336
Epoch 00028: val_loss did not improve from 1.32696
4/4 [==============================] - 3s 666ms/step - loss: 1.1373 - acc: 0.4334 - val_loss: 1.3375 - val_acc: 0.4535
Epoch 29/50
3/4 [=====================>........] - ETA: 0s - loss: 1.1343 - acc: 0.4310
Epoch 00029: val_loss did not improve from 1.32696
4/4 [==============================] - 3s 663ms/step - loss: 1.1331 - acc: 0.4321 - val_loss: 1.3525 - val_acc: 0.4419
predicoes/climatechange.ipynb
###Markdown
Climate change analysis https://docs.microsoft.com/en-us/learn/modules/analyze-climate-data-with-azure-notebooks/0-introduction
###Code
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LinearRegression
import seaborn as sns; sns.set()
print("Setup completo")
!curl https://a4r.blob.core.windows.net/public/notebook-resources.zip -o notebook-resources.zip
yearsBase, meanBase = np.loadtxt('/kaggle/input/climatechance/5-year-mean-1951-1980.csv', delimiter=',', usecols=(0, 1), unpack=True)
years, mean = np.loadtxt('/kaggle/input/climatechance/5-year-mean-1882-2014.csv', delimiter=',', usecols=(0, 1), unpack=True)
###Output
_____no_output_____
###Markdown
Plotting a scatter plot
###Code
plt.scatter(yearsBase, meanBase)
plt.title('scatter plot of mean temp difference vs year')
plt.xlabel('years', fontsize=12)
plt.ylabel('mean temp difference', fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
Adding a linear regression with NumPy
###Code
# Creates a linear regression from the data points
m,b = np.polyfit(yearsBase, meanBase, 1)
# This is a simple y = mx + b line function
def f(x):
return m*x + b
# This generates the same scatter plot as before, but adds a line plot using the function above
plt.scatter(yearsBase, meanBase)
plt.plot(yearsBase, f(yearsBase))
plt.title('scatter plot of mean temp difference vs year')
plt.xlabel('years', fontsize=12)
plt.ylabel('mean temp difference', fontsize=12)
plt.show()
# Prints text to the screen showing the computed values of m and b
print(' y = {0} * x + {1}'.format(m, b))
plt.show()
###Output
_____no_output_____
###Markdown
Adding a linear regression with Scikit-Learn
###Code
# Pick the Linear Regression model and instantiate it
model = LinearRegression(fit_intercept=True)
# Fit/build the model
model.fit(yearsBase[:, np.newaxis], meanBase)
mean_predicted = model.predict(yearsBase[:, np.newaxis])
# Generate a plot like the one in the previous exercise
plt.scatter(yearsBase, meanBase)
plt.plot(yearsBase, mean_predicted)
plt.title('scatter plot of mean temp difference vs year')
plt.xlabel('years', fontsize=12)
plt.ylabel('mean temp difference', fontsize=12)
plt.show()
print(' y = {0} * x + {1}'.format(model.coef_[0], model.intercept_))
###Output
_____no_output_____
###Markdown
Analyzing with Seaborn
###Code
plt.scatter(years, mean)
plt.title('scatter plot of mean temp difference vs year')
plt.xlabel('years', fontsize=12)
plt.ylabel('mean temp difference', fontsize=12)
sns.regplot(yearsBase, meanBase)
plt.show()
###Output
_____no_output_____
DNN.ipynb
###Markdown
**Import Libraries and modules**
###Code
# https://keras.io/
!pip install -q keras
import keras
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten, Add
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras.datasets import mnist
###Output
_____no_output_____
###Markdown
Load pre-shuffled MNIST data into train and test sets
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print (X_train.shape)
from matplotlib import pyplot as plt
%matplotlib inline
plt.imshow(X_train[0])
X_train = X_train.reshape(X_train.shape[0], 28, 28,1)
X_test = X_test.reshape(X_test.shape[0], 28, 28,1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
y_train[:10]
# Convert 1-dimensional class arrays to 10-dimensional class matrices
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
Y_train[:10]
from keras.layers import Activation
model = Sequential()
model.add(Convolution2D(256, (3, 3), activation='relu', input_shape=(28,28,1))) # 28 -> 26
model.add(Convolution2D(128, (3, 3), activation='relu')) # 26 -> 24
model.add(Convolution2D(128, (3, 3), activation='relu')) # 24 -> 22
model.add(Convolution2D(64, (3, 3), activation='relu'))  # 22 -> 20
model.add(Convolution2D(64, (3, 3), activation='relu'))  # 18
model.add(Convolution2D(64, (3, 3), activation='relu'))  # 16
model.add(Convolution2D(64, (3, 3), activation='relu'))  # 14
model.add(Convolution2D(32, (3, 3), activation='relu'))  # 12
model.add(Convolution2D(32, (3, 3), activation='relu'))  # 10
model.add(Convolution2D(32, (3, 3), activation='relu'))  # 8
model.add(Convolution2D(16, (3, 3), activation='relu'))  # 6
model.add(Convolution2D(16, (3, 3), activation='relu'))  # 4
model.add(Convolution2D(10, (3, 3), activation='relu'))  # 2
model.add(Convolution2D(10, (1, 1), activation='relu'))  # 2
model.add(Convolution2D(10, 2))
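# The 2x2 kernel above reduces the remaining 2x2x10 feature map to 1x1x10,
# so Flatten + softmax below act as the final 10-way classifier.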
model.add(Flatten())
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=32, epochs=10, verbose=1)
score = model.evaluate(X_test, Y_test, verbose=0)
print(score)
y_pred = model.predict(X_test)
model.fit(X_train, Y_train, batch_size=32, epochs=20, initial_epoch = 10, verbose=1)
score = model.evaluate(X_test, Y_test, verbose=1)
print(score)
print(y_pred[:9])
print(y_test[:9])
layer_dict = dict([(layer.name, layer) for layer in model.layers])
import numpy as np
from matplotlib import pyplot as plt
from keras import backend as K
%matplotlib inline
# util function to convert a tensor into a valid image
def deprocess_image(x):
# normalize tensor: center on 0., ensure std is 0.1
x -= x.mean()
x /= (x.std() + 1e-5)
x *= 0.1
# clip to [0, 1]
x += 0.5
x = np.clip(x, 0, 1)
# convert to RGB array
x *= 255
#x = x.transpose((1, 2, 0))
x = np.clip(x, 0, 255).astype('uint8')
return x
def vis_img_in_filter(img = np.array(X_train[2]).reshape((1, 28, 28, 1)).astype(np.float64),
layer_name = 'conv2d_1'):
layer_output = layer_dict[layer_name].output
img_ascs = list()
for filter_index in range(layer_output.shape[3]):
# build a loss function that maximizes the activation
# of the nth filter of the layer considered
loss = K.mean(layer_output[:, :, :, filter_index])
# compute the gradient of the input picture wrt this loss
grads = K.gradients(loss, model.input)[0]
# normalization trick: we normalize the gradient
grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)
# this function returns the loss and grads given the input picture
iterate = K.function([model.input], [loss, grads])
# step size for gradient ascent
step = 5.
img_asc = np.array(img)
# run gradient ascent for 20 steps
for i in range(20):
loss_value, grads_value = iterate([img_asc])
img_asc += grads_value * step
img_asc = img_asc[0]
img_ascs.append(deprocess_image(img_asc).reshape((28, 28)))
if layer_output.shape[3] >= 35:
plot_x, plot_y = 6, 6
elif layer_output.shape[3] >= 23:
plot_x, plot_y = 4, 6
elif layer_output.shape[3] >= 11:
plot_x, plot_y = 2, 6
else:
plot_x, plot_y = 1, 2
fig, ax = plt.subplots(plot_x, plot_y, figsize = (12, 12))
ax[0, 0].imshow(img.reshape((28, 28)), cmap = 'gray')
ax[0, 0].set_title('Input image')
fig.suptitle('Input image and %s filters' % (layer_name,))
fig.tight_layout(pad = 0.3, rect = [0, 0, 0.9, 0.9])
for (x, y) in [(i, j) for i in range(plot_x) for j in range(plot_y)]:
if x == 0 and y == 0:
continue
ax[x, y].imshow(img_ascs[x * plot_y + y - 1], cmap = 'gray')
ax[x, y].set_title('filter %d' % (x * plot_y + y - 1))
vis_img_in_filter()
import numpy as np
from matplotlib import pyplot as plt
from keras import backend as K
%matplotlib inline
# util function to convert a tensor into a valid image
def deprocess_image(x):
# normalize tensor: center on 0., ensure std is 0.1
x -= x.mean()
x /= (x.std() + 1e-5)
x *= 0.1
# clip to [0, 1]
x += 0.5
x = np.clip(x, 0, 1)
# convert to RGB array
x *= 255
#x = x.transpose((1, 2, 0))
x = np.clip(x, 0, 255).astype('uint8')
return x
def vis_img_in_filter(img = np.array(X_train[0]).reshape((1, 28, 28, 1)).astype(np.float64),
layer_name = 'conv2d_1'):
layer_output = layer_dict[layer_name].output
img_ascs = list()
for filter_index in range(layer_output.shape[3]):
# build a loss function that maximizes the activation
# of the nth filter of the layer considered
loss = K.mean(layer_output[:, :, :, filter_index])
# compute the gradient of the input picture wrt this loss
grads = K.gradients(loss, model.input)[0]
# normalization trick: we normalize the gradient
grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)
# this function returns the loss and grads given the input picture
iterate = K.function([model.input], [loss, grads])
# step size for gradient ascent
step = 5.
img_asc = np.array(img)
# run gradient ascent for 20 steps
for i in range(20):
loss_value, grads_value = iterate([img_asc])
img_asc += grads_value * step
img_asc = img_asc[0]
img_ascs.append(deprocess_image(img_asc).reshape((28, 28)))
if layer_output.shape[3] >= 35:
plot_x, plot_y = 6, 6
elif layer_output.shape[3] >= 23:
plot_x, plot_y = 4, 6
elif layer_output.shape[3] >= 11:
plot_x, plot_y = 2, 6
else:
plot_x, plot_y = 1, 2
fig, ax = plt.subplots(plot_x, plot_y, figsize = (12, 12))
ax[0, 0].imshow(img.reshape((28, 28)), cmap = 'gray')
ax[0, 0].set_title('Input image')
fig.suptitle('Input image and %s filters' % (layer_name,))
fig.tight_layout(pad = 0.3, rect = [0, 0, 0.9, 0.9])
for (x, y) in [(i, j) for i in range(plot_x) for j in range(plot_y)]:
if x == 0 and y == 0:
continue
ax[x, y].imshow(img_ascs[x * plot_y + y - 1], cmap = 'gray')
ax[x, y].set_title('filter %d' % (x * plot_y + y - 1))
vis_img_in_filter()
import numpy as np
from matplotlib import pyplot as plt
from keras import backend as K
%matplotlib inline
# util function to convert a tensor into a valid image
def deprocess_image(x):
# normalize tensor: center on 0., ensure std is 0.1
x -= x.mean()
x /= (x.std() + 1e-5)
x *= 0.1
# clip to [0, 1]
x += 0.5
x = np.clip(x, 0, 1)
# convert to RGB array
x *= 255
#x = x.transpose((1, 2, 0))
x = np.clip(x, 0, 255).astype('uint8')
return x
def vis_img_in_filter(img = np.array(X_train[2]).reshape((1, 28, 28, 1)).astype(np.float64),
layer_name = 'conv2d_2'):
layer_output = layer_dict[layer_name].output
img_ascs = list()
for filter_index in range(layer_output.shape[3]):
# build a loss function that maximizes the activation
# of the nth filter of the layer considered
loss = K.mean(layer_output[:, :, :, filter_index])
# compute the gradient of the input picture wrt this loss
grads = K.gradients(loss, model.input)[0]
# normalization trick: we normalize the gradient
grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)
# this function returns the loss and grads given the input picture
iterate = K.function([model.input], [loss, grads])
# step size for gradient ascent
step = 5.
img_asc = np.array(img)
# run gradient ascent for 20 steps
for i in range(20):
loss_value, grads_value = iterate([img_asc])
img_asc += grads_value * step
img_asc = img_asc[0]
img_ascs.append(deprocess_image(img_asc).reshape((28, 28)))
if layer_output.shape[3] >= 35:
plot_x, plot_y = 6, 6
elif layer_output.shape[3] >= 23:
plot_x, plot_y = 4, 6
elif layer_output.shape[3] >= 11:
plot_x, plot_y = 2, 6
else:
plot_x, plot_y = 1, 2
fig, ax = plt.subplots(plot_x, plot_y, figsize = (12, 12))
ax[0, 0].imshow(img.reshape((28, 28)), cmap = 'gray')
ax[0, 0].set_title('Input image')
fig.suptitle('Input image and %s filters' % (layer_name,))
fig.tight_layout(pad = 0.3, rect = [0, 0, 0.9, 0.9])
for (x, y) in [(i, j) for i in range(plot_x) for j in range(plot_y)]:
if x == 0 and y == 0:
continue
ax[x, y].imshow(img_ascs[x * plot_y + y - 1], cmap = 'gray')
ax[x, y].set_title('filter %d' % (x * plot_y + y - 1))
vis_img_in_filter()
###Output
_____no_output_____
###Markdown
IntroductionThis is an Earth Engine TensorFlow notebook demonstrating workflows to train an automated land cover change prediction DNN model. Specifically, this notebook shows: 1. Ingesting previously exported csv files into TFRecord format. 2. Preparing the data for use in a TensorFlow model. 3. Training and validating a simple model (Keras `Sequential` neural network) in TensorFlow. 4. Making predictions on image data exported from Earth Engine in TFRecord format. 5. Ingesting classified image data to Earth Engine in TFRecord format. Setup Install the Earth Engine client libraryThis only needs to be done once per notebook.
###Code
!pip install earthengine-api
###Output
_____no_output_____
###Markdown
Authenticate to Earth Engine
###Code
import ee
ee.Authenticate()
ee.Initialize()
# Test the earthengine command by getting help on upload.
!earthengine upload image -h
###Output
_____no_output_____
###Markdown
Google AuthenticationTo read/write from a Google Cloud Storage bucket to which you have access, it's necessary to authenticate (as yourself). You'll also need to authenticate as yourself with Earth Engine, so that you'll have access to your scripts, assets, etc.
###Code
from google.colab import auth, drive
auth.authenticate_user()
###Output
_____no_output_____
###Markdown
Mount Google Drive
###Code
ROOT = ('/content/drive')
drive.mount(ROOT)
###Output
_____no_output_____
###Markdown
Make sure python can reference our module library
###Code
import sys
sys.path.append('/content/drive/My Drive/repos/ACD_methods/EEcode/Python')
sys.path
###Output
_____no_output_____
###Markdown
Set up Tensorflow The default public runtime already has the tensorflow libraries we need installed. Before any operations from the TensorFlow API are used, import TensorFlow. Eager execution should be enabled by default as of TF v2.x[`tf.enable_eager_execution()` docs](https://www.tensorflow.org/api_docs/python/tf/enable_eager_execution).
###Code
import tensorflow as tf
tf.executing_eagerly()
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Set up FoliumThe default public runtime already has the Folium library we will use for visualization. Import the library, check the version, and define the URL where Folium will look for Earth Engine generated map tiles.
###Code
import folium
print(folium.__version__)
# Define a method for displaying Earth Engine image tiles to a folium map.
def add_ee_layer(self, ee_image_object, vis_params, name):
map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles = map_id_dict['tile_fetcher'].url_format,
attr = "Map Data © Google Earth Engine",
name = name,
overlay = True,
control = True
).add_to(self)
# Add EE drawing method to folium.
folium.Map.add_ee_layer = add_ee_layer
###Output
_____no_output_____
###Markdown
Get Training and Testing data from Earth EngineWe have previously exported csv files to Google Drive that contain per-pixel output from our change detection algorithms. This output consists of 6 metrics, and a field indicating whether that pixel is a true 'changed' pixel or not.
###Code
from os.path import join
DATA_DIR = 'My Drive/EarthEngine/ACD/IW/S2/S2_2'
# Number of records in dataset
SIZE = 4826575
BATCH = 10
NUMERIC_COLUMNS = ['cv_z', 'ndvi_z', 'nbr_z', 'ndwi_z', 'rcvmax_z', 'ndsi_z']
CATEGORY_COLUMNS = {'habitat':['shrub', 'forest', 'desert', 'grassland', 'wetland']}
FEATURE_COLUMNS = NUMERIC_COLUMNS + list(CATEGORY_COLUMNS.keys())
LABEL_COLUMN = 'disturbance'
LABELS = ['none', 'bare', 'residential', 'solar']
# LABEL_COLUMN = 'change'
# LABELS = [0, 1]
# factor for train/test split
FACTOR = 5
BUCKET = 'cvod-203614-mlengine'
PROJECT = 'ACD_methods'
PREDICT_DIR = 'data/predict'
LOG_DIR = 'drive/My Drive/Tensorflow/models/ACD_DNN'
def get_csv_dataset(file_path, batch, labels, features, **kwargs):
"""
Construct a tfrecord dataset from a list of csv files
Parameters:
file_path (list<str>): string or list of strings specifying input files
batch (int): batch size
labels (str): field name containing labels
features(list<str>): field name(s) containing feature values
Returns:
tf.data.Dataset
"""
dataset = tf.data.experimental.make_csv_dataset(
file_path,
batch_size= batch,
label_name= labels,
select_columns = features + [labels],
na_value="?",
# one epoch of data initially because otherwise splitting runs infinitely
num_epochs=1,
ignore_errors=True,
**kwargs)
return dataset
# Specify the path to our csv data
csv_file_path = join(ROOT, DATA_DIR)
# Put all the csv files from our data directory into a list
rawData = tf.io.gfile.glob(csv_file_path + '/*.csv')
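# Optional sanity check (assumes at least one exported csv was found): read the header
# of the first file to confirm it contains the *_z metrics, the habitat field,
# and the label column defined above.
with tf.io.gfile.GFile(rawData[0], 'r') as f:
    print(f.readline())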
# Create a TFDataset
tfData = get_csv_dataset(rawData, BATCH, LABEL_COLUMN, FEATURE_COLUMNS)
# Inspect a batch of records
# # take creates a new dataset with n elements (batches)
# test = tfData.take(1)
# feats, labs = iter(test).next()
# # features are a dictionary of tensors
# print(feats)
def preprocess_record(features, labels):
"""
Process input data by converting categorical features to lowercase and labels to one-hot
Parameters:
features (list<str>): column names of features to be used for prediction
labels (str): column name containing labels
Returns:
tuple: dictionary of features and label tensor
"""
# convert labels to one_hot tensor
if labels.dtype.__eq__(tf.dtypes.string):
# manually convert strings to one-hot tensor
matches = tf.stack([tf.equal(labels, s) for s in LABELS], axis = -1)
labels = tf.cast(matches, tf.float32)
else:
labels = tf.one_hot(labels, len(LABELS))
# change all strings to lowercase
features = {key:tf.strings.lower(feature) if key in CATEGORY_COLUMNS.keys() else feature for key, feature in features.items()}
return features, labels
def train_test_split(dataset, factor):
"""
Divide a tf.data.Dataset into train and test splits
Parameters:
dataset (tf.data.Dataset): dataset with features and labels to split
factor (int): numerator for fraction of testing data (e.g. 5 = 1/5)
Returns:
tuple: two tf.data.Datasets
"""
def is_test(x, y):
return x % factor == 0
def is_train(x, y):
return not is_test(x, y)
def recover(x,y):
return y
dataset = dataset.enumerate()
test = dataset.filter(is_test).map(recover)
train = dataset.filter(is_train).map(recover)
return test, train
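# With FACTOR = 5 this keeps every 5th enumerated record for testing,
# i.e. roughly a 20% test / 80% train split.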
# process the data and shuffle once before splitting
tfData = tfData.map(preprocess_record).shuffle(SIZE, reshuffle_each_iteration = False)
# split into testing and training data
testData, trainData = train_test_split(tfData, FACTOR)
# Inspect our splits. These should be shuffled and batched
# take creates a new dataset with n elements (batches)
test = testData.take(1)
feats, labs = iter(test).next()
# features are a dictionary of tensors
print(labs)
print(feats['cv_z'].shape)
###Output
_____no_output_____
###Markdown
Create the Keras modelBefore we create the model, there's still a wee bit of pre-processing to get the data into the right input shape and a format that can be used with cross-entropy loss. Specifically, Keras expects a list of inputs and a one-hot vector for the class. (See [the Keras loss function docs](https://keras.io/losses/), [the TensorFlow categorical identity docs](https://www.tensorflow.org/guide/feature_columnscategorical_identity_column) and [the `tf.one_hot` docs](https://www.tensorflow.org/api_docs/python/tf/one_hot) for details). Here we will use a simple neural network model with a 64 node hidden layer, a dropout layer and an output layer. Once the dataset has been prepared, define the model, compile it, fit it to the training data. See [the Keras `Sequential` model guide](https://keras.io/getting-started/sequential-model-guide/) for more details.
###Code
# Create feature columns for feature data as part of graph
numericColumns = [tf.feature_column.numeric_column(ft) for ft in NUMERIC_COLUMNS]
categoricalColumns = [tf.feature_column.indicator_column(tf.feature_column.categorical_column_with_vocabulary_list(key, vocab)) for key, vocab in CATEGORY_COLUMNS.items() if key == 'habitat']
preprocessing_layer = tf.keras.layers.DenseFeatures(categoricalColumns + numericColumns)
print(preprocessing_layer)
from tensorflow import keras
# Define the layers in the model.
model = tf.keras.models.Sequential([
preprocessing_layer,
tf.keras.layers.Dense(64, input_shape = (7,), activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(len(LABELS), activation=tf.nn.softmax)
])
# Define metrics we want to monitor
fp = tf.keras.metrics.FalsePositives()
fn = tf.keras.metrics.FalseNegatives()
tensorboard = tf.keras.callbacks.TensorBoard(log_dir = LOG_DIR)
checkpoint = tf.keras.callbacks.ModelCheckpoint(
join(LOG_DIR,'best_weights.hdf5'),
monitor='val_accuracy',
verbose=1,
save_best_only=True,
mode='max'
)
# Compile the model with the specified loss function.
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='categorical_crossentropy',
metrics=['accuracy', fp, fn])
# Fit the model to the training data.
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(
# need to repeat our training data to cover multiple epochs
x = trainData.repeat(),
epochs = 5,
steps_per_epoch = (SIZE*(FACTOR-1)/FACTOR)/BATCH,
validation_data = testData,
validation_steps = (SIZE/FACTOR)/BATCH,
callbacks = [checkpoint, tensorboard]
)
# Fit the model to initialize weights so we can restore from checkpoint
model.fit(
# need to repeat our training data to cover multiple epochs
x = trainData.repeat(),
epochs = 1,
steps_per_epoch = 10
)
model.save(join(LOG_DIR, 'DNN_64.h5'))
#bring in the architecture and best weights from Drive
model = tf.keras.models.load_model(join(LOG_DIR, 'DNN_64.h5'))
# load pre-trained weights from best performing epoch
model.load_weights(join(LOG_DIR, 'best_weights.hdf5'))
#lets see where were at
# evalMetrics = model.evaluate(x=testData, steps = (SIZE/FACTOR)/BATCH, verbose = 1)
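# A minimal evaluation sketch (mirrors the commented line above): testData is already
# batched, so run enough steps to cover the held-out split once.
evalMetrics = model.evaluate(x=testData, steps=int((SIZE / FACTOR) / BATCH), verbose=1)
print(dict(zip(model.metrics_names, evalMetrics)))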
###Output
_____no_output_____
###Markdown
Check model accuracy on the test setNow that we have a trained model, we can evaluate it using the test dataset, which was read and prepared in the same way as the training dataset. The test split is already batched, so we simply run enough evaluation steps to cover each test example exactly once. Prediction Now that we have a trained model, let's make some predictions on images from GEE. First, we need to export an image to make predictions on.
###Code
# Run the IW algorithm to generate an image with change metrics
import analyze, dictionaries
nlcd = ee.Image('USGS/NLCD/NLCD2016')
# give this aoi a name
testId = 'HughesMillCA'
# TODO: split up analyze functions so we don't need these
# grab the relevant dictionary of lda coefficients
dictionary = dictionaries.forest
aoi = ee.Geometry.Polygon(
[[[-120.83127262496578, 39.10457008576222],
[-120.83127262496578, 39.06952752960459],
[-120.76518299483882, 39.06952752960459],
[-120.76518299483882, 39.10457008576222]]], None, False);
doi = '2017-07-01'
landcover = 'forest'
output = analyze.analyze_iw(
ee.Feature(aoi, {'mode':landcover}),
doi, dictionary, 0, testId)
iwImg = output[4]
# check the iwoutput on map
map = folium.Map(location=[39.08, -120.80])
map.add_ee_layer(iwImg, {'bands':['cv_z'], 'min':0, 'max': 50}, 'iwout')
map
###Output
_____no_output_____
###Markdown
Export the imageryYou can also export imagery using TFRecord format. Specifically, export whatever imagery you want to be classified by the trained model into the output Cloud Storage bucket.
###Code
def doExport(image, out_image_base, directory, region):
"""
Export a GEE image as TFRecord. Block until complete.
Parameters:
image (ee.Image): image to be exported
out_image_base (str): output image base filename
directory (str): google cloud directory for image export
region (ee.Geometry): bounding area
"""
# Specify patch and file dimensions.
imageExportFormatOptions = {
'patchDimensions': [256, 256],
'maxFileSize': 104857600,
'compressed': True
}
task = ee.batch.Export.image.toCloudStorage(
image = image,
description = out_image_base,
fileNamePrefix = join(directory, out_image_base),
bucket = BUCKET,
scale = 10,
fileFormat = 'TFRecord',
region = region,
formatOptions = imageExportFormatOptions,
)
task.start()
# Block until the task completes.
print('Running image export to Cloud Storage...')
import time
while task.active():
time.sleep(30)
# Error condition
if task.status()['state'] != 'COMPLETED':
print('Error with image export.')
else:
print('Image export completed.')
# Start the task.
doExport(iwImg, testId, join(PROJECT, PREDICT_DIR), aoi)
###Output
_____no_output_____
###Markdown
Use the trained model to classify an image from Earth EngineNow it's time to classify the image that was exported from Earth Engine. If the exported image is large, it will be split into multiple TFRecord files in its destination folder. There will also be a JSON sidecar file called "the mixer" that describes the format and georeferencing of the image. Here we will find the image files and the mixer file, getting some info out of the mixer that will be useful during model inference. Use `gsutil` to locate the files of interest in the output Cloud Storage bucket. Check to make sure your image export task finished before running the following.
###Code
# Get a list of all the files in the output bucket.
def get_pred_files(imageBase, inDir):
"""
Retrieve TFRecord image files and json mixer exported from GEE
Parameters:
imageBase (str): base filename for images to return
inDir (str): directory containing image and mixer files
Returns:
tuple: list of image filenames, json
"""
filesList = !gsutil ls {inDir}
print(filesList)
# Get only the files generated by the image export.
exportFilesList = [s for s in filesList if imageBase in s]
# Get the list of image files and the JSON mixer file.
imageFilesList = []
jsonFile = None
for f in exportFilesList:
if f.endswith('.tfrecord.gz'):
imageFilesList.append(f)
elif f.endswith('.json'):
jsonFile = f
# Make sure the files are in the right order.
imageFilesList.sort()
# pprint(imageFilesList)
# print(jsonFile)
import json
# Load the contents of the mixer file to a JSON object.
jsonText = !gsutil cat {jsonFile}
# Get a single string w/ newlines from the IPython.utils.text.SList
mixer = json.loads(jsonText.nlstr)
print(mixer)
return imageFilesList, mixer
###Output
_____no_output_____
###Markdown
Read the image files into a datasetYou can feed the list of files (`imageFilesList`) directly to the `TFRecordDataset` constructor to make a combined dataset on which to perform inference. The input needs to be preprocessed differently than the training and testing. Mainly, this is because the pixels are written into records as patches, we need to read the patches in as one big tensor (one patch for each band), then flatten them into lots of little tensors.
###Code
import numpy as np
# Get relevant info from the JSON mixer file.
def make_pred_dataset(imageBase, inDir, features, habType):
"""
This needs to return a tuple <dictionary of prediction features, blank 'labels'>
"""
imageFiles, mixer = get_pred_files(imageBase, inDir)
patch_width = mixer['patchDimensions'][0]
patch_height = mixer['patchDimensions'][1]
patches = mixer['totalPatches']
patch_dimensions_flat = [patch_width * patch_height, 1]
# get the index of the habitat column
featList = features
hab = featList.index('habitat')
hab = featList.pop(hab)
print(hab)
# Note that the tensors are in the shape of a patch, one patch for each band.
imageColumns = [
tf.io.FixedLenFeature(shape=patch_dimensions_flat, dtype=tf.float32)
for k in featList
]
# Parsing dictionary.
imageFeaturesDict = dict(zip(featList, imageColumns))
# Note that you can make one dataset from many files by specifying a list.
imageDataset = tf.data.TFRecordDataset(imageFiles, compression_type='GZIP')
# Parsing function.
# Each element in the output is a dictionary of fixedlenfeatures of size (65536, 1)
# There will be as many elements as patches
def parse_image(example_proto):
parsed = tf.io.parse_single_example(example_proto, imageFeaturesDict)
# add habitat tensor to dictionary
parsed.update({'habitat':np.full(patch_dimensions_flat, habType)})
return parsed
# Parse the data into tensors, one long tensor per patch.
imageDataset = imageDataset.map(parse_image, num_parallel_calls=5)
# Break our long tensors into many little ones. Only necessary if we want to do calculations with the data(?)
# creates tensors for each feature of shape (1, )
imageDataset = imageDataset.flat_map(
# slice each tensor along first dimension
lambda x: tf.data.Dataset.from_tensor_slices(x)
)
# # Add additional features (NDVI).
# # imageDataset = imageDataset.map(
# # # Add NDVI to a feature that doesn't have a label.
# # lambda features: addNDVI(features, None)[0]
# # )
# Turn the dictionary in each record into a tuple with a dummy label.
# imageDataset = imageDataset.map(
# # The model expects a tuple of (dictionary, label).
# # lambda dataDict: (dataDict, )
# # add dimension with 'list' then transpose from (6, 1) to (1, 6)
# # this operation destroys the dictionary
# lambda dataDict: (tf.transpose(list(dataDict.values())), )
# )
# Turn each patch into a batch.
# This creates element tensors of shape (65536, 1, 6)
imageDataset = imageDataset.batch(patch_width * patch_height)
# imageDataset = imageDataset.batch(1)
return imageDataset, patches
###Output
_____no_output_____
###Markdown
Generate predictions for the image pixelsTo get predictions for each pixel, run the image dataset through the trained model using `model.predict()`. Print the first prediction to see that the output is a list of the four class probabilities for each pixel. Running all predictions might take a while.
###Code
FEATURE_COLUMNS.append('habitat')
FEATURE_COLUMNS
# Run prediction in batches, with as many steps as there are patches.
def make_predictions(imgBase, inDir, model, features, habType):
imageDataset, patches = make_pred_dataset(imgBase, inDir, features, habType)
predictions = model.predict(imageDataset, steps = patches, verbose = 1)
return predictions
# Note that the predictions come as a numpy array. Check the first one.
predImgDir = join('gs://', BUCKET, PROJECT, PREDICT_DIR)
predictions = make_predictions(testId, predImgDir, model, FEATURE_COLUMNS, 'forest')
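# Each row is one pixel's probability vector over LABELS; check the first one.
print(predictions[0])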
###Output
_____no_output_____
###Markdown
Write the predictions to a TFRecord fileNow that there's a list of class probabilities in `predictions`, it's time to write them back into a file, optionally including a class label which is simply the index of the maximum probability. We'll write directly from TensorFlow to a file in the output Cloud Storage bucket.Iterate over the list, compute class label and write the class and the probabilities in patches. Specifically, we need to write the pixels into the file as patches in the same order they came out. The records are written as serialized `tf.train.Example` protos. This might take a while.
###Code
outputImageFile = join('gs://', BUCKET, PROJECT, PREDICT_DIR, 'output', testId + '.TFRecord')
outputImageFile
PATCH_WIDTH = 256
PATCH_HEIGHT = 256
PATCHES = 2
# Don't run
# Instantiate the writer.
writer = tf.io.TFRecordWriter(outputImageFile)
# Every patch-worth of predictions we'll dump an example into the output
# file with a single feature that holds our predictions. Since our predictions
# are already in the order of the exported data, the patches we create here
# will also be in the right order.
patch = [[], [], [], [], []]
curPatch = 1
for prediction in predictions:
patch[0].append(tf.argmax(prediction, 0))
patch[1].append(prediction[0])
patch[2].append(prediction[1])
patch[3].append(prediction[2])
patch[4].append(prediction[3])
# Once we've seen a patches-worth of class_ids...
if (len(patch[0]) == PATCH_WIDTH * PATCH_HEIGHT):
print('Done with patch ' + str(curPatch) + ' of ' + str(PATCHES) + '...')
# Create an example
# TODO: use dict comprehension to make this generalizeable based on labels
example = tf.train.Example(
features=tf.train.Features(
feature={
'prediction': tf.train.Feature(
int64_list=tf.train.Int64List(
value=patch[0])),
'noneProb': tf.train.Feature(
float_list=tf.train.FloatList(
value=patch[1])),
'bareProb': tf.train.Feature(
float_list=tf.train.FloatList(
value=patch[2])),
'resProb': tf.train.Feature(
float_list=tf.train.FloatList(
value=patch[3])),
'solarProb': tf.train.Feature(
float_list=tf.train.FloatList(
value=patch[4]))
}
)
)
# Write the example to the file and clear our patch array so it's ready for
# another batch of class ids
writer.write(example.SerializeToString())
patch = [[], [], [], [], []]
curPatch += 1
writer.close()
###Output
_____no_output_____
###Markdown
Upload the classifications to an Earth Engine asset Verify the existence of the predictions fileAt this stage, there should be a predictions TFRecord file sitting in the output Cloud Storage bucket. Use the `gsutil` command to verify that the predictions image (and associated mixer JSON) exist and have non-zero size.
###Code
!gsutil ls -l {outputImageFile}
###Output
_____no_output_____
###Markdown
Upload the classified image to Earth EngineUpload the image to Earth Engine directly from the Cloud Storage bucket with the [`earthengine` command](https://developers.google.com/earth-engine/command_lineupload). Provide both the image TFRecord file and the JSON file as arguments to `earthengine upload`. (You can use the `nclinton` version for now.)
###Code
!gsutil ls gs://cvod-203614-mlengine/ACD_methods/data/predict/*.json
USER_NAME = 'defendersofwildlifeGIS'
outputAssetID = 'users/' + USER_NAME + '/' + testId
jsonFile = 'gs://cvod-203614-mlengine/ACD_methods/data/predict/HughesMillCA-mixer.json'
#@title Don't run
# Start the upload.
!earthengine upload image --asset_id={outputAssetID} {outputImageFile} {jsonFile}
###Output
_____no_output_____
###Markdown
Check the status of the asset ingestionYou can also use the Earth Engine API to check the status of your asset upload. It might take a while. The upload of the image is an asset ingestion task.
###Code
#@title Don't run
ee.batch.Task.list()
###Output
_____no_output_____
###Markdown
View the ingested assetDisplay the class probabilities as an RGB image, with colors corresponding to the probability of the bare, residential, and solar classes in a pixel. Also display the winning class using a categorical palette.
###Code
predictionsImage = ee.Image(outputAssetID)
predictionVis = {
  'bands': 'prediction',
  'min': 0,
  'max': 3,
  'palette': ['black', 'red', 'green', 'blue']
}
probabilityVis = {'bands': ['bareProb', 'resProb', 'solarProb']}
# Add the layers with the add_ee_layer helper defined in the Folium setup above.
map = folium.Map(location=[39.08, -120.80])
map.add_ee_layer(predictionsImage, predictionVis, 'prediction')
map.add_ee_layer(predictionsImage, probabilityVis, 'probability')
map.add_child(folium.LayerControl())
map
###Output
_____no_output_____
###Markdown
*DNN *
###Code
!pip install tensorflow-gpu
from google.colab import files
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
uploaded = files.upload()
import pandas as pd
df = pd.read_csv('20220222_153859_hlc.csv', skiprows=1)
df = df.rename(columns={"Latitude (Deg N)": 'lat', "Longitude (Deg W)": 'lng'}, errors="raise")
df.head(5)
###Output
_____no_output_____
###Markdown
###Code
# Delete the row if lat column has zero
df = df.loc[(df['lat'] != 0)]
###Output
_____no_output_____
###Markdown
###Code
df.head(5)
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1:]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
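# Quick illustration on a toy series (assumption: not part of the flight data): each
# element pairs `window_size` consecutive values with the value that follows them.
toy = windowed_dataset(np.arange(8, dtype=np.float32), window_size=3, batch_size=2, shuffle_buffer=1)
for window, target in toy.take(1):
    print(window.shape, target.shape)  # (2, 3) windows and (2, 1) targets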
window_size = 15
batch_size = 32
shuffle_buffer_size = 100
# training and validation
split_time = 3900
time_train = series_time[:split_time]
x_train_lat = series_lat[:split_time]
x_train_lng = series_lng[:split_time]
time_valid = series_time[split_time:]
x_valid_lat = series_lat[split_time:]
x_valid_lng = series_lng[split_time:]
print(f'total: {len(series_lat)}, x_train_lat:{len(x_train_lat)}, x_valid_lat:{len(x_valid_lat)}')
dataset = windowed_dataset(x_valid_lat,window_size, batch_size, shuffle_buffer_size)
[data for data in dataset]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(10, input_shape=[window_size], activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1)
])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(learning_rate=1e-6, momentum=0.9))
model.fit(dataset,epochs=100,verbose=0)
forecast_lat = []
for time in range(len(series_lat) - window_size):
forecast_lat.append(model.predict(series_lat[time:time + window_size][np.newaxis]))
forecast_lat = forecast_lat[split_time - window_size:]
results_lat = np.array(forecast_lat)[:, 0, 0]
print([x for x in forecast_lat])
print(results_lat)
fig, ax = plt.subplots(1,1, figsize=(10, 5), tight_layout=True, dpi=100)
xaxis = [y for y in range(len(results_lat))]
ax.plot(xaxis, results_lat)
ax.plot(xaxis, x_valid_lat)
start = 8500
end = 13700
series_time = df.Time[start:end]
series_lat = df.lat[start:end]
series_lng = df.lng[start:end]
coordinates = [(i,j) for i,j in zip(series_lat, series_lng)]
coordinates= np.array(coordinates, dtype =np.float64)
print(type(coordinates))
print(coordinates)
# series_time = series_time.to_numpy()
series_lat = series_lat.to_numpy()
# series_lng = series_lng.to_numpy()
split_time = 3900
x_train = coordinates[:split_time]
x_valid = coordinates[split_time:]
x_valid = tf.data.Dataset.from_tensor_slices(x_valid)
print(f'total: {len(coordinates)}, x_train:{len(x_train)}, x_valid:{len(x_valid)}')
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
[data for data in dataset]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(10, input_dim=2, input_shape=(window_size,2), activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(2)
])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(learning_rate=1e-6, momentum=0.9))
history = model.fit(dataset, epochs=100,verbose=0)
forecast_co = []
for time in range(len(coordinates) - window_size):
forecast_co.append(model.predict(coordinates[time:time + window_size][np.newaxis]))
forecast_co = forecast_co[split_time - window_size:]
# [y for y in forecast_co]
result = np.array(forecast_co)[:, 0, 0]
print(result)
df_result = pd.DataFrame(result, columns=('lat','lng'))
df_result.head(5)
df_coor = pd.DataFrame(coordinates[split_time:], columns=('lat', 'lng'))  # x_valid is now a tf.data.Dataset, so build from the raw array
fig, ax = plt.subplots(1,1, figsize=(10, 5), tight_layout=True, dpi=100)
plt.plot(df_result['lng'], df_result['lat'], color = 'r')
# plt.plot(df_coor['lng'], df_coor['lat'], color = 'b')
plt.show()
###Output
_____no_output_____
###Markdown
Predicting Protein Secondary Structures with Convolutional Neural Networks (CNNs)This notebook contains the data engineering, model creation and training of two CNNs. The first CNN takes protein sequences (as a numeric vector) as input to predict the secondary structure of a single amino acid, based on its environment. The network classifies the amino acid as either helix, beta-sheet or loop. The trained network is then used to classify every amino acid of every sequence, which will serve as input for the second network. The second network takes the output from the first network and reclassifies the results, now with added (predicted) secondary structure information from the neighboring residues.
###Code
# importing necessary modules and libraries
import pandas as pd
import numpy as np
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
The Training Data 1. First download the most recent collection of secondary structures based on the protein database from the Kabsch and Sander translation website. This is a terminal command which requires wget and time-stamps the file you download from the site: wget https://cdn.rcsb.org/etl/kabschSander/ss.txt.gz -O ${DATE_STAMP}-ss.txt.gz 2. The Python file generate_ss_files contains a generate_files function which takes this file and a directory name and puts each secondary structure into its own file. This is so we can avoid having to do random-access lookups in the ss file. All ss files are put into the directory dir. 3. The three generate_training files will create a training file as well as a file stating which files the training data came from and at which location in the sequence. (Comments are still lacking though.) Some sequences in the data set contain strange amino acids, which I removed.
###Code
# Function to load the secondary structure files from the corresponding directory
ss_dir = "ss_dir"
def load_proteins(dir):
'''
go to secondary structure directory and load sequence and structure information into python.
arguments:
dir: directory
returns:
names, sequences, ss_structures
'''
names = []
sequences = []
ss_structures = []
for file in os.listdir(dir):
with open(os.path.join(dir, file), 'r') as f:
lines = f.readlines()
name = file.split('.')[0]
sequence = lines[0].replace('\n','')
ss_structure = lines[1].replace('\n','')
names.append(name)
sequences.append(sequence.replace(',',''))
ss_structures.append(ss_structure.replace(',',''))
return names, sequences, ss_structures
names, sequences, ss_structures = load_proteins(ss_dir)
# Defining the Dataframe which contains the name of the protein, the sequence and secondary structure information.
d = {"name":names,"sequence":sequences, "ss_structure":ss_structures}
raw_data = pd.DataFrame(d)
raw_data
raw_data.to_pickle('data/raw_data.pkl')
cleaned_data = raw_data # inserted from cell below for test
strange = ["B","U","O","X"," '","'","Z"]
strange_rows = []
for row in cleaned_data.iterrows():
#print(row[1][1])
for x in strange:
if x in row[1][1]: # row[1][1] == sequence
strange_rows.append(row[0])
print(f"dropping {len(strange_rows)} strange rows")
# cleaned data
cleaned_data = cleaned_data.drop(strange_rows)
cleaned_data.to_pickle('data/cleaned_data.pkl')
# transform amino acid data to numerical values
# used for changing amino acids to numerical values, so it can be used as input for NN
aas = {"A":0, "C":1, "D":2, "E":3, "F":4, "G":5, "H":6, "I":7, "K":8, "L":9, "M":10,
"N":11, "P":12, "Q":13, "R":14, "S":15, "T":16, "V":17, "W":18, "X":19, "Y":20}
numerical_sequences = []
for protein in cleaned_data.iterrows():
sequence = protein[1][1]
array=np.zeros((len(sequence),21))
for i,aa in enumerate(sequence):
array[i,aas[aa]] = 1
numerical_sequences.append(array)
print(len(numerical_sequences))
cat_labels = {'B':0, 'E':1, 'G':2, 'H':3, 'I':4, 'S':5, 'T':6, '-':7}
# the labels can be simplified to sheets, helices and loops
# sheet - B,E     -> 0
# helix - G,H,I   -> 1
# loop  - S,T,'-' -> 2
numerical_labels = []
for protein in cleaned_data.iterrows():
label = protein[1][2]
label=label.replace('B',"0").replace('E',"0").replace('G',"1").replace('H',"1").replace('I',"1").replace('S',"2").replace('T',"2").replace('-',"2")
numerical_labels.append(label)
print(len(numerical_labels))
# creating numerical data dataframe
cleaned_names = cleaned_data.name.to_list()
numerical_data = pd.DataFrame({
"name":cleaned_names,
"numeric_sequence":numerical_sequences,
"numeric_ss":numerical_labels
})
numerical_data
numerical_data.to_pickle('data/numerical_data.pkl')
###Output
_____no_output_____
###Markdown
Numeric Sequences The sequences were stored as a vector with the dimensions (sequence_length x 21). The rows of the vector correspond to the index of an amino acid in the sequence and the columns to a specific amino acid.$ \begin{pmatrix}0 & \dots & 0 & 1 \\\vdots & \ddots & & 0 \\\vdots & & \ddots & 1 \\0 & \dots & 1 & 0\end{pmatrix} $If the first amino acid of a sequence is an alanine, then the entry at index (0,0) will be 1 and the rest of the row will be zero. There are 21 columns for the 20 amino acids and 1 pseudo amino acid, which I use for padding when creating the data points From Numeric Sequences to Data PointsTo train the model I picked random amino acids from the sequences and added an environment of a fixed size. The randomly picked amino acid is classified and the environment is just used as additional information. If the environment of the picked amino acid exceeds the sequence boundaries, pseudo amino acids are added to generate a vector of the desired length.
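For example, a three-residue peptide "ACD" (indices 0, 1 and 2 in the aas dictionary defined above) becomes a 3 x 21 matrix with exactly one 1 per row; a small illustrative check of the encoding:
###Code
# Illustrative example of the one-hot encoding described above (reuses the `aas` mapping from earlier).
demo_seq = "ACD"
demo_array = np.zeros((len(demo_seq), 21))
for i, aa in enumerate(demo_seq):
    demo_array[i, aas[aa]] = 1
print(demo_array[:, :5])   # first five columns: the 1s sit at columns 0, 1 and 2
###Output
_____no_output_____
###Markdown
The helpers below build exactly such windows around a randomly chosen centre residue and pad them where necessary.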
###Code
environment = 10
def create_in_data(center,environment,full_seq_vector):
"""
Function to create datapoints for single numerical sequences.
params:
center (type int) :: index of amino acid, which will be classified
        environment (type int) :: the size of the environment - left & right border - considered for classification.
full_seq_vector (type np.array) :: the full sequence considered for classification
returns:
returns output vector as np.array (environment*2+1 x 20)
"""
center_stack = full_seq_vector[center]
    # determine if upstream padding is required
usp = False
if center - environment < 0:
usp = True
usp_length = environment - center
    # determine if downstream padding is required
seq_length = full_seq_vector.shape[0]
dsp = False
if center + environment >= seq_length:
dsp = True
dsp_length = center + environment - seq_length + 1
# create upstream padding if usp
if usp:
us_padding = np.zeros((usp_length,21))
for i in range(usp_length):
us_padding[i,19] = 1
    # create downstream padding if dsp
if dsp:
ds_padding = np.zeros((dsp_length,21))
for i in range(dsp_length):
ds_padding[i,19] = 1
# now construct the out_vector
if usp and dsp:
middle = full_seq_vector[0:seq_length]
out_vector = np.vstack([us_padding,middle,ds_padding])
return out_vector
if usp: # out has right dimensions
data = full_seq_vector[0:center+environment+1]
out_vector = np.vstack([us_padding,data])
return out_vector
if dsp: # out has right dimensions
start = center-environment
data = full_seq_vector[start:seq_length+1]
out_vector = np.vstack([data,ds_padding])
return out_vector
start = center - environment
end = center + environment + 1
out_vector = full_seq_vector[start:end]
return out_vector
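# Quick sanity check of the padding logic (illustrative): for a toy "sequence" of length 5,
# every window should come back with 2*environment + 1 = 21 rows and 21 columns.
_toy_seq = np.zeros((5, 21))
_toy_seq[np.arange(5), np.arange(5)] = 1     # arbitrary one-hot content
print(create_in_data(center=0, environment=environment, full_seq_vector=_toy_seq).shape)  # (21, 21)
print(create_in_data(center=4, environment=environment, full_seq_vector=_toy_seq).shape)  # (21, 21)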
def get_dataset(environment, df, name_col="name", input_col="numeric_sequence", label_col="numeric_ss"):
"""
creates input vector and label for each sequence in df DataFrame.
returns a dataframe with the columns: name, input_vector, label.
params:
environment (type int) :: size of environment considered for classification
df (type pd.DataFrame) :: dataframe consisting of numerical data - see numerical data
name_col (type string) :: name of column storing the names
        input_col (type string) :: name of column storing the numeric sequences
label_col (type string):: name of column storing the sequence labels
returns:
dataframe with datapoints - ready for training / testing
"""
names = df[name_col].to_list()
input_values = df[input_col].to_list()
label_values = df[label_col].to_list()
xs = []
ys =[]
for i in range(len(names)):
center = np.random.randint(0,len(input_values[i]))
x = create_in_data(center=center, environment=environment, full_seq_vector=input_values[i])
y = np.int8(label_values[i][center])
xs.append(x)
ys.append(y)
out_df = pd.DataFrame({"name":names, "x":xs, "y":ys})
return(out_df)
# divide data into training and testing sets
numerical_data = numerical_data.sample(frac=1).reset_index(drop=True)  # shuffle before splitting
d_points = 333400
training_set = numerical_data.iloc[:d_points]
testing_set = numerical_data.iloc[d_points:]
#
def make_train_and_test(ntrain,train_set,test_set,environment=environment):
'''
    A.) Creates n data points per sequence in train_set - given by the variable ntrain.
The data points are collected in a single training_data DataFrame and returned.
B.) Create 1 data point per sequence in test_set and return them as testing_data DataFrame.
params:
ntrain (type int) :: number of data points per sequence in train_set
train_set (type pd.DataFrame) :: training data in form of pd.DataFrame
test_set (type pd.DataFrame) :: testing data in form of pd.DataFrame
environment (type int) :: size of the environment -> needed for get_dataset()
returns:
training_data (type pd.DataFrame)
testing_data (type pd.DataFrame)
'''
training_sets =[]
for n in range(ntrain):
tr_set = get_dataset(environment=environment,df=train_set)
training_sets.append(tr_set)
training_data = pd.concat(training_sets)
    testing_data = get_dataset(environment=environment, df=test_set)
return training_data, testing_data
# make training and testing data
training_data, testing_data = make_train_and_test(ntrain=1,train_set=training_set, test_set=testing_set)
# transforming training and testing data to numpy arrays
train_sequences = np.array(training_data.x.to_list())
train_labels = np.array(training_data.y.to_list())
test_sequences = np.array(testing_data.x.to_list())
test_labels = np.array(testing_data.y.to_list())
# reshape
def reshape(array):
old_array_shape = array.shape
new_array = np.zeros(old_array_shape)
new_array = new_array.reshape(new_array.shape + (1,))
for i in range(len(array)):
new_array[i] = array[i].reshape(array[i].shape + (1,))
return new_array
train_sequences = reshape(train_sequences)
train_labels = reshape(train_labels)
test_sequences = reshape(test_sequences)
test_labels = reshape(test_labels)
train_sequences.shape
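# After reshaping, each data point is a (2*environment + 1) x 21 one-hot "image" with a single
# channel, i.e. train_sequences should have shape (n_train, 21, 21, 1) for environment = 10,
# which is the layout the Conv2D layers below expect.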
# Create checkpoints during training
checkpoint_path = "DNN/lvl1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
monitor='val_loss',
save_best_only=True,
save_weights_only=False,
mode='auto',
save_freq='epoch',
verbose=1)
def create_deep_model():
# Building the Convolutional Neural Network
input_size=(environment*2+1,21,1)
model = models.Sequential()
model.add(layers.Conv2D(64, (3,3), activation='relu', input_shape=input_size, padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(64, (3,3), activation='relu', input_shape=input_size, padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(64, (3,3), activation='relu', input_shape=input_size, padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(64, (3,3), activation='relu', input_shape=input_size, padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(64, (3,3), activation='relu', input_shape=input_size, padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(64, (3,3), activation='relu', input_shape=input_size, padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(64, (3,3), activation='relu', input_shape=input_size))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(64, (3,3), activation='relu', input_shape=input_size))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64, (3,3), activation='relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(64, (3,3), activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.Flatten())
    model.add(layers.Dropout(0.2))
model.add(layers.Dense(1600, activation='relu'))
model.add(layers.Dense(1600, activation='relu'))
model.add(layers.Dense(3, activation='softmax'))
# Compile the model
model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),  # the output layer already applies softmax
metrics=['accuracy'])
return model
DNN = create_deep_model()
DNN.summary()
# Training the model
history = DNN.fit(train_sequences, train_labels, epochs=20,
validation_data=(test_sequences, test_labels),
batch_size=512,
callbacks= [cp_callback])
# typical batch sizes 64, 128, 256 or 512.
# list all data in history
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.savefig('DNN.png')  # save before show(), otherwise an empty figure is written
plt.show()
###Output
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
###Markdown
Read Input
###Code
X_0 = np.load('X0.npy')
X_1 = np.load('X1.npy')
X = np.append(X_0, X_1, axis=0)
y_1 = np.ones((X_1.shape[0]))
y_0 = np.zeros((X_0.shape[0]))
y = np.append(y_0, y_1, axis=0)
y = y.reshape(y.shape[0],1)
X_one_hot = (np.arange(X.max()) == X[...,None]-1).astype(int)
X_flat = X_one_hot.reshape(X_one_hot.shape[0], X_one_hot.shape[1] * X_one_hot.shape[2])  # flattened variant (unused below)
X_ = X_one_hot.reshape(X_one_hot.shape[0], X_one_hot.shape[1], X_one_hot.shape[2], 1)    # add a channel axis for Conv2D
###Output
_____no_output_____
###Markdown
Split Train, Test, Validation
###Code
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X_, y, test_size=0.15, shuffle=True)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.15, shuffle=True)
###Output
_____no_output_____
###Markdown
MCC metric
###Code
from keras import backend as K  # backend ops used below (assumed not imported earlier in this notebook)

def MCC(y_true, y_pred):
'''Calculates the Matthews correlation coefficient measure for quality
of binary classification problems.
'''
y_pred_pos = K.round(K.clip(y_pred, 0, 1))
y_pred_neg = 1 - y_pred_pos
y_pos = K.round(K.clip(y_true, 0, 1))
y_neg = 1 - y_pos
tp = K.sum(y_pos * y_pred_pos)
tn = K.sum(y_neg * y_pred_neg)
fp = K.sum(y_neg * y_pred_pos)
fn = K.sum(y_pos * y_pred_neg)
numerator = (tp * tn - fp * fn)
denominator = K.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
return numerator / (denominator + K.epsilon())
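# Quick check (illustrative): perfect agreement should give an MCC of roughly 1
# (the epsilon term in the denominator keeps it marginally below 1).
_y_demo = K.constant([[1.], [0.], [1.], [0.]])
print(K.eval(MCC(_y_demo, _y_demo)))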
###Output
_____no_output_____
###Markdown
CONV2D
###Code
''' Create the model : a network with 3 convolutional layers and a dense layer '''
from keras.layers.advanced_activations import LeakyReLU
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.layers import Input, Convolution2D, MaxPooling2D, Flatten
from keras.models import Model
from keras.callbacks import ModelCheckpoint, EarlyStopping
from sklearn.utils import class_weight
num_classes =1
batch_size = 2014
num_epochs = 150
model = Sequential()
model.add(Conv2D(32, kernel_size=(4,4),activation='relu',input_shape=(299,4,1),padding='same'))
model.add(LeakyReLU(alpha=0.1))
model.add(MaxPooling2D((1, 2),padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(64, (4, 4), activation='relu',padding='same'))
model.add(LeakyReLU(alpha=0.1))
model.add(MaxPooling2D(pool_size=(1, 2),padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (4, 4), activation='relu',padding='same'))
model.add(LeakyReLU(alpha=0.1))
model.add(MaxPooling2D(pool_size=(1, 2),padding='same'))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.2))
model.add(LeakyReLU(alpha=0.1))
model.add(Dense(num_classes, activation='sigmoid'))
checkpointer = ModelCheckpoint(filepath='weights.hdf5', monitor='val_loss', verbose=0, save_best_only=True, save_weights_only=False, mode='min', period=1)
early = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=0, mode='auto')
class_weights = class_weight.compute_class_weight(class_weight='balanced', classes=np.unique(y_train), y=y_train.reshape(y_train.shape[0]))
class_weights = dict(enumerate(class_weights))  # Keras expects a class-index -> weight mapping
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy',MCC])
history = model.fit(X_train, y_train,
epochs=num_epochs,
batch_size=batch_size,
                    validation_data=(X_val, y_val), callbacks=[checkpointer, early], class_weight=class_weights)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
model.load_weights('weights.hdf5')
model.compile(loss='binary_crossentropy',
              optimizer='adam', metrics=['accuracy', MCC])
score = model.evaluate(X_test, y_test, verbose=0)
score
np.save('score_CNN_TSS_Start_', score)
np.save('h_DA_MCC_train_', np.asarray(history.history['MCC']))
np.save('h_DA_MCC_val_', np.asarray(history.history['val_MCC']))
y_pred = model.predict(X_test)
np.save('y_pred_CNN_TSS_Start_', y_pred)
np.save('y_test_CNN_TSS_Start_', y_test)
from sklearn.metrics import roc_auc_score, average_precision_score
roc_auc_score(y_test, y_pred)
average_precision_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
One Hot Encoding for Categorical labels
###Code
from keras.utils import to_categorical
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
# Y1 = Y
Y1 = Y_sample
enc = OneHotEncoder(handle_unknown='ignore')
enc.fit(Y1)
Y1 = enc.transform(Y_sample).toarray()
print("Shape of Y post OneHot encoding", Y1.shape)
###Output
_____no_output_____
###Markdown
Running for 500 epochs
###Code
from keras.models import Sequential
from keras.layers import Dense,Dropout
from keras.optimizers import RMSprop
model = Sequential()
model.add(Dense(50, input_dim=X.shape[1],activation='relu'))
model.add(Dense(25, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(Y1.shape[1], activation='sigmoid'))
# Compile model
opt = RMSprop(lr=0.00001,decay=1e-6)
#model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['categorical_accuracy'])
model.summary()
# Fit the model
model.fit(X_sample, Y1, epochs=500, batch_size=60, shuffle=True)
# calculate predictions
_, accuracy = model.evaluate(X_sample,Y1)
print('Accuracy: %.2f' % (accuracy*100))
###Output
Using TensorFlow backend.
###Markdown
Handwritten Digit Classification
###Code
# Set basic parameters
Nin = 784
Nh_l = [100, 50]
number_of_class = 10
Nout = number_of_class
from tensorflow.keras import layers, models
# Implement the classification DNN model
class DNN(models.Sequential):
def __init__(self, Nin, Nh_l, Nout):
super().__init__()
self.add(layers.Dense(Nh_l[0], activation='relu', input_shape=(Nin,), name='Hidden-1'))
self.add(layers.Dense(Nh_l[1], activation='relu', name='Hidden-2'))
self.add(layers.Dense(Nout, activation='softmax'))
self.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
import numpy as np
from tensorflow.keras import datasets, utils
# Prepare the data
(X_train, y_train), (X_test, y_test) = datasets.mnist.load_data()
y_train = utils.to_categorical(y_train)
y_test = utils.to_categorical(y_test)
L, W, H = X_train.shape
X_train = X_train.reshape(-1, W * H)
X_test = X_test.reshape(-1, W * H)
X_train = X_train / 255.0
X_test = X_test / 255.0
# Train the classification DNN and evaluate its performance
model = DNN(Nin, Nh_l, Nout)
history = model.fit(X_train, y_train, epochs=10, batch_size=100, validation_split=0.2)
performance_test = model.evaluate(X_test, y_test, batch_size=100)
print('Test Loss and Accuracy ->', performance_test)
###Output
Epoch 1/10
480/480 [==============================] - 5s 3ms/step - loss: 0.3595 - accuracy: 0.8996 - val_loss: 0.1747 - val_accuracy: 0.9500
Epoch 2/10
480/480 [==============================] - 1s 3ms/step - loss: 0.1480 - accuracy: 0.9564 - val_loss: 0.1319 - val_accuracy: 0.9625
Epoch 3/10
480/480 [==============================] - 1s 3ms/step - loss: 0.1057 - accuracy: 0.9686 - val_loss: 0.1115 - val_accuracy: 0.9662
Epoch 4/10
480/480 [==============================] - 1s 3ms/step - loss: 0.0815 - accuracy: 0.9758 - val_loss: 0.1066 - val_accuracy: 0.9701
Epoch 5/10
480/480 [==============================] - 1s 3ms/step - loss: 0.0653 - accuracy: 0.9809 - val_loss: 0.0979 - val_accuracy: 0.9694
Epoch 6/10
480/480 [==============================] - 1s 3ms/step - loss: 0.0530 - accuracy: 0.9840 - val_loss: 0.0935 - val_accuracy: 0.9723
Epoch 7/10
480/480 [==============================] - 1s 3ms/step - loss: 0.0437 - accuracy: 0.9869 - val_loss: 0.0949 - val_accuracy: 0.9722
Epoch 8/10
480/480 [==============================] - 1s 3ms/step - loss: 0.0351 - accuracy: 0.9896 - val_loss: 0.1016 - val_accuracy: 0.9711
Epoch 9/10
480/480 [==============================] - 1s 3ms/step - loss: 0.0296 - accuracy: 0.9913 - val_loss: 0.0966 - val_accuracy: 0.9726
Epoch 10/10
480/480 [==============================] - 1s 3ms/step - loss: 0.0243 - accuracy: 0.9926 - val_loss: 0.0968 - val_accuracy: 0.9746
100/100 [==============================] - 0s 2ms/step - loss: 0.0885 - accuracy: 0.9752
Test Loss and Accuracy -> [0.08849368244409561, 0.9751999974250793]
###Markdown
Color Image Classification
###Code
# Load the data
def Data_func():
(X_train, y_train), (X_test, y_test) = datasets.cifar10.load_data()
Y_train = utils.to_categorical(y_train)
Y_test = utils.to_categorical(y_test)
L, W, H, C = X_train.shape
X_train = X_train.reshape(-1, W * H * C)
X_test = X_test.reshape(-1, W * H * C)
X_train = X_train / 255.0
X_test = X_test / 255.0
return (X_train, Y_train), (X_test, Y_test)
# DNN modeling
class DNN(models.Sequential):
def __init__(self, Nin, Nh_l, Pd_l, Nout):
super().__init__()
self.add(layers.Dense(Nh_l[0], activation='relu', input_shape=(Nin,), name='Hidden-1'))
self.add(layers.Dropout(Pd_l[0]))
        self.add(layers.Dense(Nh_l[1], activation='relu', name='Hidden-2'))
self.add(layers.Dropout(Pd_l[1]))
self.add(layers.Dense(Nout, activation='softmax'))
self.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
%run 'drive/MyDrive/Colab Notebooks/Keras/skeras.ipynb'
import matplotlib.pyplot as plt
Nh_l = [100, 50]
Pd_l = [0.0, 0.0]
number_of_class = 10
Nout = number_of_class
(X_train, Y_train), (X_test, Y_test) = Data_func()
model = DNN(X_train.shape[1], Nh_l, Pd_l, Nout)
history = model.fit(X_train, Y_train, epochs=100, batch_size=100, validation_split=0.2)
performance_test = model.evaluate(X_test, Y_test, batch_size=100)
print('Test Loss and Accuracy ->', performance_test)
plot_acc(history)
plt.show()
plot_loss(history)
plt.show()
Nh_l = [100, 50]
Pd_l = [0.05, 0.5]
number_of_class = 10
Nout = number_of_class
(X_train, Y_train), (X_test, Y_test) = Data_func()
model = DNN(X_train.shape[1], Nh_l, Pd_l, Nout)
history = model.fit(X_train, Y_train, epochs=100, batch_size=100, validation_split=0.2)
performance_test = model.evaluate(X_test, Y_test, batch_size=100)
print('Test Loss and Accuracy ->', performance_test)
plot_acc(history)
plt.show()
plot_loss(history)
plt.show()
###Output
_____no_output_____
###Markdown
DNN
###Code
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import tensorflow as tf
# settings
LEARNING_RATE = 1e-4
# set to 20000 on local environment to get 0.99 accuracy
TRAINING_ITERATIONS = 2500
DROPOUT = 0.5
BATCH_SIZE = 50
# set to 0 to train on all available data
VALIDATION_SIZE = 2000
# image number to output
IMAGE_TO_DISPLAY = 10
###Output
_____no_output_____
###Markdown
load the data
###Code
# read training data from CSV file
data = pd.read_csv('dataset/train.csv')
print('data({0[0]},{0[1]})'.format(data.shape))
print (data.head())
###Output
data(42000,785)
label pixel0 pixel1 pixel2 pixel3 pixel4 pixel5 pixel6 pixel7 \
0 1 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0
2 1 0 0 0 0 0 0 0 0
3 4 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0 0
pixel8 ... pixel774 pixel775 pixel776 pixel777 pixel778 \
0 0 ... 0 0 0 0 0
1 0 ... 0 0 0 0 0
2 0 ... 0 0 0 0 0
3 0 ... 0 0 0 0 0
4 0 ... 0 0 0 0 0
pixel779 pixel780 pixel781 pixel782 pixel783
0 0 0 0 0 0
1 0 0 0 0 0
2 0 0 0 0 0
3 0 0 0 0 0
4 0 0 0 0 0
[5 rows x 785 columns]
###Markdown
normalize the data
###Code
images = data.iloc[:,1:].values
images = images.astype(np.float64)
# convert from [0:255] => [0.0:1.0]
images = np.multiply(images, 1.0 / 255.0)
print('images({0[0]},{0[1]})'.format(images.shape))
###Output
images(42000,784)
###Markdown
images reshape
###Code
image_size = images.shape[1]
print ('image_size => {0}'.format(image_size))
# in this case all images are square
image_width = image_height = np.ceil(np.sqrt(image_size)).astype(np.uint8)
print ('image_width => {0}\nimage_height => {1}'.format(image_width,image_height))
###Output
image_size => 784
image_width => 28
image_height => 28
###Markdown
display one image
###Code
# display image
def display(img):
# (784) => (28,28)
one_image = img.reshape(image_width,image_height)
plt.axis('off')
plt.imshow(one_image, cmap=cm.binary)
# output image
display(images[IMAGE_TO_DISPLAY])
###Output
_____no_output_____
###Markdown
output the displayed image's label
###Code
labels_flat = data.iloc[:,0].values.ravel()
print('labels_flat({0})'.format(len(labels_flat)))
print ('labels_flat[{0}] => {1}'.format(IMAGE_TO_DISPLAY,labels_flat[IMAGE_TO_DISPLAY]))
###Output
labels_flat(42000)
labels_flat[10] => 8
###Markdown
count the label
###Code
labels_count = np.unique(labels_flat).shape[0]
print('labels_count => {0}'.format(labels_count))
###Output
labels_count => 10
###Markdown
encode the label to one-hot vectors
###Code
# convert class labels from scalars to one-hot vectors
# 0 => [1 0 0 0 0 0 0 0 0 0]
# 1 => [0 1 0 0 0 0 0 0 0 0]
# ...
# 9 => [0 0 0 0 0 0 0 0 0 1]
def dense_to_one_hot(labels_dense, num_classes):
num_labels = labels_dense.shape[0]
index_offset = np.arange(num_labels) * num_classes
labels_one_hot = np.zeros((num_labels, num_classes))
labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
return labels_one_hot
labels = dense_to_one_hot(labels_flat, labels_count)
labels = labels.astype(np.uint8)
print('labels({0[0]},{0[1]})'.format(labels.shape))
print ('labels[{0}] => {1}'.format(IMAGE_TO_DISPLAY,labels[IMAGE_TO_DISPLAY]))
###Output
labels(42000,10)
labels[10] => [0 0 0 0 0 0 0 0 1 0]
###Markdown
split the data into training and validation
###Code
# split data into training & validation
validation_images = images[:VALIDATION_SIZE]
validation_labels = labels[:VALIDATION_SIZE]
train_images = images[VALIDATION_SIZE:]
train_labels = labels[VALIDATION_SIZE:]
print('train_images({0[0]},{0[1]})'.format(train_images.shape))
print('validation_images({0[0]},{0[1]})'.format(validation_images.shape))
###Output
train_images(40000,784)
validation_images(2000,784)
###Markdown
initialize the weight and the bias
###Code
# weight initialization
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
###Output
_____no_output_____
###Markdown
initialize the convolution and pooling layers
###Code
# convolution
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# input & output of NN
# images
x = tf.placeholder('float', shape=[None, image_size])
# labels
y_ = tf.placeholder('float', shape=[None, labels_count])
###Output
_____no_output_____
###Markdown
first convolutional layer
###Code
# first convolutional layer
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
# (40000,784) => (40000,28,28,1)
image = tf.reshape(x, [-1,image_width , image_height,1])
#print (image.get_shape()) # =>(40000,28,28,1)
h_conv1 = tf.nn.relu(conv2d(image, W_conv1) + b_conv1)
#print (h_conv1.get_shape()) # => (40000, 28, 28, 32)
h_pool1 = max_pool_2x2(h_conv1)
#print (h_pool1.get_shape()) # => (40000, 14, 14, 32)
###Output
_____no_output_____
###Markdown
second convolutional layer
###Code
# second convolutional layer
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
#print (h_conv2.get_shape()) # => (40000, 14,14, 64)
h_pool2 = max_pool_2x2(h_conv2)
#print (h_pool2.get_shape()) # => (40000, 7, 7, 64)
###Output
_____no_output_____
###Markdown
densely connected layer
###Code
# densely connected layer
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
# (40000, 7, 7, 64) => (40000, 3136)
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
#print (h_fc1.get_shape()) # => (40000, 1024)
###Output
_____no_output_____
###Markdown
dropout
###Code
# dropout
keep_prob = tf.placeholder('float')
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
###Output
_____no_output_____
###Markdown
readout layer
###Code
# readout layer for deep net
W_fc2 = weight_variable([1024, labels_count])
b_fc2 = bias_variable([labels_count])
y = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
#print (y.get_shape()) # => (40000, 10)
# cost function
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
# optimisation function
train_step = tf.train.AdamOptimizer(LEARNING_RATE).minimize(cross_entropy)
# evaluation
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
###Output
_____no_output_____
###Markdown
define the prediction function
###Code
# prediction function
#[0.1, 0.9, 0.2, 0.1, 0.1 0.3, 0.5, 0.1, 0.2, 0.3] => 1
predict = tf.argmax(y,1)
###Output
_____no_output_____
###Markdown
evaluate the model
###Code
epochs_completed = 0
index_in_epoch = 0
num_examples = train_images.shape[0]
# serve data by batches
def next_batch(batch_size):
global train_images
global train_labels
global index_in_epoch
global epochs_completed
start = index_in_epoch
index_in_epoch += batch_size
    # when all training data have been used, the data are reordered randomly
if index_in_epoch > num_examples:
# finished epoch
epochs_completed += 1
# shuffle the data
perm = np.arange(num_examples)
np.random.shuffle(perm)
train_images = train_images[perm]
train_labels = train_labels[perm]
# start next epoch
start = 0
index_in_epoch = batch_size
assert batch_size <= num_examples
end = index_in_epoch
return train_images[start:end], train_labels[start:end]
# start TensorFlow session
init = tf.global_variables_initializer()
sess = tf.InteractiveSession()
sess.run(init)
###Output
_____no_output_____
###Markdown
data evaluation display
###Code
# visualisation variables
train_accuracies = []
validation_accuracies = []
x_range = []
display_step=1
for i in range(TRAINING_ITERATIONS):
#get new batch
batch_xs, batch_ys = next_batch(BATCH_SIZE)
# check progress on every 1st,2nd,...,10th,20th,...,100th... step
if i%display_step == 0 or (i+1) == TRAINING_ITERATIONS:
train_accuracy = accuracy.eval(feed_dict={x:batch_xs,
y_: batch_ys,
keep_prob: 1.0})
if(VALIDATION_SIZE):
validation_accuracy = accuracy.eval(feed_dict={ x: validation_images[0:BATCH_SIZE],
y_: validation_labels[0:BATCH_SIZE],
keep_prob: 1.0})
print('training_accuracy / validation_accuracy => %.2f / %.2f for step %d'%(train_accuracy, validation_accuracy, i))
validation_accuracies.append(validation_accuracy)
else:
print('training_accuracy => %.4f for step %d'%(train_accuracy, i))
train_accuracies.append(train_accuracy)
x_range.append(i)
# increase display_step
if i%(display_step*10) == 0 and i:
display_step *= 10
# train on batch
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys, keep_prob: DROPOUT})
###Output
training_accuracy / validation_accuracy => 0.22 / 0.24 for step 0
training_accuracy / validation_accuracy => 0.30 / 0.28 for step 1
training_accuracy / validation_accuracy => 0.18 / 0.20 for step 2
training_accuracy / validation_accuracy => 0.10 / 0.14 for step 3
training_accuracy / validation_accuracy => 0.26 / 0.16 for step 4
training_accuracy / validation_accuracy => 0.24 / 0.24 for step 5
training_accuracy / validation_accuracy => 0.24 / 0.34 for step 6
training_accuracy / validation_accuracy => 0.32 / 0.40 for step 7
training_accuracy / validation_accuracy => 0.24 / 0.38 for step 8
training_accuracy / validation_accuracy => 0.24 / 0.40 for step 9
training_accuracy / validation_accuracy => 0.36 / 0.42 for step 10
training_accuracy / validation_accuracy => 0.48 / 0.52 for step 20
training_accuracy / validation_accuracy => 0.52 / 0.68 for step 30
training_accuracy / validation_accuracy => 0.70 / 0.80 for step 40
training_accuracy / validation_accuracy => 0.70 / 0.84 for step 50
training_accuracy / validation_accuracy => 0.82 / 0.90 for step 60
training_accuracy / validation_accuracy => 0.82 / 0.86 for step 70
training_accuracy / validation_accuracy => 0.90 / 0.90 for step 80
training_accuracy / validation_accuracy => 0.88 / 0.90 for step 90
training_accuracy / validation_accuracy => 0.88 / 0.92 for step 100
training_accuracy / validation_accuracy => 0.98 / 0.92 for step 200
training_accuracy / validation_accuracy => 0.94 / 0.92 for step 300
training_accuracy / validation_accuracy => 0.92 / 0.94 for step 400
training_accuracy / validation_accuracy => 1.00 / 0.96 for step 500
training_accuracy / validation_accuracy => 0.94 / 0.94 for step 600
training_accuracy / validation_accuracy => 0.92 / 0.92 for step 700
training_accuracy / validation_accuracy => 0.94 / 0.96 for step 800
training_accuracy / validation_accuracy => 0.90 / 0.96 for step 900
training_accuracy / validation_accuracy => 0.98 / 0.96 for step 1000
training_accuracy / validation_accuracy => 0.98 / 0.96 for step 2000
training_accuracy / validation_accuracy => 1.00 / 0.96 for step 2499
###Markdown
final accuracy of the validation set
###Code
# check final accuracy on validation set
if(VALIDATION_SIZE):
validation_accuracy = accuracy.eval(feed_dict={x: validation_images,
y_: validation_labels,
keep_prob: 1.0})
print('validation_accuracy => %.4f'%validation_accuracy)
plt.plot(x_range, train_accuracies,'-b', label='Training')
plt.plot(x_range, validation_accuracies,'-g', label='Validation')
plt.legend(loc='lower right', frameon=False)
plt.ylim(ymax = 1.1, ymin = 0.7)
plt.ylabel('accuracy')
plt.xlabel('step')
plt.show()
sess.close()
###Output
_____no_output_____
###Markdown
change the activation function from relu to elu
###Code
# first convolutional layer
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
# (40000,784) => (40000,28,28,1)
image = tf.reshape(x, [-1,image_width , image_height,1])
#print (image.get_shape()) # =>(40000,28,28,1)
h_conv1 = tf.nn.elu(conv2d(image, W_conv1) + b_conv1)
#print (h_conv1.get_shape()) # => (40000, 28, 28, 32)
h_pool1 = max_pool_2x2(h_conv1)
#print (h_pool1.get_shape()) # => (40000, 14, 14, 32)
# second convolutional layer
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.elu(conv2d(h_pool1, W_conv2) + b_conv2)
#print (h_conv2.get_shape()) # => (40000, 14,14, 64)
h_pool2 = max_pool_2x2(h_conv2)
#print (h_pool2.get_shape()) # => (40000, 7, 7, 64)
# densely connected layer
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
# (40000, 7, 7, 64) => (40000, 3136)
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.elu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
#print (h_fc1.get_shape()) # => (40000, 1024)
# dropout
keep_prob = tf.placeholder('float')
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
# readout layer for deep net
W_fc2 = weight_variable([1024, labels_count])
b_fc2 = bias_variable([labels_count])
y = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
# cost function
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
# optimisation function
train_step = tf.train.AdamOptimizer(LEARNING_RATE).minimize(cross_entropy)
# evaluation
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
# prediction function
#[0.1, 0.9, 0.2, 0.1, 0.1 0.3, 0.5, 0.1, 0.2, 0.3] => 1
predict = tf.argmax(y,1)
epochs_completed = 0
index_in_epoch = 0
num_examples = train_images.shape[0]
# serve data by batches
def next_batch(batch_size):
global train_images
global train_labels
global index_in_epoch
global epochs_completed
start = index_in_epoch
index_in_epoch += batch_size
    # when all training data have been used, the data are reordered randomly
if index_in_epoch > num_examples:
# finished epoch
epochs_completed += 1
# shuffle the data
perm = np.arange(num_examples)
np.random.shuffle(perm)
train_images = train_images[perm]
train_labels = train_labels[perm]
# start next epoch
start = 0
index_in_epoch = batch_size
assert batch_size <= num_examples
end = index_in_epoch
return train_images[start:end], train_labels[start:end]
# start TensorFlow session
init = tf.global_variables_initializer()
sess = tf.InteractiveSession()
sess.run(init)
###Output
_____no_output_____
###Markdown
elu function
###Code
import time
t1=time.time()
# visualisation variables
train_accuracies = []
validation_accuracies = []
x_range = []
display_step=1
for i in range(TRAINING_ITERATIONS):
#get new batch
batch_xs, batch_ys = next_batch(BATCH_SIZE)
# check progress on every 1st,2nd,...,10th,20th,...,100th... step
if i%display_step == 0 or (i+1) == TRAINING_ITERATIONS:
train_accuracy = accuracy.eval(feed_dict={x:batch_xs,
y_: batch_ys,
keep_prob: 1.0})
if(VALIDATION_SIZE):
validation_accuracy = accuracy.eval(feed_dict={ x: validation_images[0:BATCH_SIZE],
y_: validation_labels[0:BATCH_SIZE],
keep_prob: 1.0})
print('training_accuracy / validation_accuracy => %.2f / %.2f for step %d'%(train_accuracy, validation_accuracy, i))
validation_accuracies.append(validation_accuracy)
else:
print('training_accuracy => %.4f for step %d'%(train_accuracy, i))
train_accuracies.append(train_accuracy)
x_range.append(i)
# increase display_step
if i%(display_step*10) == 0 and i:
display_step *= 10
# train on batch
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys, keep_prob: DROPOUT})
t = time.time()-t1
print("running time is %g s" %(t))
# check final accuracy on validation set
if(VALIDATION_SIZE):
validation_accuracy = accuracy.eval(feed_dict={x: validation_images,
y_: validation_labels,
keep_prob: 1.0})
print('validation_accuracy => %.4f'%validation_accuracy)
plt.plot(x_range, train_accuracies,'-b', label='Training')
plt.plot(x_range, validation_accuracies,'-g', label='Validation')
plt.legend(loc='lower right', frameon=False)
plt.ylim(ymax = 1.1, ymin = 0.7)
plt.ylabel('accuracy')
plt.xlabel('step')
plt.show()
###Output
validation_accuracy => 0.9805
###Markdown
EECE-571M Course Project Modulation Classification Using Neural Networks Submission by Akshay Viswakumar (32971665) DNN Based Solution 0. Acquire DataThis section is just to download the RadioML2016.10a Dataset in case it is not present in the current directory.
###Code
# Download the dataset from opensig
# Note: If the RML2016.10a.tar.bz2 file is in the same directory as this notebook, the file will not be downloaded again.
from pathlib import Path
dset = Path("RML2016.10a.tar.bz2")
# Check if the File Exists
if(not dset.is_file()):
import urllib.request
urllib.request.urlretrieve('http://opendata.deepsig.io/datasets/2016.10/RML2016.10a.tar.bz2', 'RML2016.10a.tar.bz2')
# Decompress the RML2016.10a.tar.bz2 file into RML2016.10a.tar file
# Note: If the RML2016.10a.tar file exists, then this operation is skipped.
import sys
import os
import bz2
tarfile = Path("RML2016.10a.tar")
# Check if the Tar File Exists
if(not tarfile.is_file()):
zipfile = bz2.BZ2File('./RML2016.10a.tar.bz2') # open the file
data = zipfile.read() # get the decompressed data
#write the .tar file
open('./RML2016.10a.tar', 'wb').write(data) # write a uncompressed file
# Extract the .tar file to get RML2016.10a_dict.pkl
# Note: If the RML2016.10a.tar file exists, then this operation is skipped.
import tarfile
pklFile = Path("RML2016.10a_dict.pkl")
# Check if the pkl File Exists
if(not pklFile.is_file()):
my_tar = tarfile.open('./RML2016.10a.tar')
my_tar.extractall('./') # specify which folder to extract to
my_tar.close()
###Output
_____no_output_____
###Markdown
1. Extract the Pickle File and Load Dataset
###Code
# Extract the pickle file
import pickle
import numpy as np
Xd = pickle.load(open("RML2016.10a_dict.pkl",'rb'),encoding="bytes")
snrs,mods = map(lambda j: sorted(list(set(map(lambda x: x[j], Xd.keys())))), [1,0])
X = []
lbl = []
for mod in mods:
for snr in snrs:
X.append(Xd[(mod,snr)])
for i in range(Xd[(mod,snr)].shape[0]): lbl.append((mod,snr))
X = np.vstack(X)
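# Shape sanity check (illustrative): RML2016.10a holds 11 modulations x 20 SNRs x 1000 examples,
# each stored as a 2 x 128 I/Q array, so X should come out as (220000, 2, 128) with len(lbl) == 220000.
print(X.shape, len(lbl), len(mods), len(snrs))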
###Output
_____no_output_____
###Markdown
2. Import Required Packages
###Code
# Import Necessary Packages
%matplotlib inline
import os
import random
import tensorflow.keras.utils
import tensorflow.keras.models as models
from tensorflow.keras.layers import Reshape,Dense,Dropout,Activation,Flatten
from tensorflow.keras.layers import GaussianNoise
from tensorflow.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D, BatchNormalization, LayerNormalization
from tensorflow.keras.regularizers import *
from tensorflow.keras.optimizers import *
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow.keras
import numpy as np
###Output
_____no_output_____
###Markdown
3. Data Pre-ProcessingSplit the data into training, testing and validation sets.
###Code
random.seed(777) # To ensure that the dataset is split in a deterministic way.
np.random.seed(777) # To ensure that the dataset is split in a deterministic way.
# This section of the code shuffles and splits the into Training, Testing and Validation Sets.
index = np.arange(0,220000)
random.shuffle(index)
trainIdx = index[0:110000]
testIdx = index[110000:220000]
trainX = X[trainIdx]
# Create Validation Data Set
indexVal = np.arange(0,110000)
random.shuffle(indexVal)
realTrainIdx = indexVal[0:99000]
valIdx = indexVal[99000:110000]
# Training Data
realTrainX = trainX[realTrainIdx]
X_trainDNN = realTrainX # For DNN
X_trainCNN = np.expand_dims(realTrainX, axis=-1) # For CNN
# Validation Data
validX = trainX[valIdx]
X_validDNN = validX # For DNN
X_validCNN = np.expand_dims(validX, axis=-1) # For CNN
# Actual Testing Data
testX = X[testIdx]
X_testDNN = testX # For DNN
X_testCNN = np.expand_dims(testX, axis=-1) # For CNN
# This section of the code prepares labels using one-hot encoding
# One Hot Encode Labels
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
lb.fit(np.asarray(lbl)[:,0])
print(lb.classes_)
lbl_encoded=lb.transform(np.asarray(lbl)[:,0])
ytrain=lbl_encoded[trainIdx]
# Labels for Training Data
y_train = ytrain[realTrainIdx]
# Labels for Validation Data
y_valid = ytrain[valIdx]
# Labels for Testing Data
y_test=lbl_encoded[testIdx]
###Output
[b'8PSK' b'AM-DSB' b'AM-SSB' b'BPSK' b'CPFSK' b'GFSK' b'PAM4' b'QAM16'
b'QAM64' b'QPSK' b'WBFM']
###Markdown
4. DNN Based SolutionSection 4 will deal with the following - 4.1. Prepare the feature vector - 4.2. Construct the DNN Model Using Keras- 4.3. Train the model.- 4.4. Test trained model on Test Dataset- 4.5. Visualize Results 4.1 Preparing the Feature Vector
###Code
# Helper Methods for Feature Extraction
# Function To Compute Raw-Moment of Data
def rawMoment(data,n):
# Calculate the nth Raw Moment of The Data
dataRaised = np.power(data,n)
nthMoment = np.array([np.mean(dataRaised,axis=1)])
return nthMoment.T
# Function To Compute (x+y)th Order Moment
def highOrdMoment(data,x,y):
complexData = data[:,0,:]+(1j*data[:,1,:]) # Data In Complex Form
complexDataConj = np.conj(complexData) # Complex Conjugate
finDat = np.power(complexData,x-y)*np.power(complexDataConj,y)
finDatMean = np.array([np.mean(finDat,axis=1)]).T
return finDatMean
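# Worked example (illustrative): for a single toy 4-sample signal with I = [1, 0, -1, 0] and
# Q = [0, 1, 0, -1], i.e. s = [1, j, -1, -j], the mixed moment M_{2,1} = mean(s * conj(s)) = mean(|s|^2) = 1.
_toy = np.zeros((1, 2, 4))
_toy[0, 0, :] = [1.0, 0.0, -1.0, 0.0]   # I channel
_toy[0, 1, :] = [0.0, 1.0, 0.0, -1.0]   # Q channel
print(highOrdMoment(_toy, 2, 1))         # expected [[1.+0.j]]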
# Feature Extraction Methods
# Feature 1: Cumulant C20
def getC20(data):
m20 = highOrdMoment(data,2,0)
return np.abs(m20)
# Feature 2: Cumulant C21
def getC21(data):
m21 = highOrdMoment(data,2,1)
return np.abs(m21)
# Feature 3: Cumulant C40
def getC40(data):
m40 = highOrdMoment(data,4,0)
m20 = highOrdMoment(data,2,0)
c40 = m40 - (3*np.square(m20))
return np.abs(c40)
# Feature 4: Cumulant C41
def getC41(data):
m41 = highOrdMoment(data,4,1)
m21 = highOrdMoment(data,2,1)
m20 = highOrdMoment(data,2,0)
c41 = m41 - (3*m20*m21)
return np.abs(c41)
# Feature 5: Cumulant C42
def getC42(data):
m42 = highOrdMoment(data,4,2)
    m21 = highOrdMoment(data,2,1)
m20 = highOrdMoment(data,2,0)
c42 = m42 - np.square(m20) - (2*np.square(m21))
# Norm Code
c21 = getC21(data)
c42Norm = c42 / np.square(c21)
return np.abs(c42Norm)
# Feature 6: Cumulant C63
def getC63(data):
m63 = highOrdMoment(data,6,3)
m20 = highOrdMoment(data,2,0)
m21 = highOrdMoment(data,2,1)
m22 = highOrdMoment(data,2,2)
m40 = highOrdMoment(data,4,0)
m41 = highOrdMoment(data,4,1)
m42 = highOrdMoment(data,4,2)
t1 = m63 - (9*m21*m42) + (12*np.power(m21,3))
t2 = (-6*m20*m40) + (18*np.square(m20)*m21)
c63 = t1+t2
return np.abs(c63)
# Feature 7: Cumulant C80
def getC80(data):
m80 = highOrdMoment(data,8,0)
m60 = highOrdMoment(data,6,0)
m40 = highOrdMoment(data,4,0)
m20 = highOrdMoment(data,2,0)
t1 = m80 - (35*np.square(m40))
    t2 = (-28*m60*m20) + (420*m40*np.square(m20))
t3 = (-630*np.power(m20,4))
c80 = t1+t2+t3
return np.abs(c80)
# Feature 8: Kurtosis
def getKurtosis(data):
complexData = data[:,0,:]+(1j*data[:,1,:]) # Data In Complex Form
meanComplexData = np.array([np.mean(complexData,axis=1)]).T
# Find fourth central moment
fourthPower = np.power(complexData - meanComplexData,4)
centralMoment4 = (np.array([np.sum(fourthPower,axis=1)]).T)/fourthPower.shape[1]
# Variance
var = np.array([np.var(complexData,axis=1)]).T
kurt = np.abs(centralMoment4)/(np.square(var))
return kurt
# Feature 9: Skewness
def getSkewness(data):
complexData = data[:,0,:]+(1j*data[:,1,:]) # Data In Complex Form
meanComplexData = np.array([np.mean(complexData,axis=1)]).T
# Find third central moment
thirdPower = np.power(complexData - meanComplexData,3)
centralMoment3 = (np.array([np.sum(thirdPower,axis=1)]).T)/thirdPower.shape[1]
# Standard Deviation
std = np.array([np.std(complexData,axis=1)]).T
skew = np.abs(centralMoment3)/(np.power(std,3))
return skew
# Feature 10: Peak to RMS Power Ratio
def getPR(data):
complexData = data[:,0,:]+(1j*data[:,1,:]) # Data In Complex Form
absSquare = np.square(np.abs(complexData))
absSquareMax = np.array([np.max(absSquare,axis=1)]).T
# Calculate RMS (without Root)
rms = np.array([np.mean(absSquare,axis=1)]).T
# Calculate PR
PR = absSquareMax/rms
return PR
# Feature 11: Peak to Average Power Ratio
def getPA(data):
complexData = data[:,0,:]+(1j*data[:,1,:]) # Data In Complex Form
absData = np.abs(complexData)
absMax = np.array([np.max(absData,axis=1)]).T
# Calculate Mean
meanData = np.array([np.mean(absData,axis=1)]).T
# Calculate PA
PA = absMax / meanData
return PA
# Function to Compute Features and Compile Feature Vector
def createIPVector(data):
cumulantC20 = getC20(data)
cumulantC21 = getC21(data)
cumulantC40 = getC40(data)
cumulantC41 = getC41(data)
cumulantC42 = getC42(data)
cumulantC63 = getC63(data)
cumulantC80 = getC80(data)
kurtosis = getKurtosis(data)
skewness = getSkewness(data)
pr = getPR(data)
pa = getPA(data)
# Concat
xtrainIP = np.concatenate((cumulantC20,cumulantC21,cumulantC40,cumulantC41,cumulantC42,cumulantC63,cumulantC80,kurtosis,skewness,pr,pa),axis=1)
return xtrainIP
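# Quick shape check (illustrative): every raw 2 x 128 signal is summarised by 11 scalar features,
# so a batch of N examples maps to an (N, 11) matrix, which is the input the DNN below is built for.
print(createIPVector(X[:5]).shape)   # expected (5, 11)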
###Output
_____no_output_____
###Markdown
Feature Vectors for DNN Are Created Here
###Code
# Generate Input Feature Vector
# Generate Training, Validation and Testing Input Vectors
xtrainIP = createIPVector(realTrainX)
xvalidIP = createIPVector(validX)
xtestIP = createIPVector(testX)
###Output
_____no_output_____
###Markdown
4.2 Construct the DNN Model Using Keras
###Code
# Network Design Parameters
# Structure
numInput = 11 # Number of Input Nodes
numHid1 = 4096 # Number of Nodes in the First Hidden Layer
numHid2 = 2048 # Number of Nodes in the Second Hidden Layer
numHid3 = 1024 # Number of Nodes in the Third Hidden Layer
numOutput = 11 # Number of Output Nodes
# Activation Functions
activationHidden = 'relu'
activationOutput = 'softmax'
# Loss Function
lossFunction = 'categorical_crossentropy'
# Learning Algorithm
netOptimizer = 'adam'
# Callbacks - Implements Early Stopping and Weight Backup
callbackList = [
tensorflow.keras.callbacks.ModelCheckpoint('DNN-AV-Weights_best.h5', monitor='val_loss', verbose=0, save_best_only=True, mode='auto'),
tensorflow.keras.callbacks.EarlyStopping(monitor='val_loss', patience=7, verbose=0, mode='auto')]
# Construct Network
modelDNN = models.Sequential()
modelDNN.add(Dense(numHid1,input_shape=(numInput,), activation=activationHidden, name='Hidden_Layer_1'))
modelDNN.add(Dropout(0.5))
modelDNN.add(Dense(numHid2, activation=activationHidden, name='Hidden_Layer_2'))
modelDNN.add(Dropout(0.5))
modelDNN.add(Dense(numHid3, activation=activationHidden, name='Hidden_Layer_3'))
modelDNN.add(Dropout(0.5))
modelDNN.add(Dense(numOutput, activation=activationOutput, name='Output_Layer'))
modelDNN.compile(loss=lossFunction, optimizer=netOptimizer,metrics=['categorical_accuracy'])
modelDNN.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Hidden_Layer_1 (Dense) (None, 4096) 49152
_________________________________________________________________
dropout (Dropout) (None, 4096) 0
_________________________________________________________________
Hidden_Layer_2 (Dense) (None, 2048) 8390656
_________________________________________________________________
dropout_1 (Dropout) (None, 2048) 0
_________________________________________________________________
Hidden_Layer_3 (Dense) (None, 1024) 2098176
_________________________________________________________________
dropout_2 (Dropout) (None, 1024) 0
_________________________________________________________________
Output_Layer (Dense) (None, 11) 11275
=================================================================
Total params: 10,549,259
Trainable params: 10,549,259
Non-trainable params: 0
_________________________________________________________________
###Markdown
4.3 Train DNN Model
###Code
# Train Model
historyDNN = modelDNN.fit(xtrainIP, y_train, batch_size=1000, epochs=100, verbose=2, validation_data=(xvalidIP,y_valid),callbacks=callbackList)
# Back up history for plotting outputs
np_loss_history = np.array(historyDNN.history["loss"])
np.save('DNN-lossHist.npy',np_loss_history)
np_accu_history = np.array(historyDNN.history["categorical_accuracy"])
np.save('DNN-accuHist.npy',np_accu_history)
np_val_loss_history = np.array(historyDNN.history["val_loss"])
np.save('DNN-valLossHist.npy',np_val_loss_history)
np_val_accu_history = np.array(historyDNN.history["val_categorical_accuracy"])
np.save('DNN-valAccuHist.npy',np_val_accu_history)
###Output
_____no_output_____
###Markdown
Code Block Below Plots the Loss and Accuracy Curves For Training and Validation Sets
###Code
# Plots the loss and accuracy curves from training
# Load Recently Backed-Up Details of History
lHistDNN = np.load('DNN-lossHist.npy')
aHistDNN = np.load('DNN-accuHist.npy')
vLHistDNN = np.load('DNN-valLossHist.npy')
vAHistDNN = np.load('DNN-valAccuHist.npy')
# Show loss curves
fig, (ax1,ax2) = plt.subplots(nrows=1, ncols=2,figsize=(17, 6))
ax1.set_title('DNN-AV - Loss Across Epochs')
ax1.plot(lHistDNN, label='Training Loss')
ax1.plot(vLHistDNN, label='Validation Loss')
ax1.set_xlabel('Epochs')
ax1.set_xticks(np.arange(0,len(lHistDNN)+1,5), minor=False)
ax1.set_xticklabels(np.arange(1,len(lHistDNN)+1,5), fontdict=None, minor=False)
ax1.set_ylabel('Loss')
ax1.grid()
ax1.legend()
ax2.set_title('DNN-AV - Performance Metric')
ax2.plot(aHistDNN, label='Training Accuracy')
ax2.plot(vAHistDNN, label='Validation Accuracy')
ax2.set_xlabel('Epochs')
ax2.set_xticks(np.arange(0,len(lHistDNN),5), minor=False)
ax2.set_xticklabels(np.arange(0,len(lHistDNN),5), fontdict=None, minor=False)
ax2.set_ylabel('Classification Accuracy')
ax2.grid()
ax2.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
4.4 Test DNN Model on Testing DatasetDuring training, the weights yielding the least validation loss have been stored. These weights are reloaded into the model before we evaluate the network's performance on the test dataset.
###Code
# Re-load Best Weights from Training
modelDNN.load_weights('DNN-AV-Weights_best.h5')
# Evaluate Test Dataset Using Trained DNN Model
modelDNN.evaluate(xtestIP,y_test)
###Output
110000/110000 [==============================] - 24s 216us/sample - loss: 1.5656 - categorical_accuracy: 0.4256
###Markdown
4.5 Visualize Results
###Code
# Helper Functions to Plot Confusion Matrix
# Function to Extract Test Data of Specific SNR
def extractTest(data,labels,labelsEncoded,testIndex,snr):
testData = data[testIndex]
labelArray = np.array([labels])
    testLabels = labelArray[:, testIndex, :]
    testLabelsEncoded = labelsEncoded[testIndex]
idxOP = list()
# Loop Through Label Array To Get Index of Specific SNR
for i in range(0,testLabels.shape[1]):
if testLabels[0,i,1].decode('ascii')==snr:
idxOP.append(i)
# Return Subset of Test Data and Corresponding Labels
opTestData = testData[idxOP,:,:]
opTestLabel = testLabelsEncoded[idxOP]
return opTestData, opTestLabel
def plot_confusion_matrix(cm, titleAdd, title='DNN-AV - Confusion matrix', cmap=plt.cm.Blues, labels=[]):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title+titleAdd)
plt.colorbar()
tick_marks = np.arange(len(labels))
plt.xticks(tick_marks, labels, rotation=45)
plt.yticks(tick_marks, labels)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Confusion Matrix Function
def prepConfMat(testData,testLabel,predTestLabel,mods,title):
modString = list()
for i in range(0,len(mods)):
modString.append(mods[i].decode('ascii'))
conf = np.zeros([len(mods),len(mods)])
confnorm = np.zeros([len(mods),len(mods)])
for i in range(0,testData.shape[0]):
j = list(testLabel[i,:]).index(1)
k = int(np.argmax(predTestLabel[i,:]))
conf[j,k] = conf[j,k] + 1
for i in range(0,len(mods)):
confnorm[i,:] = conf[i,:] / np.sum(conf[i,:])
plot_confusion_matrix(confnorm, title, labels=modString)
###Output
_____no_output_____
###Markdown
Plot Confusion Matrix for All SNRs
###Code
# Plot confusion matrix
test_Y_hatDNN = modelDNN.predict(xtestIP, batch_size=1024)
prepConfMat(xtestIP,y_test,test_Y_hatDNN,mods,' (All SNRs)')
###Output
_____no_output_____
###Markdown
Plot Confusion Matrix for Specific SNRs
###Code
# SNR Value Can be Changed to View Confusion Matrix at another SNR
snr = '-20'
title = ' (SNR = '+snr+')'
x_testSNR, y_TestSNR = extractTest(X,lbl,lbl_encoded,testIdx,snr)
xtestSNRFeat = createIPVector(x_testSNR)
y_hat_snr = modelDNN.predict(xtestSNRFeat, batch_size=1024)
prepConfMat(xtestSNRFeat,y_TestSNR,y_hat_snr,mods,title)
# SNR Value Can be Changed to View Confusion Matrix at another SNR
snr = '-8'
title = ' (SNR = '+snr+')'
x_testSNR, y_TestSNR = extractTest(X,lbl,lbl_encoded,testIdx,snr)
xtestSNRFeat = createIPVector(x_testSNR)
y_hat_snr = modelDNN.predict(xtestSNRFeat, batch_size=1024)
prepConfMat(xtestSNRFeat,y_TestSNR,y_hat_snr,mods,title)
# SNR Value Can be Changed to View Confusion Matrix at another SNR
snr = '0'
title = ' (SNR = '+snr+')'
x_testSNR, y_TestSNR = extractTest(X,lbl,lbl_encoded,testIdx,snr)
xtestSNRFeat = createIPVector(x_testSNR)
y_hat_snr = modelDNN.predict(xtestSNRFeat, batch_size=1024)
prepConfMat(xtestSNRFeat,y_TestSNR,y_hat_snr,mods,title)
# SNR Value Can be Changed to View Confusion Matrix at another SNR
snr = '18'
title = ' (SNR = '+snr+')'
x_testSNR, y_TestSNR = extractTest(X,lbl,lbl_encoded,testIdx,snr)
xtestSNRFeat = createIPVector(x_testSNR)
y_hat_snr = modelDNN.predict(xtestSNRFeat, batch_size=1024)
prepConfMat(xtestSNRFeat,y_TestSNR,y_hat_snr,mods,title)
###Output
_____no_output_____
###Markdown
Print and Plot Average Accuracy Across All Classes For Each SNR
###Code
# Get the test accuracy for different SNRs
acc = {}
acc_array=[]
snr_array=np.asarray(lbl)[:,1]
lb_temp = preprocessing.LabelBinarizer()
lb_temp.fit(snr_array)
temp_array=lb_temp.classes_
snr_label_array = []
snr_label_array.append(temp_array[6])
snr_label_array.append(temp_array[4])
snr_label_array.append(temp_array[3])
snr_label_array.append(temp_array[2])
snr_label_array.append(temp_array[1])
snr_label_array.append(temp_array[0])
snr_label_array.append(temp_array[9])
snr_label_array.append(temp_array[8])
snr_label_array.append(temp_array[7])
snr_label_array.append(temp_array[5])
snr_label_array.append(temp_array[10])
snr_label_array.append(temp_array[16])
snr_label_array.append(temp_array[17])
snr_label_array.append(temp_array[18])
snr_label_array.append(temp_array[19])
snr_label_array.append(temp_array[11])
snr_label_array.append(temp_array[12])
snr_label_array.append(temp_array[13])
snr_label_array.append(temp_array[14])
snr_label_array.append(temp_array[15])
y_test_snr=snr_array[testIdx]
for snr in snr_label_array:
test_X_i_temp = testX[np.where(y_test_snr==snr)]
test_X_i = createIPVector(test_X_i_temp)
test_Y_i = y_test[np.where(y_test_snr==snr)]
test_Y_i_hat = modelDNN.predict(test_X_i)
conf = np.zeros([len(mods),len(mods)])
confnorm = np.zeros([len(mods),len(mods)])
for i in range(0,test_X_i.shape[0]):
j = list(test_Y_i[i,:]).index(1)
k = int(np.argmax(test_Y_i_hat[i,:]))
conf[j,k] = conf[j,k] + 1
for i in range(0,len(mods)):
confnorm[i,:] = conf[i,:] / np.sum(conf[i,:])
cor = np.sum(np.diag(conf))
ncor = np.sum(conf) - cor
print("Overall Accuracy: ", cor / (cor+ncor),"for SNR",snr)
acc[snr] = 1.0*cor/(cor+ncor)
acc_array.append(1.0*cor/(cor+ncor))
print("Random Guess Accuracy:",1/11)
# Show loss curves
plt.figure(figsize=(8, 6))
plt.title('DNN-AV - Accuracy Across all SNRs')
plt.plot(np.arange(-20,20,2), acc_array,marker='.',markersize=12)
plt.xlabel('SNR')
plt.xticks(np.arange(-20,20,2))
plt.ylabel('Classification Accuracy')
plt.grid()
plt.show()
###Output
Overall Accuracy: 0.09389079113353757 for SNR b'-20'
Overall Accuracy: 0.09277414669571532 for SNR b'-18'
Overall Accuracy: 0.08871410279860983 for SNR b'-16'
Overall Accuracy: 0.09218181818181818 for SNR b'-14'
Overall Accuracy: 0.09027153389678115 for SNR b'-12'
Overall Accuracy: 0.10211202938475666 for SNR b'-10'
Overall Accuracy: 0.12484076433121019 for SNR b'-8'
Overall Accuracy: 0.16205035971223022 for SNR b'-6'
Overall Accuracy: 0.21764598206113858 for SNR b'-4'
Overall Accuracy: 0.3380671338989303 for SNR b'-2'
Overall Accuracy: 0.48577089337175794 for SNR b'0'
Overall Accuracy: 0.5942028985507246 for SNR b'2'
Overall Accuracy: 0.6728737690241718 for SNR b'4'
Overall Accuracy: 0.7283528352835283 for SNR b'6'
Overall Accuracy: 0.7553903345724907 for SNR b'8'
Overall Accuracy: 0.7650678733031674 for SNR b'10'
Overall Accuracy: 0.7731921110299489 for SNR b'12'
Overall Accuracy: 0.7797356828193832 for SNR b'14'
Overall Accuracy: 0.7877409967260822 for SNR b'16'
Overall Accuracy: 0.77068345323741 for SNR b'18'
Random Guess Accuracy: 0.09090909090909091
###Markdown
Imports
###Code
import numpy as np
from sklearn import datasets
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Activation functions
###Code
def relu(Z):
"""
    Computes ReLU (Rectified Linear Unit) activation on Z.
Parameters:
Z (<numpy.ndarray>)
Returns:
A (<numpy.ndarray>): Z passed to the relu
cache (<numpy.ndarray>): input (for backward propagation)
"""
A = np.maximum(0, Z)
cache = Z
return (A, cache)
def sigmoid(Z):
"""
Computes sigmoid activation on Z.
Parameters:
Z (<numpy.ndarray>)
Returns:
        A (<numpy.ndarray>): Z passed to the sigmoid
cache (<numpy.ndarray>): input (for backward propagation)
"""
A = 1 / (1 + np.exp(-Z))
cache = Z
return (A, cache)
def tanh(Z):
"""
Computes tanh activation on Z.
Parameters:
Z (<numpy.ndarray>)
Returns:
A (<numpy.ndarray>): Z passed to tanh
cache (<numpy.ndarray>): input (for backward propagation)
"""
A = np.tanh(Z)
cache = Z
return (A, cache)
def dummy(Z):
"""
Dummy activation function that returns the same input
Parameters:
Z (<numpy.ndarray>)
Returns:
A (<numpy.ndarray>): Z same as it is
cache (<numpy.ndarray>): input (for backward propagation)
"""
A = Z
cache = Z
return (A, cache)
###Output
_____no_output_____
###Markdown
Weights Initialization TODO: add different initializations (He, Xavier, random, zeros)? See https://datascience-enthusiast.com/DL/Improving-DeepNeural-Networks-Initialization.html
###Code
def init_params(layers):
"""
Initializes the weights for the (deep) neural network layers using Xavier's Initialization.
Parameters:
layers (tuple): tuple of layers' number of nodes and activation layers(including input layer) tuples
((10, ''), (5, 'relu'), (1, 'sigmoid'))
Returns:
params (dict): dictionary containing weights and bias per layer
"Wn": <numpy.ndarray> weights for layer n
"bn": <numpy.ndarray> bias for layer n
"""
activation_func = {'relu': relu,
'sigmoid': sigmoid,
'tanh': tanh,
'dummy': dummy}
params = {}
layer_dims, activations = zip(*layers)
nlayers = len(layer_dims)
for l in range(1, nlayers):
params[f"W{l}"] = np.random.rand(layer_dims[l], layer_dims[l-1]) \
* np.sqrt(1.0/(layer_dims[l]+layer_dims[l-1]))
# params[f"W{l}"] = np.random.rand(layer_dims[l], layer_dims[l-1]) * 0.01
# params[f"W{l}"] = np.random.randn(layer_dims[l], layer_dims[l-1]) \
# * np.sqrt(2/(layer_dims[l]+layer_dims[l-1]))
# params[f"W{l}"] = np.random.randn(layer_dims[l], layer_dims[l-1]) / np.sqrt(layer_dims[l-1]) #*0.01
params[f"b{l}"] = np.zeros((layer_dims[l], 1))
params[f"A{l}"] = activation_func[activations[l]]
return params
###Output
_____no_output_____
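###Markdown
The note above lists other initializations (He, plain random, zeros) as a to-do. As a minimal sketch added here (not part of the original notebook), a He-initialized variant would only change the weight scale, which is the usual choice when the hidden layers use ReLU:
###Code
def init_params_he(layers):
    """Same interface as init_params, but weights use He initialization (sqrt(2/fan_in))."""
    activation_func = {'relu': relu, 'sigmoid': sigmoid, 'tanh': tanh, 'dummy': dummy}
    params = {}
    layer_dims, activations = zip(*layers)
    for l in range(1, len(layer_dims)):
        # zero-mean Gaussian scaled by sqrt(2 / fan_in)
        params[f"W{l}"] = np.random.randn(layer_dims[l], layer_dims[l-1]) * np.sqrt(2.0 / layer_dims[l-1])
        params[f"b{l}"] = np.zeros((layer_dims[l], 1))
        params[f"A{l}"] = activation_func[activations[l]]
    return params
###Output
_____no_output_____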
###Markdown
Forward Propagation
###Code
def forward_propagate_layer(A_prev, W, b, activate_func):
"""
Applies forward propagation (linear & activation).
Parameters:
A_prev (<numpy.ndarray>): this layer's input (last layer's output)
params (dict): dictionary containing weights and bias per layer
"Wn": <numpy.ndarray> weights for layer n
"bn": <numpy.ndarray> bias for layer n
"An": (<function>): activation function
Returns:
A (<numpy.ndarray>): layer output (post-activation)
cache (tuple): forward propagation caches for backward
(linear_cache, (activation_cache, activation_name))
"""
# print(f"A_prev: ({A_prev.shape})")
# print(f"W: ({W.shape})")
# print(f"b: ({b.shape})")
Z = W @ A_prev + b
linear_cache = (A_prev, W, b)
A, activation_cache = activate_func(Z)
cache = (linear_cache, (activation_cache, activate_func.__name__))
return (A, cache)
def forward_propagate(X, params):
"""
Forward propagates X through all model layers.
Parameters:
X (list): this layer's input (last layer's output)
params (dict): dictionary containing weights and bias per layer
"Wn": <numpy.ndarray> weights for layer n
"bn": <numpy.ndarray> bias for layer n
"An": (<function>): activation function
Returns:
A (<numpy.ndarray>): model output
cache (list): forward propagation caches for backward
[(linear_cache, activation_cache), ...]
"""
caches = []
A = X
nlayers = len(params) // 3
for l in range(1, nlayers+1):
A, cache = forward_propagate_layer(A,
params[f"W{l}"],
params[f"b{l}"],
params[f"A{l}"])
caches.append(cache)
return (A, caches)
###Output
_____no_output_____
###Markdown
Cost Computation
###Code
def compute_cost(Yh, Y):
"""
Computes cost using the cross-entropy / log-loss function
Parameters:
Yh (<numpy.ndarray>): predicted output (y_hat)
Y (<numpy.ndarray>): true output (y)
Returns:
cost (float): cost value
"""
# print(f"Yh: {Yh.shape}")
# print(f"Y : {Y.shape}")
m = float(Y.shape[1])
# cost = ((Y @ np.log(Yh.T)) + ((1 - Y) @ np.log((1-Yh).T))) / -m
cost = (1./m) * (-np.dot(Y,np.log(Yh).T) - np.dot(1-Y, np.log(1-Yh).T))
cost = np.squeeze(cost)
return cost
###Output
_____no_output_____
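###Markdown
One practical caveat worth noting: if the network ever outputs exactly 0 or 1, `np.log` yields `-inf`/`nan`. A hedged variant added here (not in the original notebook) clips the predictions before taking the log:
###Code
def compute_cost_stable(Yh, Y, eps=1e-12):
    """Cross-entropy cost with predictions clipped away from 0 and 1 for numerical stability."""
    m = float(Y.shape[1])
    Yh = np.clip(Yh, eps, 1 - eps)
    cost = (1./m) * (-np.dot(Y, np.log(Yh).T) - np.dot(1 - Y, np.log(1 - Yh).T))
    return np.squeeze(cost)
###Output
_____no_output_____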
###Markdown
Backward Propagation
###Code
def backward_propagate_layer(dA, cache):
"""
Applies backward propagation (linear & activation).
Parameters:
dA (<numpy.ndarray>): current layer's post-activation gradient
cache (tuple): forward propagation caches for backward
(linear_cache, (activation_cache, activation_name))
Returns:
dA_prev (<numpy.ndarray>): Gradient with respect to previous layer's input (A_prev)
        dW (<numpy.ndarray>): Gradient with respect to current layer's weights (W)
        db (<numpy.ndarray>): Gradient with respect to current layer's bias (b)
"""
def relu_backward(dA, cache):
"""
ReLU backward propagation implementation.
Parameters:
dA (<numpy.ndarray>): post-activation gradient
            cache (<numpy.ndarray>): activation input (Z)
Returns:
dZ (<numpy.ndarray>): Gradient with respect to activation input (Z)
"""
dZ = np.copy(dA)
dZ[cache <= 0] = 0
return dZ
def sigmoid_backward(dA, cache):
"""
sigmoid backward propagation implementation.
Parameters:
dA (<numpy.ndarray>): post-activation gradient
            cache (<numpy.ndarray>): activation input (Z)
Returns:
dZ (<numpy.ndarray>): Gradient with respect to activation input (Z)
"""
s, _ = sigmoid(cache)
dZ = dA * s * (1 - s)
return dZ
def tanh_backward(dA, cache):
"""
tanh backward propagation implementation.
Parameters:
dA (<numpy.ndarray>): post-activation gradient
            cache (<numpy.ndarray>): activation input (Z)
Returns:
dZ (<numpy.ndarray>): Gradient with respect to activation input (Z)
"""
t, _ = tanh(cache)
dZ = dA * (1 - np.power(t, 2))
return dZ
def dummy_backward(dA, cache):
"""
        Dummy (identity) backward propagation implementation.
Parameters:
dA (<numpy.ndarray>): post-activation gradient
            cache (<numpy.ndarray>): activation input (Z)
Returns:
dZ (<numpy.ndarray>): Gradient with respect to activation input (Z)
"""
        dZ = dA
        return dZ
activation_backward_func = {'relu': relu_backward,
'sigmoid': sigmoid_backward,
'tanh': tanh_backward,
'dummy': dummy_backward}
linear_cache, (activation_cache, activation_name) = cache
# Activation backward propagation
dZ = activation_backward_func[activation_name](dA, activation_cache)
A_prev, W, b = linear_cache
m = float(A_prev.shape[1])
# Linear backward propagation
dA_prev = W.T @ dZ
dW = (dZ @ A_prev.T) / m
db = np.sum(dZ, 1, keepdims=True) / m
return (dA_prev, dW, db)
def backward_propagate(Yh, Y, caches):
"""
Backward propagates Error through all model layers.
Parameters:
Yh (<numpy.ndarray>): predicted output (y_hat)
Y (<numpy.ndarray>): true output (y)
cache (list): forward propagation caches
[(linear_cache, activation_cache), ...]
Returns:
grads (dict): dictionary containing parameters' gradients
"dAn": <numpy.ndarray> weights for layer n (*deprecated)
"dWn": <numpy.ndarray> weights for layer n
"dbn": <numpy.ndarray> bias for layer n
"""
grads = {}
nlayers = len(caches)
# grads[f"dA{nlayers}"] = (Yh - Y) / ((1 - Yh) * Yh)
grads[f"dA{nlayers}"] = - (np.divide(Y, Yh) - np.divide(1 - Y, 1 - Yh))
for l in range(nlayers, 0, -1):
current_cache = caches[l-1]
dA_prev, dW, db = backward_propagate_layer(grads[f"dA{l}"],
current_cache)
grads[f"dA{l-1}"] = dA_prev
grads[f"dW{l}"] = dW
grads[f"db{l}"] = db
return grads
###Output
_____no_output_____
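###Markdown
Before trusting the gradients, a useful sanity check (added here as a sketch, not in the original notebook) is to compare one analytic gradient entry against a centered finite difference, reusing `forward_propagate`, `compute_cost` and `backward_propagate` defined above:
###Code
def grad_check_entry(X, Y, params, layer=1, i=0, j=0, eps=1e-6):
    """Compares the analytic dW[layer][i, j] with a centered finite-difference estimate."""
    A, caches = forward_propagate(X, params)
    grads = backward_propagate(A, Y, caches)
    analytic = grads[f"dW{layer}"][i, j]
    # perturb the single weight up and down
    params[f"W{layer}"][i, j] += eps
    cost_plus = compute_cost(forward_propagate(X, params)[0], Y)
    params[f"W{layer}"][i, j] -= 2 * eps
    cost_minus = compute_cost(forward_propagate(X, params)[0], Y)
    params[f"W{layer}"][i, j] += eps  # restore the original weight
    numeric = (cost_plus - cost_minus) / (2 * eps)
    return analytic, numeric
###Output
_____no_output_____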
###Markdown
Update Parameters (with Gradient Descent)
###Code
def update_params(params, grads, lr):
"""
Apply Gradient Descent to update parameters using
computed gradients and learning rate.
Parameters:
params (dict): dictionary containing weights and bias per layer
"Wn": <numpy.ndarray> weights for layer n
"bn": <numpy.ndarray> bias for layer n
"An": (<function>): activation function
grads (dict): dictionary containing parameters' gradients
"dAn": <numpy.ndarray> weights for layer n (*deprecated)
"dWn": <numpy.ndarray> weights for layer n
"dbn": <numpy.ndarray> bias for layer n
lr (float): learning rate
Returns:
params (dict): *updated dictionary containing weights and bias per layer
"Wn": <numpy.ndarray> weights for layer n
"bn": <numpy.ndarray> bias for layer n
"An": (<function>): activation function
"""
nlayers = len(params) // 3
# print(nlayers)
for l in range(1, nlayers+1):
params[f"W{l}"] -= (lr * grads[f"dW{l}"])
params[f"b{l}"] -= (lr * grads[f"db{l}"])
return params
###Output
_____no_output_____
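###Markdown
Plain gradient descent is the only update rule implemented above. As an optional sketch added here (not part of the original code), the same loop could use momentum by threading a `velocity` dictionary, initially empty, through successive calls:
###Code
def update_params_momentum(params, grads, velocity, lr, beta=0.9):
    """Gradient descent with momentum; `velocity` holds one array per W/b parameter."""
    nlayers = len(params) // 3
    for l in range(1, nlayers + 1):
        for p in (f"W{l}", f"b{l}"):
            velocity.setdefault(p, np.zeros_like(params[p]))
            velocity[p] = beta * velocity[p] - lr * grads[f"d{p}"]
            params[p] += velocity[p]
    return params, velocity
###Output
_____no_output_____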
###Markdown
Putting it all together
###Code
type(True)
def my_dnn(X, Y, layers, lr=0.005, niters=100, verbose=False, savefigs=False, cmap = plt.cm.Spectral):
"""
Create a model using "layers" and train the model on X & y.
Parameters:
X (<numpy.ndarray>): input samples
Y (<numpy.ndarray>): samples expected output
layers (tuple): tuple of layers' number of nodes and activation layers(including input layer) tuples
((10, ''), (5, 'relu'), (1, 'sigmoid'))
lr (float): learning rate
niters (int): number of iterations
verbose (bool): displays cost while training
savefigs (bool): visualizes model output while training (use for 2D inputs)
cmap (<maplotlib.pyplot.cm>): colormap for visualization (use when savefigs=True)
Returns:
params (dict): dictionary containing trained weights and bias per layer
"Wn": <numpy.ndarray> weights for layer n
"bn": <numpy.ndarray> bias for layer n
"An": (<function>): activation function
"""
if Y.ndim == 1:
Y = np.reshape(Y, (1, -1))
ndigits = len(str(niters))
ndisplays = (niters//10)
nsaves = (niters//20)
costs = []
params = init_params(layers)
    print(score(X, Y, params))
for i in range(niters+1):
A, caches = forward_propagate(X, params)
cost = compute_cost(A, Y)
grads = backward_propagate(A, Y, caches)
params = update_params(params, grads, lr)
if verbose and not (i % ndisplays):
print(f"Cost at i={i:0{ndigits}} = {cost:.4f}")
if savefigs and not (i % nsaves):
plot_decision_boundary(lambda x: predict(x.T, params), X, Y, figname=f"plt{i:0{ndigits}}", cmap=cmap)
costs.append(cost)
    print(score(X, Y, params))
return params
def score(X, y, params):
"""
Calculates score of samples using passed parameters.
Parameters:
X (<numpy.ndarray>): samples
Y (<numpy.ndarray>): samples expected output
params (dict): dictionary containing trained weights and bias per layer
"Wn": <numpy.ndarray> weights for layer n
"bn": <numpy.ndarray> bias for layer n
"An": (<function>): activation function
Returns:
score (float): model's score/accuracy (0 -> 1)
"""
score = np.mean(predict(X, params)==y)
return score
def predict(X, params):
"""
    Predicts binary outputs for X by forward propagating through the
    trained parameters and thresholding the final activation at 0.5.
Parameters:
X (<numpy.ndarray>): input samples
params (dict): dictionary containing weights and bias per layer
"Wn": <numpy.ndarray> weights for layer n
"bn": <numpy.ndarray> bias for layer n
"An": (<function>): activation function
Returns:
Yh (<numpy.ndarray>): predicted output (y_hat)
"""
A2, cache = forward_propagate(X, params)
Yh = (A2 > 0.5)
return Yh
def plot_decision_boundary(model, X, y, figname=None, cmap = plt.cm.Spectral):
"""
Plots model output.
"""
# Set min and max values and give it some padding
x_min, x_max = X[0, :].min() - 1, X[0, :].max() + 1
y_min, y_max = X[1, :].min() - 1, X[1, :].max() + 1
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole grid
Z = model(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=cmap)
plt.ylabel('x2')
plt.xlabel('x1')
plt.scatter(X[0, :], X[1, :], c=y, cmap=cmap)
if figname:
plt.title(f"Decision Boundary on epoch: {int(figname[3:])}")
plt.savefig(f"plots/{figname}")
def load_planar_dataset():
np.random.seed(1)
m = 400 # number of examples
N = int(m/2) # number of points per class
D = 2 # dimensionality
X = np.zeros((m,D)) # data matrix where each row is a single example
Y = np.zeros((m,1), dtype='uint8') # labels vector (0 for red, 1 for blue)
a = 4 # maximum ray of the flower
for j in range(2):
ix = range(N*j,N*(j+1))
t = np.linspace(j*3.12,(j+1)*3.12,N) + np.random.randn(N)*0.2 # theta
r = a*np.sin(4*t) + np.random.randn(N)*0.2 # radius
X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
Y[ix] = j
X = X.T
Y = Y.T
return X, Y
X, y = load_planar_dataset()
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=y, s=40, cmap=plt.cm.Spectral);
params = my_dnn(X, y,
((2, ''), *((5, 'tanh'),) * 3, (1, 'sigmoid')),
lr = 0.1,
niters=5000,
verbose=True,
savefigs=True)
plot_decision_boundary(lambda x: predict(x.T, params), X, y)
X, y = datasets.make_moons(800, noise=0.12, random_state=10)
X = X.T
fig, ax = plt.subplots(figsize=(6, 6))
plt.xlabel("X0", fontsize=20)
plt.ylabel("X1", fontsize=20)
plt.scatter(X[0,:], X[1,:], s=60, c=y)
params = my_dnn(X, y,
((2, ''), *((5, 'tanh'),) * 3, (1, 'sigmoid')),
lr = 0.1,
niters=5000,
verbose=True,
savefigs=True)
plot_decision_boundary(lambda x: predict(x.T, params), X, y)
# plot_color_gradients is a helper from the matplotlib colormap reference docs and is
# not defined in this notebook, so the call is left commented out:
# plot_color_gradients('Diverging',
#                      ['PiYG', 'PRGn', 'BrBG', 'PuOr', 'RdGy', 'RdBu', 'RdYlBu',
#                       'RdYlGn', 'Spectral', 'coolwarm', 'bwr', 'seismic'])
# Two-class Gaussian-quantiles dataset (concentric rings)
X, y = datasets.make_gaussian_quantiles(mean=None, cov=0.5, n_samples=400, n_features=2, n_classes=2, shuffle=True, random_state=None)
X = X.T
#
# Create the plot
#
fig, ax = plt.subplots(figsize=(6, 6))
plt.xlabel("X0", fontsize=20)
plt.ylabel("X1", fontsize=20)
plt.scatter(X[0,:], X[1,:], s=60, c=y)
params = my_dnn(X, y,
((2, ''), *((5, 'tanh'),) * 3, (1, 'sigmoid')),
lr = 0.1,
niters=5000,
verbose=True,
                savefigs=True)
plot_decision_boundary(lambda x: predict(x.T, params), X, y)
###Output
0.535
Cost at i=0000 = 0.6891
Cost at i=0500 = 0.6237
Cost at i=1000 = 0.5741
Cost at i=1500 = 0.5584
Cost at i=2000 = 0.4659
Cost at i=2500 = 0.1316
Cost at i=3000 = 0.0502
Cost at i=3500 = 0.0311
Cost at i=4000 = 0.0227
Cost at i=4500 = 0.0179
Cost at i=5000 = 0.0148
1.0
|
2. Style sheets.ipynb | ###Markdown
Data Visualization Style sheets Bruno Gonçalves www.data4sci.com @bgoncalves, @data4sci
###Code
import numpy as np
from pprint import pprint
import matplotlib
import matplotlib.pyplot as plt
import watermark
%load_ext watermark
%matplotlib inline
%watermark -n -v -m -g -iv
###Output
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
Compiler : Clang 10.0.0
OS : Darwin
Release : 20.3.0
Machine : x86_64
Processor : i386
CPU cores : 16
Architecture: 64bit
Git hash: 16319969764b9d0c9d61909fed335cdc39bc83cd
json : 2.0.9
watermark : 2.1.0
numpy : 1.20.1
matplotlib: 3.3.2
###Markdown
Available styles An alphabetically sorted list of the currently available styles can be found by calling:
###Code
sorted(plt.style.available)
###Output
_____no_output_____
###Markdown
As we can see, there are many available styles, including several defined by __seaborn__, one inspired by __fivethirtyeight__, __ggplot__ and __tableau__. For reference, let's make a simple plot using the default style:
###Code
def make_plot():
x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x)
plt.plot(x, y, label='sin')
plt.xlabel(r'$\theta$')
plt.ylabel(r'$\sin\left(\theta\right)$')
plt.legend()
make_plot()
###Output
_____no_output_____
###Markdown
Using styles To select another style, we just have to call __plt.style.use()__ with the specified name
###Code
plt.style.use(['default', 'fivethirtyeight'])
###Output
_____no_output_____
###Markdown
If we now generate the same figure, its design will be significantly different
###Code
make_plot()
###Output
_____no_output_____
###Markdown
And for the __ggplot__ style
###Code
plt.style.use(['default', 'ggplot'])
make_plot()
plt.style.use(['default', 'tableau-colorblind10'])
make_plot()
###Output
_____no_output_____
###Markdown
And to recover the original style, we use __default__
###Code
plt.style.use('default')
make_plot()
###Output
_____no_output_____
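###Markdown
If we only need a style for a single figure, instead of switching globally and back we can use the context manager that matplotlib provides (added aside, not in the original notebook):
###Code
# the style applies only inside the with-block
with plt.style.context('ggplot'):
    make_plot()
###Output
_____no_output_____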
###Markdown
And the default configuration directory is
###Code
matplotlib.get_configdir()
###Output
_____no_output_____
###Markdown
You can install your custom styles under the __stylelib/__ subdirectory. To see what settings are defined by a specific style, we can consult the __matplotlib.style.library__ dictionary:
###Code
pprint(matplotlib.style.library['fivethirtyeight'])
###Output
{'axes.axisbelow': True,
'axes.edgecolor': 'white',
'axes.facecolor': '#E5E5E5',
'axes.grid': True,
'axes.labelcolor': '#555555',
'axes.labelsize': 'large',
'axes.linewidth': 1.0,
'axes.prop_cycle': cycler('color', ['#E24A33', '#348ABD', '#988ED5', '#777777', '#FBC15E', '#8EBA42', '#FFB5B8']),
'axes.titlesize': 'x-large',
'figure.edgecolor': '0.50',
'figure.facecolor': 'white',
'font.size': 10.0,
'grid.color': 'white',
'grid.linestyle': '-',
'patch.antialiased': True,
'patch.edgecolor': '#EEEEEE',
'patch.facecolor': '#348ABD',
'patch.linewidth': 0.5,
'xtick.color': '#555555',
'xtick.direction': 'out',
'ytick.color': '#555555',
'ytick.direction': 'out'}
###Markdown
The contents of the currently defined rcParams can be directly accessed:
###Code
print(plt.rcParams['figure.figsize'])
###Output
[6.4, 4.8]
###Markdown
or in the case of cyclers (as we have already seen)
###Code
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
print(colors)
###Output
['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf']
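###Markdown
Going the other way, we can also install our own cycler (added example; `cycler` ships with matplotlib):
###Code
from cycler import cycler
# subsequent figures will cycle through these colors
plt.rcParams['axes.prop_cycle'] = cycler(color=['#51a7f9', '#cf51f9', '#70bf41'])
make_plot()
###Output
_____no_output_____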
###Markdown
Any changes we make to __rcParams__ are reflected immediately in any subsequent figure
###Code
plt.rcParams['lines.linewidth'] = 5
make_plot()
###Output
_____no_output_____
###Markdown
Finally, we should note that instead of passing a style name to __plt.style.use()__ we can simply provide a path or even a URL. So to use a style file defined in the current directory, we can simply do:
###Code
plt.style.use(['default', './d4sci.mplstyle'])
make_plot()
###Output
_____no_output_____
###Markdown
And we can see the contents of the file by doing: (this might not work on windows systems)
###Code
!cat d4sci.mplstyle
###Output
# Data For Science style
# Author: Bruno Goncalves <[email protected]>
# Modified from the matplotlib FiveThirtyEight style by
# Author: Cameron Davidson-Pilon, replicated styles from FiveThirtyEight.com
# See https://www.dataorigami.net/blogs/fivethirtyeight-mpl
lines.linewidth: 4
lines.solid_capstyle: butt
legend.fancybox: true
axes.prop_cycle: cycler('color', ['51a7f9', 'cf51f9', '70bf41', 'f39019', 'f9e351', 'f9517b', '6d904f', '8b8b8b','810f7c'])
axes.labelsize: large
axes.axisbelow: true
axes.grid: true
axes.edgecolor: f0f0f0
axes.linewidth: 3.0
axes.titlesize: x-large
patch.edgecolor: f0f0f0
patch.linewidth: 0.5
svg.fonttype: path
grid.linestyle: -
grid.linewidth: 1.0
xtick.major.size: 0
xtick.minor.size: 0
ytick.major.size: 0
ytick.minor.size: 0
font.size: 24.0
savefig.edgecolor: f0f0f0
savefig.facecolor: f0f0f0
figure.subplot.left: 0.08
figure.subplot.right: 0.95
figure.subplot.bottom: 0.07
figure.figsize: 12.8, 8.8
figure.autolayout: True
figure.dpi: 300
###Markdown
Data Visualization Style sheets Bruno Gonçalves www.data4sci.com @bgoncalves, @data4sci
###Code
import numpy as np
from pprint import pprint
import matplotlib
import matplotlib.pyplot as plt
import watermark
%load_ext watermark
%matplotlib inline
%watermark -n -v -m -g -iv
###Output
json 2.0.9
matplotlib 3.1.3
autopep8 1.5
numpy 1.18.1
watermark 2.0.2
Tue May 26 2020
CPython 3.7.3
IPython 6.2.1
compiler : Clang 4.0.1 (tags/RELEASE_401/final)
system : Darwin
release : 19.4.0
machine : x86_64
processor : i386
CPU cores : 8
interpreter: 64bit
Git hash : 6e4864a09b11398800a72c34feeda13987e75b1a
###Markdown
Available styles An alphabetically sorted list of the currently available styles can be found by calling:
###Code
sorted(plt.style.available)
###Output
_____no_output_____
###Markdown
As we can see, there are many available styles, including several defined by __seaborn__, one inspired by __fivethirtyeight__, __ggplot__ and __tableau__. For reference, let's make a simple plot using the default style:
###Code
def make_plot():
x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x)
plt.plot(x, y, label='sin')
plt.xlabel(r'$\theta$')
plt.ylabel(r'$\sin\left(\theta\right)$')
plt.legend()
make_plot()
###Output
_____no_output_____
###Markdown
Using styles To select another style, we just have to call __plt.style.use()__ with the specified name
###Code
plt.style.use(['default', 'fivethirtyeight'])
###Output
_____no_output_____
###Markdown
If we now generate the same figure, its design will be significantly different
###Code
make_plot()
###Output
_____no_output_____
###Markdown
And for the __ggplot__ style
###Code
plt.style.use(['default', 'ggplot'])
make_plot()
plt.style.use(['default', 'tableau-colorblind10'])
make_plot()
###Output
_____no_output_____
###Markdown
And to recover the original style, we use __default__
###Code
plt.style.use('default')
make_plot()
###Output
_____no_output_____
###Markdown
And the default configuration directory is
###Code
matplotlib.get_configdir()
###Output
_____no_output_____
###Markdown
You can install your custom styles under the __stylelib/__ subdirectory. To see what settings are defined by a specific style, we can consult the __matplotlib.style.library__ dictionary:
###Code
pprint(matplotlib.style.library['tableau-colorblind10'])
###Output
{'axes.prop_cycle': cycler('color', ['#006BA4', '#FF800E', '#ABABAB', '#595959', '#5F9ED1', '#C85200', '#898989', '#A2C8EC', '#FFBC79', '#CFCFCF']),
'patch.facecolor': '#006BA4'}
###Markdown
The contents of the currently defined rcParams can be directly accessed:
###Code
print(plt.rcParams['figure.figsize'])
###Output
[6.4, 4.8]
###Markdown
or in the case of cyclers (as we have already seen)
###Code
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
print(colors)
###Output
['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf']
###Markdown
Any changes we make to __rcParams__ are reflected immediately in any subsequent figure
###Code
plt.rcParams['lines.linewidth'] = 5
make_plot()
###Output
_____no_output_____
###Markdown
Finally, we should note that instead of passing a style name to __plt.style.use()__ we can simply provide a path or even a URL. So to use a style file defined in the current directory, we can simply do:
###Code
plt.style.use('./d4sci.mplstyle')
make_plot()
###Output
_____no_output_____
###Markdown
And we can see the contents of the file by doing: (this might not work on windows systems)
###Code
!cat d4sci.mplstyle
###Output
# Data For Science style
# Author: Bruno Goncalves <[email protected]>
# Modified from the matplotlib FiveThirtyEight style by
# Author: Cameron Davidson-Pilon, replicated styles from FiveThirtyEight.com
# See https://www.dataorigami.net/blogs/fivethirtyeight-mpl
lines.linewidth: 4
lines.solid_capstyle: butt
legend.fancybox: true
axes.prop_cycle: cycler('color', ['51a7f9', 'cf51f9', '70bf41', 'f39019', 'f9e351', 'f9517b', '6d904f', '8b8b8b','810f7c'])
axes.labelsize: large
axes.axisbelow: true
axes.grid: true
axes.edgecolor: f0f0f0
axes.linewidth: 3.0
axes.titlesize: x-large
patch.edgecolor: f0f0f0
patch.linewidth: 0.5
svg.fonttype: path
grid.linestyle: -
grid.linewidth: 1.0
xtick.major.size: 0
xtick.minor.size: 0
ytick.major.size: 0
ytick.minor.size: 0
font.size: 24.0
savefig.edgecolor: f0f0f0
savefig.facecolor: f0f0f0
figure.subplot.left: 0.08
figure.subplot.right: 0.95
figure.subplot.bottom: 0.07
figure.figsize: 12.8, 8.8
figure.autolayout: True
figure.dpi: 300
###Markdown
Data Visualization Style sheets Bruno Gonçalves www.data4sci.com @bgoncalves, @data4sci
###Code
import numpy as np
from pprint import pprint
import matplotlib
import matplotlib.pyplot as plt
import watermark
%load_ext watermark
%matplotlib inline
%watermark -n -v -m -g -iv
###Output
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
Compiler : Clang 10.0.0
OS : Darwin
Release : 21.2.0
Machine : x86_64
Processor : i386
CPU cores : 16
Architecture: 64bit
Git hash: da5702883953367d5779fa7646b56652805f2bd6
numpy : 1.19.2
json : 2.0.9
watermark : 2.1.0
matplotlib: 3.3.2
###Markdown
Available styles An alphabetically sorted list of the currently available styles can be found by calling:
###Code
sorted(plt.style.available)
###Output
_____no_output_____
###Markdown
As we can see, there are many available styles, including several defined by __seaborn__, one inspired by __fivethirtyeight__, __ggplot__ and __tableau__. For reference, let's make a simple plot using the default style:
###Code
def make_plot():
x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x)
plt.plot(x, y, label='sin')
plt.xlabel(r'$\theta$')
plt.ylabel(r'$\sin\left(\theta\right)$')
plt.legend()
make_plot()
###Output
_____no_output_____
###Markdown
Using styles To select another style, we just have to call __plt.style.use()__ with the specified name
###Code
plt.style.use(['default', 'fivethirtyeight'])
###Output
_____no_output_____
###Markdown
If we now generate the same figure, its design will be significantly different
###Code
make_plot()
###Output
_____no_output_____
###Markdown
And for the __ggplot__ style
###Code
plt.style.use(['default', 'ggplot'])
make_plot()
plt.style.use(['default', 'tableau-colorblind10'])
make_plot()
###Output
_____no_output_____
###Markdown
And to recover the original style, we use __default__
###Code
plt.style.use('default')
make_plot()
###Output
_____no_output_____
###Markdown
And the default configuration directory is
###Code
matplotlib.get_configdir()
###Output
_____no_output_____
###Markdown
You can install your custom styles under the __stylelib/__ subdirectory. To see what settings are defined by a specific style, we can consult the __matplotlib.style.library__ dictionary:
###Code
pprint(matplotlib.style.library['fivethirtyeight'])
###Output
{'axes.axisbelow': True,
'axes.edgecolor': '#f0f0f0',
'axes.facecolor': '#f0f0f0',
'axes.grid': True,
'axes.labelsize': 'large',
'axes.linewidth': 3.0,
'axes.prop_cycle': cycler('color', ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c']),
'axes.titlesize': 'x-large',
'figure.facecolor': '#f0f0f0',
'figure.subplot.bottom': 0.07,
'figure.subplot.left': 0.08,
'figure.subplot.right': 0.95,
'font.size': 14.0,
'grid.color': '#cbcbcb',
'grid.linestyle': '-',
'grid.linewidth': 1.0,
'legend.fancybox': True,
'lines.linewidth': 4.0,
'lines.solid_capstyle': 'butt',
'patch.edgecolor': '#f0f0f0',
'patch.linewidth': 0.5,
'savefig.edgecolor': '#f0f0f0',
'savefig.facecolor': '#f0f0f0',
'svg.fonttype': 'path',
'xtick.major.size': 0.0,
'xtick.minor.size': 0.0,
'ytick.major.size': 0.0,
'ytick.minor.size': 0.0}
###Markdown
The contents of the currently defined rcParams can be directly accessed:
###Code
print(plt.rcParams['figure.figsize'])
###Output
[6.4, 4.8]
###Markdown
or in the case of cyclers (as we have already seen)
###Code
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
print(colors)
###Output
['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf']
###Markdown
Any changes we make to __rcParams__ are reflected immediately in any subsequent figure
###Code
plt.rcParams['lines.linewidth']
plt.rcParams['lines.linewidth'] = 5
make_plot()
###Output
_____no_output_____
###Markdown
Finally, we should note that instead of passing a style name to __plt.style.use()__ we can simply provide a path or even a URL. So to use a style file defined in the current directory, we can simply do:
###Code
plt.style.use(['default', './d4sci.mplstyle'])
make_plot()
###Output
_____no_output_____
###Markdown
And we can see the contents of the file by doing: (this might not work on windows systems)
###Code
!cat d4sci.mplstyle
###Output
# Data For Science style
# Author: Bruno Goncalves <[email protected]>
# Modified from the matplotlib FiveThirtyEight style by
# Author: Cameron Davidson-Pilon, replicated styles from FiveThirtyEight.com
# See https://www.dataorigami.net/blogs/fivethirtyeight-mpl
lines.linewidth: 4
lines.solid_capstyle: butt
legend.fancybox: true
axes.prop_cycle: cycler('color', ['51a7f9', 'cf51f9', '70bf41', 'f39019', 'f9e351', 'f9517b', '6d904f', '8b8b8b','810f7c'])
axes.labelsize: large
axes.axisbelow: true
axes.grid: true
axes.edgecolor: f0f0f0
axes.linewidth: 3.0
axes.titlesize: x-large
patch.edgecolor: f0f0f0
patch.linewidth: 0.5
svg.fonttype: path
grid.linestyle: -
grid.linewidth: 1.0
xtick.major.size: 0
xtick.minor.size: 0
ytick.major.size: 0
ytick.minor.size: 0
font.size: 24.0
savefig.edgecolor: f0f0f0
savefig.facecolor: f0f0f0
figure.subplot.left: 0.08
figure.subplot.right: 0.95
figure.subplot.bottom: 0.07
figure.figsize: 12.8, 8.8
figure.autolayout: True
figure.dpi: 300
###Markdown
Data Visualization Style sheets Bruno Gonçalves www.data4sci.com @bgoncalves, @data4sci
###Code
import numpy as np
from pprint import pprint
import matplotlib
import matplotlib.pyplot as plt
import watermark
%load_ext watermark
%matplotlib inline
%watermark -n -v -m -g -iv
###Output
json 2.0.9
matplotlib 3.1.3
watermark 2.0.2
numpy 1.18.1
autopep8 1.5
Mon Jun 01 2020
CPython 3.7.3
IPython 6.2.1
compiler : Clang 4.0.1 (tags/RELEASE_401/final)
system : Darwin
release : 19.4.0
machine : x86_64
processor : i386
CPU cores : 8
interpreter: 64bit
Git hash : f388954fe48cfc6df2a30721b0c4d3dfcad2dae5
###Markdown
Available styles An alphabetically sorted list of the currently available styles can be found by calling:
###Code
sorted(plt.style.available)
###Output
_____no_output_____
###Markdown
As we can see, there are many available styles, including several defined by __seaborn__, one inspired by __fivethirtyeight__, __ggplot__ and __tableau__. For reference, let's make a simple plot using the default style:
###Code
def make_plot():
x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x)
plt.plot(x, y, label='sin')
plt.xlabel(r'$\theta$')
plt.ylabel(r'$\sin\left(\theta\right)$')
plt.legend()
make_plot()
###Output
_____no_output_____
###Markdown
Using styles To select another style, we just have to call __plt.style.use()__ with the specified name
###Code
plt.style.use(['default', 'fivethirtyeight'])
###Output
_____no_output_____
###Markdown
If we now generate the same figure, its design will be significantly different
###Code
make_plot()
###Output
_____no_output_____
###Markdown
And for the __ggplot__ style
###Code
plt.style.use(['default', 'ggplot'])
make_plot()
plt.style.use(['default', 'tableau-colorblind10'])
make_plot()
###Output
_____no_output_____
###Markdown
And to recover the original style, we use __default__
###Code
plt.style.use('default')
make_plot()
###Output
_____no_output_____
###Markdown
And the default configuration directory is
###Code
matplotlib.get_configdir()
###Output
_____no_output_____
###Markdown
You can install your custom styles under the __stylelib/__ subdirectory. To see what settings are defined by a specific style, we can consult the __matplotlib.style.library__ dictionary:
###Code
pprint(matplotlib.style.library['tableau-colorblind10'])
###Output
{'axes.prop_cycle': cycler('color', ['#006BA4', '#FF800E', '#ABABAB', '#595959', '#5F9ED1', '#C85200', '#898989', '#A2C8EC', '#FFBC79', '#CFCFCF']),
'patch.facecolor': '#006BA4'}
###Markdown
The contents of the currently defined rcParams can be directly accessed:
###Code
print(plt.rcParams['figure.figsize'])
###Output
[6.4, 4.8]
###Markdown
or in the case of cyclers (as we have already seen)
###Code
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
print(colors)
###Output
['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf']
###Markdown
Any changes we make to __rcParams__ are reflected immediately in any subsequent figure
###Code
plt.rcParams['lines.linewidth'] = 5
make_plot()
###Output
_____no_output_____
###Markdown
Finally, we should note that instead of passing a style name to __plt.style.use()__ we can simply provide a path or even a URL. So to use a style file defined in the current directory, we can simply do:
###Code
plt.style.use(['default', './d4sci.mplstyle'])
make_plot()
###Output
_____no_output_____
###Markdown
And we can see the contents of the file by doing: (this might not work on windows systems)
###Code
!cat d4sci.mplstyle
###Output
# Data For Science style
# Author: Bruno Goncalves <[email protected]>
# Modified from the matplotlib FiveThirtyEight style by
# Author: Cameron Davidson-Pilon, replicated styles from FiveThirtyEight.com
# See https://www.dataorigami.net/blogs/fivethirtyeight-mpl
lines.linewidth: 4
lines.solid_capstyle: butt
legend.fancybox: true
axes.prop_cycle: cycler('color', ['51a7f9', 'cf51f9', '70bf41', 'f39019', 'f9e351', 'f9517b', '6d904f', '8b8b8b','810f7c'])
axes.labelsize: large
axes.axisbelow: true
axes.grid: true
axes.edgecolor: f0f0f0
axes.linewidth: 3.0
axes.titlesize: x-large
patch.edgecolor: f0f0f0
patch.linewidth: 0.5
svg.fonttype: path
grid.linestyle: -
grid.linewidth: 1.0
xtick.major.size: 0
xtick.minor.size: 0
ytick.major.size: 0
ytick.minor.size: 0
font.size: 24.0
savefig.edgecolor: f0f0f0
savefig.facecolor: f0f0f0
figure.subplot.left: 0.08
figure.subplot.right: 0.95
figure.subplot.bottom: 0.07
figure.figsize: 12.8, 8.8
figure.autolayout: True
figure.dpi: 300
|
notebooks/machine-learning.ipynb | ###Markdown
**ViA / Grado IngInf** course 2018-19 *[Alberto Ruiz](http://dis.um.es/profesores/alberto)* --- Machine Learning scikit-learn Some simple algorithms could be programmed from scratch if we had a bit more time. In our case it is preferable to practice with the excellent [scikit-learn](http://scikit-learn.org/stable/) library. It is very easy to use. For example, to train a decision tree on the classic [IRIS](https://en.wikipedia.org/wiki/Iris_flower_data_set) flower classification problem, we do the following:
###Code
from sklearn import datasets
dataset = datasets.load_iris()
# dataset.keys()
# print(dataset['DESCR'])
###Output
_____no_output_____
###Markdown
We train a [decision tree](https://en.wikipedia.org/wiki/Decision_tree_learning) with part of the examples, holding out the rest to evaluate its quality.
###Code
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
(train_data , test_data,
train_labels, test_labels) = train_test_split(dataset.data, dataset.target)
model = DecisionTreeClassifier()
model.fit(train_data, train_labels)
print(model)
###Output
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False, random_state=None,
splitter='best')
###Markdown
We can now classify new cases:
###Code
model.predict([ [6 , 3 , 3 , 1.5] ])
###Output
_____no_output_____
###Markdown
An object with that attribute vector is classified into class 1, which corresponds to the flower *Iris-Versicolour*. Finally, we evaluate the quality of the resulting model on the test examples.
###Code
from sklearn import metrics
expected = test_labels
predicted = model.predict(test_data)
print(metrics.classification_report(expected, predicted))
print(metrics.confusion_matrix(expected, predicted))
###Output
precision recall f1-score support
0 1.00 1.00 1.00 16
1 0.75 0.90 0.82 10
2 0.90 0.75 0.82 12
micro avg 0.89 0.89 0.89 38
macro avg 0.88 0.88 0.88 38
weighted avg 0.90 0.89 0.89 38
[[16 0 0]
[ 0 9 1]
[ 0 3 9]]
###Markdown
The result depends on the random split of the examples, but normally almost all of them are classified correctly. It is actually a very easy classification problem. MNIST dataset Our goal is to build a system that recognizes handwritten digits in images taken with a camera. For this we will use the well-known MNIST database: http://yann.lecun.com/exdb/mnist/ *machine learning hello world*
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import numpy.linalg as la
mnist = np.load("data/mnist.npz")
list(mnist.keys())
xl,yl,xt,yt = [mnist[d] for d in ['xl', 'yl', 'xt', 'yt']]
cl = np.argmax(yl,axis=1)
ct = np.argmax(yt,axis=1)
print(xl.shape, yl.shape, cl.shape)
print(xt.shape, yt.shape, ct.shape)
def shdig(v):
x = np.reshape(v,[28,28])
plt.imshow(1-x, 'gray', vmin=0, vmax=1, interpolation="nearest");
shdig(xl[5])
def muestrario(imgs,n=10):
N = len(imgs)
c = N // n
r = N % n
L = imgs + [np.zeros_like(imgs[0]) for k in range(n-r)]
return np.vstack([ np.hstack([ x for x in L[n*k : n*(k+1)]]) for k in range(c if n*c==N else c+1)])
plt.figure(figsize=(8,8))
plt.imshow(-muestrario([x.reshape(28,28) for x in xl[:100]]),'gray');
plt.axis('off');
shdig(xl[68])
print(yl[68])
print(cl[68])
###Output
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
7
###Markdown
Dimensionality reduction The dimension of the feature vectors is relatively large (28x28=784). Using [principal component analysis (PCA)](https://en.wikipedia.org/wiki/Principal_component_analysis) that dimension can be reduced without losing too much information.
###Code
from sklearn import decomposition
pca = decomposition.PCA(n_components=20)
pca.fit(xl)
comprime = pca.transform
descomprime = pca.inverse_transform
tr = comprime(xl)
###Output
_____no_output_____
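###Markdown
To check how much information the 20 components keep, we can look at the explained variance ratio (added check, not in the original notebook):
###Code
# fraction of the total variance captured by the 20 principal components
print(pca.explained_variance_ratio_.sum())
###Output
_____no_output_____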
###Markdown
2D projection
###Code
plt.figure(figsize=(6,6))
plt.plot(*tr[cl!=1][:,[0,1]].T,'.',markerSize=1,alpha=0.1,color='gray');
plt.plot(*tr[cl==1][:,[0,1]].T,'.',markerSize=1,alpha=0.2,color='blue');
plt.figure(figsize=(6,6))
plt.plot(*tr[(cl!=3) & (cl!=8)][:,[0,1]].T,'.',markerSize=1,alpha=0.1,color='gray');
plt.plot(*tr[cl==3][:,[0,1]].T,'.',markerSize=1,alpha=0.2,color='blue');
plt.plot(*tr[cl==8][:,[0,1]].T,'.',markerSize=1,alpha=0.2,color='red');
###Output
_____no_output_____
###Markdown
Reconstruction quality
###Code
k = 2
plt.figure(figsize=(10,5))
plt.subplot(121)
shdig(xl[k])
plt.subplot(122)
shdig(descomprime(comprime([xl[k]]))[0])
###Output
_____no_output_____
###Markdown
Modes of variation
###Code
treses = xl[cl==3]
print(treses.shape)
shdig(treses[0])
plt.figure(figsize=(8,8))
plt.imshow(-np.bmat([[ x.reshape(28,28) for x in treses[10*k:10*(k+1)] ]
for k in range(10)]),'gray'); plt.axis('off');
M = np.mean(treses,axis=0)
shdig(M)
C = np.cov(treses.T)
l,V = np.linalg.eigh(C)
V = np.flipud(V.T)
plt.figure(figsize=(12,4))
plt.imshow(-np.bmat([[ (V[k]).reshape(28,28) for k in range(10)]]),'gray'); plt.axis('off');
shdig(M + 3*V[0])
r = np.linspace(-7,7,11)
plt.imshow(np.bmat([[ (M + a*V[0]).reshape(28,28) for a in r]]),'gray');
plt.figure(figsize=(12,4))
plt.imshow(1-np.bmat([[ (M + a*V[0]).reshape(28,28) for a in r]]),'gray',vmin=0,vmax=1);
plt.axis('off');
plt.figure(figsize=(12,4))
plt.imshow(1-np.bmat([[ (M + a*V[1]).reshape(28,28) for a in r]]),'gray',vmin=0,vmax=1);
plt.axis('off');
plt.figure(figsize=(8,8))
plt.imshow(1-np.bmat([[ (M + a*V[0] + b*V[1]).reshape(28,28) for a in r] for b in r]),'gray',vmin=0,vmax=1);
plt.axis('off');
###Output
_____no_output_____
###Markdown
Gaussian classifier We use scikit-learn to build a classifier based on Gaussian class models and dimensionality reduction via principal components (PCA).
###Code
from sklearn import random_projection, decomposition, naive_bayes, discriminant_analysis
from sklearn.metrics import confusion_matrix
def acc(maq,x,y):
return 100*(y == maq.predict(x)).sum() / len(y)
#transformer = random_projection.GaussianRandomProjection(n_components=60).fit(xl)
transformer = decomposition.PCA(n_components=40).fit(xl)
xrl = transformer.transform(xl)
xrt = transformer.transform(xt)
###Output
_____no_output_____
###Markdown
Un clasificador "naive Bayes" tiene más de un 12% de errores, mientras que el gaussiano completo consigue menos de 4%:
###Code
gnb = naive_bayes.GaussianNB()
maq = gnb.fit(xrl, cl)
acc(maq,xrt,ct)
maq = discriminant_analysis.QuadraticDiscriminantAnalysis(store_covariance=True).fit(xrl,cl)
acc(maq,xrt,ct)
confusion_matrix(ct, maq.predict(xrt))
###Output
_____no_output_____
###Markdown
We can classify any image in the appropriate 28x28 format:
###Code
dig = xt[1234]
shdig(dig)
maq.predict(transformer.transform(dig.reshape(1,-1)))
###Output
_____no_output_____
###Markdown
(We call `reshape` because the machine classifies sets of feature vectors given as rows of a matrix.) Real image For the classifiers to work well with real images, it is necessary to [normalize them](http://yann.lecun.com/exdb/mnist/) so that they have the same size and position as the training examples.
###Code
import cv2 as cv
digits = cv.cvtColor(cv.imread('images/mydigits.png'),cv.COLOR_BGR2RGB);
plt.imshow(digits);
ret, gt = cv.threshold(cv.cvtColor(digits,cv.COLOR_RGB2GRAY),189,255,cv.THRESH_BINARY+cv.THRESH_OTSU)
plt.imshow(gt,'gray');
def center(p):
r,c = p.shape
rs = np.outer(range(r),np.ones(c))
cs = np.outer(np.ones(r),range(c))
s = np.sum(p)
my = np.sum(p*rs) / s
mx = np.sum(p*cs) / s
return mx,my
def boundingBox(c):
(x1, y1), (x2, y2) = c.min(0), c.max(0)
return (x1, y1), (x2, y2)
def adaptsize(x):
h,w = x.shape
s = max(h,w)
h2 = (s-h)//2
w2 = (s-w)//2
if h2==0:
z1 = np.zeros([s,w2])
z2 = np.zeros([s,s-w-w2])
y = np.hstack([z1,x,z2])
else:
z1 = np.zeros([h2,s])
z2 = np.zeros([s-h-h2,s])
y = np.vstack([z1,x,z2])
y = cv.resize(y,(20,20))/255
mx,my = center(y)
H = np.array([[1.,0,4-(mx-9.5)],[0,1,4-(my-9.5)]])
return cv.warpAffine(y,H,(28,28))
a,contours,b = cv.findContours(255-gt, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
ok = [ boundingBox(x.reshape(len(x),2)) for x in contours ]
ok = [ adaptsize(255-gt[y1:y2,x1:x2]) for (x1,y1),(x2,y2) in ok ]
plt.imshow(-ok[3],'gray');
###Output
_____no_output_____
###Markdown
Once this is done they can be fed to the classifier just as before:
###Code
dig = ok[1].flatten()
shdig(dig)
maq.predict(transformer.transform(dig.reshape(1,-1)))
digits = np.array(ok).reshape(-1,28*28)
plt.imshow(-np.hstack([x.reshape(28,28) for x in ok]),'gray'); plt.axis('off');
maq.predict(transformer.transform(digits))
###Output
_____no_output_____
###Markdown
Validity of the Gaussian model If the Gaussian model of the class distribution is correct, we could generate realistic synthetic samples. Synthetic samples
###Code
C = np.array([[4,-3],[-3,5]])
if False:
kk = np.random.multivariate_normal((0,0),C,1000)
else:
    CC = np.linalg.cholesky(C) # careful: Cholesky factor, note the transpose below
kk = np.random.randn(1000,2) @ CC.T
plt.figure(figsize=(4,4))
plt.plot(*kk.T,'.');
plt.axis('equal');
print(np.mean(kk,axis=0))
print(np.cov(kk.T))
from sklearn import decomposition
selected = xl[cl==3]
pca = decomposition.PCA(n_components=5)
pca.fit(selected)
#pca.fit(xl)
tr = pca.transform(selected)
k = 5
plt.figure(figsize=(8,4))
plt.subplot(121)
shdig(selected[k])
plt.axis('off');
plt.subplot(122)
shdig(pca.inverse_transform(tr[[k]])[0])
plt.axis('off');
M = np.mean(tr,axis=0)
C = np.cov(tr.T)
plt.figure(figsize=(12,4))
plt.imshow(1-np.bmat([[ pca.inverse_transform([np.random.multivariate_normal(M,C)])[0].reshape(28,28) for _ in range(11)]]),'gray',vmin=0,vmax=1);
plt.axis('off');
###Output
_____no_output_____
###Markdown
Another possibility is to make a [QQ plot](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot) to graphically compare the distribution of Mahalanobis distances, which should be chi-squared. Test case with a true Gaussian:
###Code
from scipy.stats import chi2
df = 10
data = np.sum(np.random.randn(1000,df)**2,axis=1)
rv = chi2(df)
x = sorted(data)
n = len(x)
y = np.linspace(1/n,1,n)
y = np.arange(n)/n
plt.figure(figsize=(12,12))
plt.subplot(221)
plt.hist(data,bins=20,edgecolor='black',density=True);
X = np.linspace(min(data),max(data),50)
plt.plot(X,rv.pdf(X));
plt.subplot(222)
plt.plot(x, rv.cdf(x), lw=7,color='gray');
plt.plot(x,y,color='black');
plt.subplot(223)
plt.plot(y,rv.cdf(x));
plt.plot([0,1],[0,1],'gray',lw=5,alpha=0.3)
plt.axis('equal'); plt.title('PP Plot')
plt.subplot(224)
plt.plot(x, rv.ppf(y))
mn = np.min(x)
mx = np.max(x)
plt.plot([mn,mx],[mn,mx],'gray',lw=5,alpha=0.3)
plt.axis('equal'); plt.title('QQ Plot');
#print(mn,mx)
###Output
_____no_output_____
###Markdown
With the selected digits:
###Code
def distMah2(m,ic,v):
return (v-m) @ ic @ (v-m)
def dm(m,c):
ic = np.linalg.inv(c)
return lambda v: distMah2(m,ic,v)
d = dm(M,C)
data = [d(x) for x in tr]
df = len(M)
rv = chi2(df)
x = sorted(data)
n = len(x)
y = np.linspace(1/n,1,n)
y = np.arange(n)/n
plt.figure(figsize=(12,12))
plt.subplot(221)
plt.hist(data,bins=20,edgecolor='black',density=True);
X = np.linspace(min(data),max(data),50)
plt.plot(X,rv.pdf(X));
plt.subplot(222)
plt.plot(x, rv.cdf(x), lw=7,color='gray');
plt.plot(x,y,color='black');
plt.subplot(223)
plt.plot(y,rv.cdf(x));
plt.plot([0,1],[0,1],'gray',lw=5,alpha=0.3)
plt.axis('equal'); plt.title('PP Plot')
plt.subplot(224)
plt.plot(x, rv.ppf(y))
mn = np.min(x)
mx = np.max(x)
plt.plot([mn,mx],[mn,mx],'gray',lw=5,alpha=0.3)
plt.axis('equal'); plt.title('QQ Plot');
#print(mn,mx)
###Output
_____no_output_____
###Markdown
It is not exactly normal. Even so, if the clouds do not overlap too much the classifier will behave well. Extreme objects
###Code
raro=np.argmax(data)
shdig(selected[raro])
raros = sorted(range(len(selected)),key=lambda k:d(tr[k]))
plt.figure(figsize=(12,4))
plt.imshow(1-np.bmat([[ selected[raros[-k]].reshape(28,28) for k in range(1,11)]]),'gray',vmin=0,vmax=1);
plt.axis('off');
###Output
_____no_output_____
###Markdown
Regularization To achieve **generalization** it is necessary to control the capacity of the learning machines. We will illustrate this principle with a linear machine. We select two classes and set the desired machine outputs to +1 and -1:
###Code
n = 100
ca = 4
cb = 9
# select the positions of the two classes we are interested in
sel_l = (cl == ca) | (cl==cb)
sel_t = (ct == ca) | (ct==cb)
# extract those positions
# x and y selected for learning
# we will use only the first n examples for training
xsl = xl[sel_l][:n]
ysl = cl[sel_l].astype(int)[:n]
# and set the desired values correctly, positive or negative
ysl[ysl==ca] = 1
ysl[ysl==cb] = -1
# and the same for the x and y selected for test (independent evaluation)
xst = xt[sel_t]
yst = ct[sel_t].astype(int)
yst[yst==ca] = 1
yst[yst==cb] = -1
np.sum(sel_l)
def shdig(v):
x = np.reshape(v,[28,28])
plt.imshow(1-x, 'gray', vmin=0, vmax=1, interpolation="nearest");
k1,k2 = 55, 56
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
shdig(xsl[k1])
plt.title(ysl[k1])
plt.subplot(1,2,2)
shdig(xsl[k2])
plt.title(ysl[k2]);
xsl.shape
yst
###Output
_____no_output_____
###Markdown
convenient helper to add the independent term (offset) to a linear machine
###Code
def homog(x):
r,c = x.shape
return np.hstack([x, np.ones([r,1])])
###Output
_____no_output_____
###Markdown
least-squares solution of a linear system We want to find $W$ such that `xsl @ W = ysl`, i.e. solve $X w = y$. We use `lstsq` from the linear-algebra module `numpy.linalg`, which obtains the minimum-squared-error solution of a system (see the [systems of equations](sistecs.ipynb) notebook). `lstsq` is not ideal for showing this effect in the unregularized case, because for underdetermined systems it returns the minimum-norm solution and therefore also regularizes.
###Code
W,_,_,_ = la.lstsq(homog(xsl),ysl)
#W
#homog(xsl) @ W
#np.sign(homog(xsl) @ W) == np.sign(ysl)
###Output
_____no_output_____
###Markdown
we count the correct predictions
###Code
np.sum(np.sign(homog(xsl) @ W) == np.sign(ysl)), len(ysl)
###Output
_____no_output_____
###Markdown
It looks good: it gets all the training examples right.
###Code
np.sign(homog(xst) @ W) == np.sign(yst)
np.sum(np.sign(homog(xst) @ W) == np.sign(yst)), len(yst)
k1,k2 = 55, 56
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
shdig(xsl[k1])
plt.title((homog(xsl) @ W)[k1])
plt.subplot(1,2,2)
shdig(xsl[k2])
plt.title((homog(xsl) @ W)[k2]);
###Output
_____no_output_____
###Markdown
It reproduces exactly the desired values $\pm 1$, since it has more degrees of freedom (adjustable coefficients) than constraints (equations, i.e. the number of training examples). This inspires little confidence in its behavior on unseen examples:
###Code
k1,k2 = 70, 55
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
shdig(xst[k1])
plt.title((homog(xst) @ W)[k1])
plt.subplot(1,2,2)
shdig(xst[k2])
plt.title((homog(xst) @ W)[k2]);
###Output
_____no_output_____
###Markdown
We will build a regularized solution, which penalizes the size of the coefficients with a weight $\lambda$ so that the interpolation of irrelevant details is reduced. The regularized solution is very similar to the least-squares one, but the covariance $X^TX$ must be "inflated" with $\lambda$. Instead of $w = (X^T X)^{-1} X^T y$ (this is what lstsq does internally, the "pseudoinverse" of X times y) we compute $w = (X^T X + \lambda I)^{-1} X^T y$.
###Code
lam = 2E2
D = np.diag(lam*np.ones([784+1]))
D[-1,-1] = 0
# the coefficient b is not regularized,
# because the hyperplane can be located anywhere; there is no reason to
# push it towards the origin
#D
xh = homog(xsl)
Wr = la.solve(xh.T @ xh + D, xh.T @ ysl)
np.sum(np.sign(homog(xsl) @ Wr) == np.sign(ysl)), len(ysl)
np.sum(np.sign(homog(xst) @ Wr) == np.sign(yst)), len(yst)
###Output
_____no_output_____
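###Markdown
The same regularized solution can be cross-checked with scikit-learn's `Ridge` estimator (added aside, not in the original notebook; `alpha` plays the role of $\lambda$ and the intercept is fitted but not penalized, so the result should essentially match the closed form above):
###Code
from sklearn.linear_model import Ridge
ridge = Ridge(alpha=lam, fit_intercept=True).fit(xsl, ysl)
# count correct sign predictions on the test digits
np.sum(np.sign(ridge.predict(xst)) == np.sign(yst)), len(yst)
###Output
_____no_output_____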
###Markdown
**Exercise**: plot a curve comparing $E_L$ with $E_T$ for increasing values of $\lambda$.
###Code
Lam = [0.01, 0.1, 1, 5, 10, 50, 100, 200, 500, 1000, 2000, 3000, 5000]
def regu():
xh = homog(xsl)
L = []
T = []
for l in Lam:
lam = 2E2
D = np.diag(l*np.ones([784+1]))
D[-1,-1] = 0
Wr = la.solve(xh.T @ xh + D, xh.T @ ysl)
EL = np.sum(np.sign(homog(xsl) @ Wr) == np.sign(ysl)), len(ysl)
ET = np.sum(np.sign(homog(xst) @ Wr) == np.sign(yst)), len(yst)
L.append(EL[0]/EL[1])
T.append(ET[0]/ET[1])
return 1-np.array(L), 1-np.array(T)
plt.figure(figsize=(8,6))
l,t = regu()
plt.plot(100*l,'o-',label='training',color='red')
plt.plot(100*t,'o-',label='test',color='green')
plt.xticks(np.arange(12), Lam, rotation=45)
plt.legend()
plt.xlabel('$\lambda$'); plt.ylabel('error %')
plt.title('Regularization');
###Output
_____no_output_____
###Markdown
This plot illustrates the fundamental theoretical principle of *machine learning*: **generalization** is related to the **capacity** of the machine. *Adversarial examples* It is possible to synthesize apparently innocent instances that nevertheless confuse the classifier. Gaussian classifier
###Code
from sklearn import decomposition, discriminant_analysis
def acc(maq,x,y):
return 100*(y == maq.predict(x)).sum() / len(y)
transformer = decomposition.PCA(n_components=40).fit(xl)
xrl = transformer.transform(xl)
xrt = transformer.transform(xt)
###Output
_____no_output_____
###Markdown
Un clasificador "naive Bayes" tiene más de un 12% de errores, mientras que el gaussiano completo consigue menos de 4%:
###Code
maq = discriminant_analysis.QuadraticDiscriminantAnalysis(store_covariance=True).fit(xrl,cl)
acc(maq,xrt,ct)
###Output
_____no_output_____
###Markdown
Adversarial examples
###Code
def mkg(transformer,maquina,cl,v):
d0 = transformer.transform([v])[0] - maquina.means_[cl]
d1 = np.linalg.inv(maquina.covariance_[cl]) @ d0
d2 = transformer.inverse_transform(d1)
return d2
cdesired = 5
k = 1234
v0 = xt[k]
v = v0
corig = ct[k]
shdig(v0); plt.title(corig);
redu = transformer.transform([v])
maq.predict_proba(redu)[0][[cdesired,corig]]
for _ in range(10):
g = mkg(transformer, maq, corig, v) - mkg(transformer, maq, cdesired, v)
v = np.clip(v + 0.01*g, 0, 1)
redu = transformer.transform([v])
cp = maq.predict(redu)[0]
if cp != corig: break
shdig(v)
plt.title(cp)
maq.predict_proba(redu)[0][[cdesired,corig]]
shdig(abs(v-v0))
print(np.sum(abs(v-v0)))
###Output
15.84989
###Markdown
Random inputs
###Code
v0 = np.random.rand(28,28).flatten()
shdig(v0)
v = v0
redu = transformer.transform([v])
plt.title(maq.predict(redu)[0]);
maq.predict_proba(redu)[0].max()
cdesired = 0
for _ in range(3):
g = - mkg(transformer, maq, cdesired, v)
v = np.clip(v + 0.01*g, 0, 1)
redu = transformer.transform([v])
cp = maq.predict(redu)[0]
shdig(v)
plt.title(cp)
maq.predict_proba(redu)[0][cdesired]
maq.predict_proba(redu)[0]
shdig(abs(v-v0))
print(np.sum(abs(v-v0)))
###Output
10.46400383834251
###Markdown
Other learning machines Naive Bayes
###Code
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
maq = gnb.fit(xl, cl)
acc(maq,xt,ct)
maq.predict(digits)
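# Added note: forcing every per-feature variance to 1 (next line) effectively turns the
# Gaussian naive Bayes model into a nearest-class-mean classifier in Euclidean distance
# (up to the class priors).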
maq.sigma_ = maq.sigma_ * 0 + 1
acc(maq,xt,ct)
maq.predict(digits)
###Output
_____no_output_____
###Markdown
Support vector machine (SVM)
###Code
from sklearn import svm
classifier = svm.SVC(gamma=0.01, C=0.1)
#classifier = svm.SVC(gamma=0.001)
classifier.kernel
maq = classifier.fit(xl[:5000], cl[:5000])
maq.support_vectors_.shape
acc(maq,xt,ct)
maq.predict(digits)
#import pickle
#s = pickle.dumps(maq)
#from sklearn.externals import joblib
#joblib.dump(maq, 'svm.pkl')
#maq = joblib.load('svm.pkl')
###Output
_____no_output_____
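###Markdown
The `gamma` and `C` values above are fixed by hand. A standard way to tune them is cross-validated grid search; this is an added sketch using a small grid and a subset of the data to keep it fast:
###Code
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(svm.SVC(), {'gamma': [1e-3, 1e-2], 'C': [0.1, 1, 10]}, cv=3, n_jobs=-1)
grid.fit(xl[:2000], cl[:2000])
grid.best_params_, grid.best_score_
###Output
_____no_output_____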
###Markdown
Gradient Boosting
###Code
from sklearn import ensemble
clf = ensemble.GradientBoostingClassifier(subsample=0.1, n_estimators=50, max_features=50, min_samples_split=10)
clf.fit(xl, cl)
clf.score(xl,cl), clf.score(xt,ct)
###Output
_____no_output_____
###Markdown
Random Forest
###Code
clf = ensemble.RandomForestClassifier(n_estimators=100,n_jobs=-1)
clf.fit(xl, cl)
clf.score(xl,cl), clf.score(xt,ct)
###Output
_____no_output_____
###Markdown
CNN Deep convolutional network (see [deep learning](tensorflow.ipynb)).
###Code
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Dropout, Softmax, Flatten
model = Sequential()
model.add(Conv2D(input_shape=(28,28,1), filters=32, kernel_size=(5,5), strides=1,
padding='same', use_bias=True, activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Conv2D(filters=64, kernel_size=(5,5), strides=1,
padding='same', use_bias=True, activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(1024))
model.add(Dropout(rate=0.5))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
if False:
model.fit(xl.reshape(-1,28,28,1), yl, epochs=50, batch_size=500)
#model.save('digits.keras')
else:
#wget https://robot.inf.um.es/material/va/digits.keras
model.load_weights('../data/models/digits.keras')
model.evaluate(xt.reshape(-1,28,28,1),yt, batch_size=500)
plt.imshow(-np.hstack([x.reshape(28,28) for x in ok]),'gray'); plt.axis('off');
model.predict_classes(np.array(ok).reshape(-1,28,28,1))
###Output
_____no_output_____
###Markdown
**ViA / Grado IngInf** course 2018-19 *[Alberto Ruiz](http://dis.um.es/profesores/alberto)* --- Machine Learning scikit-learn Some simple algorithms could be programmed from scratch if we had a bit more time. In our case it is preferable to practice with the excellent [scikit-learn](http://scikit-learn.org/stable/) library. It is very easy to use. For example, to train a decision tree on the classic [IRIS](https://en.wikipedia.org/wiki/Iris_flower_data_set) flower classification problem, we do the following:
###Code
from sklearn import datasets
dataset = datasets.load_iris()
# dataset.keys()
# print(dataset['DESCR'])
###Output
_____no_output_____
###Markdown
We train a [decision tree](https://en.wikipedia.org/wiki/Decision_tree_learning) with part of the examples, holding out the rest to evaluate its quality.
###Code
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
(train_data , test_data,
train_labels, test_labels) = train_test_split(dataset.data, dataset.target)
model = DecisionTreeClassifier()
model.fit(train_data, train_labels)
print(model)
###Output
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False, random_state=None,
splitter='best')
###Markdown
We can now classify new cases:
###Code
model.predict([ [6 , 3 , 3 , 1.5] ])
###Output
_____no_output_____
###Markdown
An object with that attribute vector is classified into class 1, which corresponds to the flower *Iris-Versicolour*. Finally, we evaluate the quality of the resulting model on the test examples.
###Code
from sklearn import metrics
expected = test_labels
predicted = model.predict(test_data)
print(metrics.classification_report(expected, predicted))
print(metrics.confusion_matrix(expected, predicted))
###Output
precision recall f1-score support
0 1.00 1.00 1.00 16
1 0.89 1.00 0.94 16
2 1.00 0.67 0.80 6
micro avg 0.95 0.95 0.95 38
macro avg 0.96 0.89 0.91 38
weighted avg 0.95 0.95 0.94 38
[[16 0 0]
[ 0 16 0]
[ 0 2 4]]
###Markdown
The result depends on the random split of the examples, but normally almost all of them are classified correctly. It is actually a very simple classification problem. MNIST dataset Our goal is to build a system that recognizes handwritten digits in images taken with a camera. For this we will take advantage of the well-known MNIST database: http://yann.lecun.com/exdb/mnist/ *machine learning hello world*
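If the local `../data/mnist.npz` file used below is not available, a similar train/test split can be assembled from scikit-learn's copy of MNIST. This is only a sketch (not part of the original notebook); the array names mirror the `xl`, `yl`, `xt`, `yt` used later:

```python
# Sketch: rebuild arrays shaped like the ones in ../data/mnist.npz from OpenML's MNIST copy.
import numpy as np
from sklearn.datasets import fetch_openml

X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
X = X.astype(np.float32) / 255.0       # pixels scaled to [0, 1]
Y = np.eye(10)[y.astype(int)]          # one-hot labels, like yl / yt
xl, xt = X[:60000], X[60000:]          # conventional 60k/10k split
yl, yt = Y[:60000], Y[60000:]
```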
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import numpy.linalg as la
mnist = np.load("../data/mnist.npz")
list(mnist.keys())
xl,yl,xt,yt = [mnist[d] for d in ['xl', 'yl', 'xt', 'yt']]
cl = np.argmax(yl,axis=1)
ct = np.argmax(yt,axis=1)
print(xl.shape, yl.shape, cl.shape)
print(xt.shape, yt.shape, ct.shape)
def shdig(v):
x = np.reshape(v,[28,28])
plt.imshow(1-x, 'gray', vmin=0, vmax=1, interpolation="nearest");
shdig(xl[5])
def muestrario(imgs,n=10):
N = len(imgs)
c = N // n
r = N % n
L = imgs + [np.zeros_like(imgs[0]) for k in range(n-r)]
return np.vstack([ np.hstack([ x for x in L[n*k : n*(k+1)]]) for k in range(c if n*c==N else c+1)])
plt.figure(figsize=(8,8))
plt.imshow(-muestrario([x.reshape(28,28) for x in xl[:100]]),'gray');
plt.axis('off');
shdig(xl[68])
print(yl[68])
print(cl[68])
###Output
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
7
###Markdown
Dimensionality reduction The dimension of the feature vectors is relatively large (28x28=784). Using [principal component analysis (PCA)](https://en.wikipedia.org/wiki/Principal_component_analysis) this dimension can be reduced without too much loss of information.
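A quick way to choose the number of components (an aside, not in the original notebook) is to look at the cumulative explained variance ratio:

```python
# Sketch: how much variance do the leading principal components retain?
import numpy as np
from sklearn import decomposition

probe = decomposition.PCA(n_components=50).fit(xl)
cum = np.cumsum(probe.explained_variance_ratio_)
print(cum[[9, 19, 39]])   # variance retained by the first 10, 20 and 40 components
```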
###Code
from sklearn import decomposition
pca = decomposition.PCA(n_components=20)
pca.fit(xl)
comprime = pca.transform
descomprime = pca.inverse_transform
tr = comprime(xl)
###Output
_____no_output_____
###Markdown
2D projection
###Code
plt.figure(figsize=(6,6))
plt.plot(*tr[cl!=1][:,[0,1]].T,'.',markerSize=1,alpha=0.1,color='gray');
plt.plot(*tr[cl==1][:,[0,1]].T,'.',markerSize=1,alpha=0.2,color='blue');
plt.figure(figsize=(6,6))
plt.plot(*tr[(cl!=3) & (cl!=8)][:,[0,1]].T,'.',markerSize=1,alpha=0.1,color='gray');
plt.plot(*tr[cl==3][:,[0,1]].T,'.',markerSize=1,alpha=0.2,color='blue');
plt.plot(*tr[cl==8][:,[0,1]].T,'.',markerSize=1,alpha=0.2,color='red');
###Output
_____no_output_____
###Markdown
Reconstruction quality
###Code
k = 2
plt.figure(figsize=(10,5))
plt.subplot(121)
shdig(xl[k])
plt.subplot(122)
shdig(descomprime(comprime([xl[k]]))[0])
###Output
_____no_output_____
###Markdown
Modes of variation
###Code
treses = xl[cl==3]
print(treses.shape)
shdig(treses[0])
plt.figure(figsize=(8,8))
plt.imshow(-np.bmat([[ x.reshape(28,28) for x in treses[10*k:10*(k+1)] ]
for k in range(10)]),'gray'); plt.axis('off');
M = np.mean(treses,axis=0)
shdig(M)
C = np.cov(treses.T)
l,V = np.linalg.eigh(C)
V = np.flipud(V.T)
plt.figure(figsize=(12,4))
plt.imshow(-np.bmat([[ (V[k]).reshape(28,28) for k in range(10)]]),'gray'); plt.axis('off');
shdig(M + 3*V[0])
r = np.linspace(-7,7,11)
plt.imshow(np.bmat([[ (M + a*V[0]).reshape(28,28) for a in r]]),'gray');
plt.figure(figsize=(12,4))
plt.imshow(1-np.bmat([[ (M + a*V[0]).reshape(28,28) for a in r]]),'gray',vmin=0,vmax=1);
plt.axis('off');
plt.figure(figsize=(12,4))
plt.imshow(1-np.bmat([[ (M + a*V[1]).reshape(28,28) for a in r]]),'gray',vmin=0,vmax=1);
plt.axis('off');
plt.figure(figsize=(8,8))
plt.imshow(1-np.bmat([[ (M + a*V[0] + b*V[1]).reshape(28,28) for a in r] for b in r]),'gray',vmin=0,vmax=1);
plt.axis('off');
###Output
_____no_output_____
###Markdown
Gaussian classifier We use scikit-learn to build a classifier based on Gaussian class models and dimensionality reduction via principal components (PCA).
###Code
from sklearn import random_projection, decomposition, naive_bayes, discriminant_analysis
from sklearn.metrics import confusion_matrix
def acc(maq,x,y):
return 100*(y == maq.predict(x)).sum() / len(y)
#transformer = random_projection.GaussianRandomProjection(n_components=60).fit(xl)
transformer = decomposition.PCA(n_components=40).fit(xl)
xrl = transformer.transform(xl)
xrt = transformer.transform(xt)
###Output
_____no_output_____
###Markdown
A "naive Bayes" classifier has an error rate above 12%, while the full Gaussian one achieves less than 4%:
###Code
gnb = naive_bayes.GaussianNB()
maq = gnb.fit(xrl, cl)
acc(maq,xrt,ct)
maq = discriminant_analysis.QuadraticDiscriminantAnalysis(store_covariance=True).fit(xrl,cl)
acc(maq,xrt,ct)
confusion_matrix(ct, maq.predict(xrt))
###Output
_____no_output_____
###Markdown
We can classify any image in the appropriate 28x28 format:
###Code
dig = xt[1234]
shdig(dig)
maq.predict(transformer.transform(dig.reshape(1,-1)))
###Output
_____no_output_____
###Markdown
(The `reshape` is needed because the machine classifies sets of feature vectors arranged as rows of a matrix.) Real image For the classifiers to work well on real images it is necessary to [normalize them](http://yann.lecun.com/exdb/mnist/) so that they have the same size and position as the training examples.
###Code
import cv2 as cv
digits = cv.cvtColor(cv.imread('../images/mydigits.png'),cv.COLOR_BGR2RGB);
plt.imshow(digits);
ret, gt = cv.threshold(cv.cvtColor(digits,cv.COLOR_RGB2GRAY),189,255,cv.THRESH_BINARY+cv.THRESH_OTSU)
plt.imshow(gt,'gray');
def center(p):
r,c = p.shape
rs = np.outer(range(r),np.ones(c))
cs = np.outer(np.ones(r),range(c))
s = np.sum(p)
my = np.sum(p*rs) / s
mx = np.sum(p*cs) / s
return mx,my
def boundingBox(c):
(x1, y1), (x2, y2) = c.min(0), c.max(0)
return (x1, y1), (x2, y2)
def adaptsize(x):
h,w = x.shape
s = max(h,w)
h2 = (s-h)//2
w2 = (s-w)//2
y = x
if w2>0:
z1 = np.zeros([s,w2])
z2 = np.zeros([s,s-w-w2])
y = np.hstack([z1,x,z2])
if h2>0:
z1 = np.zeros([h2,s])
z2 = np.zeros([s-h-h2,s])
y = np.vstack([z1,x,z2])
y = cv.resize(y,(20,20))/255
mx,my = center(y)
H = np.array([[1.,0,4-(mx-9.5)],[0,1,4-(my-9.5)]])
return cv.warpAffine(y,H,(28,28))
contours,_ = cv.findContours(255-gt, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)[-2:]
regions = [ boundingBox(x.reshape(-1,2)) for x in contours ]
raw = [ 255-gt[y1:y2,x1:x2] for (x1,y1),(x2,y2) in regions if x2-x1 > 10 and y2-y1 > 10]
ok = [ adaptsize(x) for x in raw ]
plt.imshow(-ok[3],'gray');
###Output
_____no_output_____
###Markdown
Once this is done they can be used with the classifier just as before:
###Code
dig = ok[1].flatten()
shdig(dig)
maq.predict(transformer.transform(dig.reshape(1,-1)))
digits = np.array(ok).reshape(-1,28*28)
plt.imshow(-np.hstack([x.reshape(28,28) for x in ok]),'gray'); plt.axis('off');
maq.predict(transformer.transform(digits))
###Output
_____no_output_____
###Markdown
Validity of the Gaussian model If the Gaussian model of the class distributions is correct, we should be able to generate realistic synthetic samples. Synthetic samples
###Code
C = np.array([[4,-3],[-3,5]])
if False:
kk = np.random.multivariate_normal((0,0),C,1000)
else:
    CC = np.linalg.cholesky(C) # note: Cholesky factor used to generate correlated samples
kk = np.random.randn(1000,2) @ CC.T
plt.figure(figsize=(4,4))
plt.plot(*kk.T,'.');
plt.axis('equal');
print(np.mean(kk,axis=0))
print(np.cov(kk.T))
from sklearn import decomposition
selected = xl[cl==3]
pca = decomposition.PCA(n_components=5)
pca.fit(selected)
#pca.fit(xl)
tr = pca.transform(selected)
k = 5
plt.figure(figsize=(8,4))
plt.subplot(121)
shdig(selected[k])
plt.axis('off');
plt.subplot(122)
shdig(pca.inverse_transform(tr[[k]])[0])
plt.axis('off');
M = np.mean(tr,axis=0)
C = np.cov(tr.T)
plt.figure(figsize=(12,4))
plt.imshow(1-np.bmat([[ pca.inverse_transform([np.random.multivariate_normal(M,C)])[0].reshape(28,28) for _ in range(11)]]),'gray',vmin=0,vmax=1);
plt.axis('off');
###Output
_____no_output_____
###Markdown
Another possibility is to make a [QQ plot](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot) to compare graphically the distribution of Mahalanobis distances, which should be chi-squared. Below, a test case with a true Gaussian:
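As an aside, scipy can draw this kind of plot directly; a minimal sketch, independent of the variables defined in the next cell:

```python
# Sketch: QQ plot of a chi-squared sample against the chi-squared distribution via probplot.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import chi2, probplot

df = 10
sample = np.sum(np.random.randn(1000, df)**2, axis=1)   # chi2(df) by construction
probplot(sample, dist=chi2(df), plot=plt)
plt.title('QQ plot via scipy.stats.probplot');
```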
###Code
from scipy.stats import chi2
df = 10
data = np.sum(np.random.randn(1000,df)**2,axis=1)
rv = chi2(df)
x = sorted(data)
n = len(x)
y = np.linspace(1/n,1,n)
y = np.arange(n)/n
plt.figure(figsize=(12,12))
plt.subplot(221)
plt.hist(data,bins=20,edgecolor='black',density=True);
X = np.linspace(min(data),max(data),50)
plt.plot(X,rv.pdf(X));
plt.subplot(222)
plt.plot(x, rv.cdf(x), lw=7,color='gray');
plt.plot(x,y,color='black');
plt.subplot(223)
plt.plot(y,rv.cdf(x));
plt.plot([0,1],[0,1],'gray',lw=5,alpha=0.3)
plt.axis('equal'); plt.title('PP Plot')
plt.subplot(224)
plt.plot(x, rv.ppf(y))
mn = np.min(x)
mx = np.max(x)
plt.plot([mn,mx],[mn,mx],'gray',lw=5,alpha=0.3)
plt.axis('equal'); plt.title('QQ Plot');
#print(mn,mx)
###Output
_____no_output_____
###Markdown
With the selected digits:
###Code
def distMah2(m,ic,v):
return (v-m) @ ic @ (v-m)
def dm(m,c):
ic = np.linalg.inv(c)
return lambda v: distMah2(m,ic,v)
d = dm(M,C)
data = [d(x) for x in tr]
df = len(M)
rv = chi2(df)
x = sorted(data)
n = len(x)
y = np.linspace(1/n,1,n)
y = np.arange(n)/n
plt.figure(figsize=(12,12))
plt.subplot(221)
plt.hist(data,bins=20,edgecolor='black',density=True);
X = np.linspace(min(data),max(data),50)
plt.plot(X,rv.pdf(X));
plt.subplot(222)
plt.plot(x, rv.cdf(x), lw=7,color='gray');
plt.plot(x,y,color='black');
plt.subplot(223)
plt.plot(y,rv.cdf(x));
plt.plot([0,1],[0,1],'gray',lw=5,alpha=0.3)
plt.axis('equal'); plt.title('PP Plot')
plt.subplot(224)
plt.plot(x, rv.ppf(y))
mn = np.min(x)
mx = np.max(x)
plt.plot([mn,mx],[mn,mx],'gray',lw=5,alpha=0.3)
plt.axis('equal'); plt.title('QQ Plot');
#print(mn,mx)
###Output
_____no_output_____
###Markdown
It is not exactly normal. Even so, if the point clouds do not overlap too much the classifier will behave well. Extreme objects
###Code
raro=np.argmax(data)
shdig(selected[raro])
raros = sorted(range(len(selected)),key=lambda k:d(tr[k]))
plt.figure(figsize=(12,4))
plt.imshow(1-np.bmat([[ selected[raros[-k]].reshape(28,28) for k in range(1,11)]]),'gray',vmin=0,vmax=1);
plt.axis('off');
###Output
_____no_output_____
###Markdown
Regularization To achieve **generalization** it is necessary to control the capacity of the learning machine. We will illustrate this principle with a linear machine. We select two classes and set the desired outputs of the machine to the values +1 and -1:
###Code
n = 100
ca = 4
cb = 9
# select the positions of the classes we are interested in
sel_l = (cl == ca) | (cl==cb)
sel_t = (ct == ca) | (ct==cb)
# extract those positions
# x and y selected for training
# only the first n examples will be used for training
xsl = xl[sel_l][:n]
ysl = cl[sel_l].astype(int)[:n]
# and set the desired values correctly, positive or negative
ysl[ysl==ca] = 1
ysl[ysl==cb] = -1
# and the same for the x and y selected for test (independent evaluation)
xst = xt[sel_t]
yst = ct[sel_t].astype(int)
yst[yst==ca] = 1
yst[yst==cb] = -1
np.sum(sel_l)
def shdig(v):
x = np.reshape(v,[28,28])
plt.imshow(1-x, 'gray', vmin=0, vmax=1, interpolation="nearest");
k1,k2 = 55, 56
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
shdig(xsl[k1])
plt.title(ysl[k1])
plt.subplot(1,2,2)
shdig(xsl[k2])
plt.title(ysl[k2]);
xsl.shape
yst
###Output
_____no_output_____
###Markdown
convenient for adding the independent term (offset) to a linear machine
###Code
def homog(x):
r,c = x.shape
return np.hstack([x, np.ones([r,1])])
###Output
_____no_output_____
###Markdown
Least-squares solution of a linear system We want to find $W$ such that `xsl @ w = ysl`, that is, to solve $Xw = y$. We use `lstsq` from the linear-algebra module `numpy.linalg`, which obtains the minimum squared-error solution of a system (see the [systems of equations](sistecs.ipynb) notebook). `lstsq` is not ideal for showing this effect in the non-regularized case, because for underdetermined systems it returns the minimum-norm solution and therefore also regularizes.
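For reference, that minimum-norm least-squares solution is exactly what the Moore-Penrose pseudoinverse gives; a quick check (not in the original notebook):

```python
# Sketch: the pseudoinverse solution should essentially coincide with lstsq's result below.
import numpy.linalg as la

W_pinv = la.pinv(homog(xsl)) @ ysl
# after running the next cell, la.norm(W - W_pinv) should be close to 0
```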
###Code
W,_,_,_ = la.lstsq(homog(xsl),ysl)
#W
#homog(xsl) @ W
#np.sign(homog(xsl) @ W) == np.sign(ysl)
###Output
_____no_output_____
###Markdown
we count the correct classifications
###Code
np.sum(np.sign(homog(xsl) @ W) == np.sign(ysl)), len(ysl)
###Output
_____no_output_____
###Markdown
It looks good: it gets all the training examples right.
###Code
np.sign(homog(xst) @ W) == np.sign(yst)
np.sum(np.sign(homog(xst) @ W) == np.sign(yst)), len(yst)
k1,k2 = 55, 56
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
shdig(xsl[k1])
plt.title((homog(xsl) @ W)[k1])
plt.subplot(1,2,2)
shdig(xsl[k2])
plt.title((homog(xsl) @ W)[k2]);
###Output
_____no_output_____
###Markdown
It reproduces the desired values $\pm 1$ exactly, since it has more degrees of freedom (adjustable coefficients) than constraints (equations, i.e. the number of training examples). This inspires little confidence in its behaviour on unseen examples:
###Code
k1,k2 = 70, 55
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
shdig(xst[k1])
plt.title((homog(xst) @ W)[k1])
plt.subplot(1,2,2)
shdig(xst[k2])
plt.title((homog(xst) @ W)[k2]);
###Output
_____no_output_____
###Markdown
We now build a regularized solution, which penalizes the size of the coefficients with a weight $\lambda$ so that the interpolation of irrelevant details is reduced. The regularized solution is very similar to the least-squares one, but the covariance $X^TX$ has to be "inflated" with $\lambda$. Instead of $w = (X^T X)^{-1} X^T y$ (this is what lstsq computes internally, the "pseudoinverse" of X times y) we use $w = (X^T X + \lambda I)^{-1} X^T y$
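The same estimator is also available off the shelf; a sketch using scikit-learn's `Ridge`, where `alpha` plays the role of $\lambda$ and the intercept is left unregularized, as in the manual solution below:

```python
# Sketch: closed-form ridge regression via sklearn, for comparison with the manual version.
from sklearn.linear_model import Ridge

ridge = Ridge(alpha=2E2, fit_intercept=True).fit(xsl, ysl)
print(np.sum(np.sign(ridge.predict(xst)) == np.sign(yst)), len(yst))
```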
###Code
lam = 2E2
D = np.diag(lam*np.ones([784+1]))
D[-1,-1] = 0
# the coefficient b is not regularized,
# because the hyperplane can sit anywhere; there is no reason
# to push it towards the origin
#D
xh = homog(xsl)
Wr = la.solve(xh.T @ xh + D, xh.T @ ysl)
np.sum(np.sign(homog(xsl) @ Wr) == np.sign(ysl)), len(ysl)
np.sum(np.sign(homog(xst) @ Wr) == np.sign(yst)), len(yst)
###Output
_____no_output_____
###Markdown
**Exercise**: create a curve comparing $E_L$ with $E_T$ for increasing values of $\lambda$.
###Code
Lam = [0.01, 0.1, 1, 5, 10, 50, 100, 200, 500, 1000, 2000, 3000, 5000]
def regu():
xh = homog(xsl)
L = []
T = []
for l in Lam:
lam = 2E2
D = np.diag(l*np.ones([784+1]))
D[-1,-1] = 0
Wr = la.solve(xh.T @ xh + D, xh.T @ ysl)
EL = np.sum(np.sign(homog(xsl) @ Wr) == np.sign(ysl)), len(ysl)
ET = np.sum(np.sign(homog(xst) @ Wr) == np.sign(yst)), len(yst)
L.append(EL[0]/EL[1])
T.append(ET[0]/ET[1])
return 1-np.array(L), 1-np.array(T)
plt.figure(figsize=(8,6))
l,t = regu()
plt.plot(100*l,'o-',label='training',color='red')
plt.plot(100*t,'o-',label='test',color='green')
plt.xticks(np.arange(12), Lam, rotation=45)
plt.legend()
plt.xlabel('$\lambda$'); plt.ylabel('error %')
plt.title('Regularization');
###Output
_____no_output_____
###Markdown
This plot illustrates the fundamental theoretical principle of *machine learning*: **generalization** is related to the **capacity** of the machine. *Adversarial examples* It is possible to synthesize apparently innocent instances that nevertheless confuse the classifier. Gaussian classifier
###Code
from sklearn import decomposition, discriminant_analysis
def acc(maq,x,y):
return 100*(y == maq.predict(x)).sum() / len(y)
transformer = decomposition.PCA(n_components=40).fit(xl)
xrl = transformer.transform(xl)
xrt = transformer.transform(xt)
###Output
_____no_output_____
###Markdown
A "naive Bayes" classifier has an error rate above 12%, while the full Gaussian one achieves less than 4%:
###Code
maq = discriminant_analysis.QuadraticDiscriminantAnalysis(store_covariance=True).fit(xrl,cl)
acc(maq,xrt,ct)
###Output
_____no_output_____
###Markdown
Adversarial examples
###Code
def mkg(transformer,maquina,cl,v):
d0 = transformer.transform([v])[0] - maquina.means_[cl]
d1 = np.linalg.inv(maquina.covariance_[cl]) @ d0
d2 = transformer.inverse_transform(d1)
return d2
cdesired = 5
k = 1234
v0 = xt[k]
v = v0
corig = ct[k]
shdig(v0); plt.title(corig);
redu = transformer.transform([v])
maq.predict_proba(redu)[0][[cdesired,corig]]
for _ in range(10):
g = mkg(transformer, maq, corig, v) - mkg(transformer, maq, cdesired, v)
v = np.clip(v + 0.01*g, 0, 1)
redu = transformer.transform([v])
cp = maq.predict(redu)[0]
if cp != corig: break
shdig(v)
plt.title(cp)
maq.predict_proba(redu)[0][[cdesired,corig]]
shdig(abs(v-v0))
print(np.sum(abs(v-v0)))
###Output
15.84989
###Markdown
Random inputs
###Code
v0 = np.random.rand(28,28).flatten()
shdig(v0)
v = v0
redu = transformer.transform([v])
plt.title(maq.predict(redu)[0]);
maq.predict_proba(redu)[0].max()
cdesired = 0
for _ in range(3):
g = - mkg(transformer, maq, cdesired, v)
v = np.clip(v + 0.01*g, 0, 1)
redu = transformer.transform([v])
cp = maq.predict(redu)[0]
shdig(v)
plt.title(cp)
maq.predict_proba(redu)[0][cdesired]
maq.predict_proba(redu)[0]
shdig(abs(v-v0))
print(np.sum(abs(v-v0)))
###Output
10.46400383834251
###Markdown
Other learning machines Naive Bayes
###Code
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
maq = gnb.fit(xl, cl)
acc(maq,xt,ct)
maq.predict(digits)
maq.sigma_ = maq.sigma_ * 0 + 1
acc(maq,xt,ct)
maq.predict(digits)
###Output
_____no_output_____
###Markdown
Support vector machine (SVM)
###Code
from sklearn import svm
classifier = svm.SVC(gamma=0.01, C=0.1)
#classifier = svm.SVC(gamma=0.001)
classifier.kernel
maq = classifier.fit(xl[:5000], cl[:5000])
maq.support_vectors_.shape
acc(maq,xt,ct)
maq.predict(digits)
#import pickle
#s = pickle.dumps(maq)
#from sklearn.externals import joblib
#joblib.dump(maq, 'svm.pkl')
#maq = joblib.load('svm.pkl')
###Output
_____no_output_____
###Markdown
Gradient Boosting
###Code
from sklearn import ensemble
clf = ensemble.GradientBoostingClassifier(subsample=0.1, n_estimators=50, max_features=50, min_samples_split=10)
clf.fit(xl, cl)
clf.score(xl,cl), clf.score(xt,ct)
###Output
_____no_output_____
###Markdown
Random Forest
###Code
clf = ensemble.RandomForestClassifier(n_estimators=100,n_jobs=-1)
clf.fit(xl, cl)
clf.score(xl,cl), clf.score(xt,ct)
###Output
_____no_output_____
###Markdown
CNN Deep convolutional network (see [deep learning](tensorflow.ipynb)).
###Code
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Dropout, Softmax, Flatten
model = Sequential()
model.add(Conv2D(input_shape=(28,28,1), filters=32, kernel_size=(5,5), strides=1,
padding='same', use_bias=True, activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Conv2D(filters=64, kernel_size=(5,5), strides=1,
padding='same', use_bias=True, activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(1024))
model.add(Dropout(rate=0.5))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
if False:
model.fit(xl.reshape(-1,28,28,1), yl, epochs=50, batch_size=500)
#model.save('digits.keras')
else:
#wget https://robot.inf.um.es/material/va/digits.keras
model.load_weights('../data/models/digits.keras')
model.evaluate(xt.reshape(-1,28,28,1),yt, batch_size=500)
plt.imshow(-np.hstack([x.reshape(28,28) for x in ok]),'gray'); plt.axis('off');
model.predict_classes(np.array(ok).reshape(-1,28,28,1))
###Output
_____no_output_____
###Markdown
Best machine learning applied to decades of newspaper articles Source: newsgac/ace/tasks.py explain_article_lime_task_impl() First load necessary Python libraries:
###Code
from newsgac import config
from newsgac import database
from newsgac.data_sources import DataSource
from newsgac.pipelines import Pipeline
###Output
WARNING:newsgac.config:Loading environment variables from ".env" file.
WARNING:newsgac.config:No secret key found, using default. THIS IS BAD IN PRODUCTION.
###Markdown
Data set. Next, load the test data set:
###Code
[d.display_title for d in DataSource.objects.all()]
data_source = DataSource.objects[1]
print(data_source.articles[0].label,data_source.articles[0].raw_text)
articles = [article.raw_text for article in data_source.articles]
labels = [article.label for article in data_source.articles]
print(labels[1],articles[1])
###Output
3 Stichting Noordzee Mijnbouw opgericht Met het doel te bevorderen , dat werkzaamheden welke mogelijke concessionarissen willen laten verrichten in verband met exploratie , winning en het transport van olie en gas , in het bijzonder met betrekking tot het continentale plat van de Noordzee , zoveel mogelijk door Nederlandse ondernemingen zullen worden verricht , is opgericht de Stichting Noordzee Mijnbouw . Het initiatief tot de oprichting van deze stichting werd genomen door Scheepsbouwbelangen NV , een groepering van een achttal vooraanstaande Nederlandse scheepswerven , de Algemene Bank Nederland , de Amsterdam-Rotterdam Bank cn de Nationale Investeringsbank ( Herstelbank ) . Voorzitter van het bestuur van de Stichting is prof. dr. J. Zijlstra , oudminister van economische zaken en van financiën , directeur is de heer R. Ph. Keegstra . Aan de Stichting werd inmiddels uitbreiding gegeven door de deelneming van groeperingen van geïnteresseerde Nederlandse ondernemingen . Ter verkrijg ' van wetenschappelijke en technische steun van Nederlandse instellingen van wetenschap en onderzoek is de Niverheidsorganisatie TNO in de nersoon van prof. ir. L. Troost , ook in " het bestuur toegetreden . Na de oprichting , welke 22 september jl. heeft plaatsgehad , heeft het stichtingsbestuur zich na enige tijd van voorbereiding gepresenteerd bij de minister van economische zaken .
###Markdown
Pipeline definition. Mail from Kim Smeenk, 18-09-2019 11:39: Attached you can find my explanation for which pipeline I have chosen. It is: **3N 2019 9 SVM TFIDF quotes removed**
###Code
[p.display_title for p in Pipeline.objects.all()]
SELECTEDPIPELINE=1
p = Pipeline.objects[SELECTEDPIPELINE]
p.display_title,p
skp = p.sk_pipeline.get()
predictions = skp.predict(articles)
predictions
correct = 0
for i in range(0,len(predictions)):
if predictions[i] == labels[i]: correct += 1
print(correct/len(labels))
###Output
0.8830601092896175
###Markdown
Process bulk data
###Code
import csv
import gzip
import os
import re
COLUMNSEP = "\t"
ARTICLECOLUMNID = 4
DATECOLUMNID = 3
IDCOLUMNID = 0
DATE = "date"
ID = "id"
LABEL = "label"
DATADIR = "/home/erikt/projects/newsgac/data-large"
def makeFileName(dataDir,newspaper,year):
return(dataDir+"/"+newspaper+"/"+newspaper+"-"+str(year)+".txt.gz")
def readLinesFromFile(fileName):
lines = []
inFile = gzip.open(makeFileName(DATADIR,NEWSPAPER,YEAR),"rb")
for line in inFile: lines.append(line.decode("utf-8"))
inFile.close()
return(lines)
def getArticlesFromLines(lines):
return([line.split(COLUMNSEP)[ARTICLECOLUMNID] for line in lines])
def getDatesFromLines(lines):
return([re.sub("DATE=","",line.split(COLUMNSEP)[DATECOLUMNID]) for line in lines])
def getIdsFromLines(lines):
return([line.split(COLUMNSEP)[IDCOLUMNID] for line in lines])
def saveLabels(fileName,labels,dates,ids,genreNames):
with open(fileName+'.out.csv', 'w') as csvfile:
csvwriter = csv.DictWriter(csvfile, fieldnames=[ID,DATE,LABEL])
csvwriter.writeheader()
for i in range(0,len(labels)):
csvwriter.writerow({ID:ids[i],DATE:dates[i],LABEL:genreNames[labels[i]]})
csvfile.close()
NEWSPAPER = "volkskrant"
YEARSTART = 1955
YEAREND = 1956
os.chdir(DATADIR+"/"+NEWSPAPER)
genreNames = DataSource.objects[0].labels
for year in range(YEARSTART,YEAREND+1):
fileName = makeFileName(DATADIR,NEWSPAPER,year)
lines = readLinesFromFile(fileName)
articles = getArticlesFromLines(lines)
dates = getDatesFromLines(lines)
ids = getIdsFromLines(lines)
labels = skp.predict(articles)
saveLabels(fileName,labels,dates,ids,genreNames)
###Output
_____no_output_____
###Markdown
We assume that test text processing is performed automatically by the pipeline
###Code
p.data_source.display_title,p,skp
###Output
_____no_output_____ |
nb/2018_spring/Lecture8.ipynb | ###Markdown
CME 193 Introduction to Scientific Python Spring 2018 Lecture 8------------- Recursion, Exceptions, Unit Tests, Neural Networks --------- Lecture 8 Contents* Admin* Recursion* Exceptions* Unit Testing* Deep Learning* Quick intro to neural nets* Deep Learning Packages* Tensorflow Basics* Keras Basics* More packages Administration* Thank you for your project proposals! Really cool and interesting ideas.* Complete either HW2 or Project by **5/15**.* Exercises also due **5/15**. Project tips and general feedback- If your project involves a dataset, make sure you tackle this step early- HW2 is the benchmark for required deliverables.- If you need to pivot along the way, that is fine, if it's substantial let us know- Have fun and research best practices along the way. You are *not* being graded on how well your model works. Office HoursI will continue to hold office hours over the next two weeks:- 2:00-3:15 Mon/Wed in Huang or class time, you decide! --------- Recursion Recursive function solve problems by reducing them to smaller problems of the same form.This allows recursive functions to call themselves. - New paradigm - Powerful tool - Divide-and-conquer - Beautiful solutions First exampleLet’s consider a trivial problem:Suppose we want to add two positive numbers ```a``` and ```b```, but we can only add/subtract 1 to any number.How would you write a function to do this without recursion? What control statement(s) would you use?
###Code
# Non-recursive solution
def add(a, b):
while b > 0:
a += 1
b -= 1
return a
add(7, 8)
###Output
_____no_output_____
###Markdown
Recursive solution- Simple case: - If ```add(a,b)``` is called with ```b = 0``` just return ```a```- Otherwise, we can return ```1 + add(a, b-1)```
###Code
# Recursive solution
# Adding b to a, (if only able to use +1)
def add(a, b):
if b == 0:
# base case
return a
# recursive step
return add(a, b-1) + 1
###Output
_____no_output_____
###Markdown
Base case and recursive steps Recursive functions consist of two parts:**Base case**: The base case is the trivial case that can be dealt with easily.**Recursive step**: The recursive step brings us slightly closer (breaks the problem into smaller subproblems) to the base case and calls the function itself again. Reversing a list How can we recursively reverse a list? ```([1, 2, 3] → [3, 2, 1])``` - If the list is empty or has one element, the reverse is itself - Otherwise, reverse elements 2 to n, and append the first
###Code
def reverse_list(xs):
if len(xs) <= 1:
return xs
else:
# shift first element to last
return reverse_list(xs[1:]) + [xs[0]]
reverse_list([1,2,3])
###Output
_____no_output_____
###Markdown
Palindromes- A palindrome is a word that reads the same both ways, such as radar or level.- Let's write a function that checks whether a given word is a palindrome. The recursive idea Given a word, such as level, we check: - whether the first and last characters are the same - whether the string with the first and last characters removed is itself a palindrome Base case What's the base case here? - The empty string is a palindrome - Any 1-letter string is a palindrome
###Code
def is_palin(s):
'''returns True iff s is a palindrome'''
if len(s) <= 1:
return True
return s[0] == s[-1] and is_palin(s[1:-1])
print(is_palin('cme193'))
print(is_palin('racecar'))
###Output
False
True
###Markdown
Another example Write a recursive function that computes $a^b$ for given a and b, where b is an integer. (Do not use ∗∗) Another example Base case: $b=0$, $a^b = 1$. Recursive step: (be careful) there are actually two options, one for $b > 0$ and one for $b < 0$.
###Code
def power(a,b):
if b == 0:
return 1
elif b > 0:
return a*power(a,b-1)
else:
return (1./a)*power(a,b+1)
power(2,10)
power(2, -10) == 1.0/1024
###Output
_____no_output_____
###Markdown
Example: Fibonacci```pythonfib(0) = 0fib(1) = 1fib(n) = fib(n-1) + fib(n-2) for n >= 2```
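The next cell implements this recurrence directly. As an aside (not in the original lecture), a memoized sketch avoids recomputing the same subproblems, the blow-up mentioned under Pitfalls below:

```python
# Sketch: same recursion, but cached results keep the number of calls linear in n.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n-1) + fib_memo(n-2)

fib_memo(34)   # fast, unlike the plain recursive version for large n
```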
###Code
def fib(n):
if n <= 1:
return n
f = fib(n-1) + fib(n-2)
return f
fib(34)
###Output
_____no_output_____
###Markdown
PitfallsRecursion can be very powerful, but there are some pitfalls: - Have to ensure you always reach the base case.- Each successive call of the algorithm must be solving a simpler problem- The number of function calls shouldn’t explode. (see exercises)- An iterative algorithm is always faster due to overhead of function calls. (However, the iterative solution might be much more complex) --------- Exceptions Exceptions ExampleConsider a function that takes a filename, and returns the 20 most common words. (This is similar to one of the exercises you could have done.) Suppose we have written a function:```pythontopkwords(filename, k)```Instead of entering ```filename``` and value of ```k``` in the script, we may also want to run it from the terminal. Parse input from command lineThe sys module allows us to read the terminal command that started the script:``` pythonimport sysprint(sys.argv)``` ```sys.argv``````sys.argv``` holds a list with command line arguments passed to a Python script.Note that ```sys.argv[0]``` will be the name of the python script itself. ```pythonimport sysdef topkwords(filename, k): Returns k most common words in filename passif __name__ == "__main__": filename = sys.argv[1] k = int(sys.argv[2]) print(topkwords(filename, k))``` Issues- What if the file does not exist?- What if the second argument is not an integer? - What if no command line arguments are supplied?- All result in errors: - ```IOError``` - ```ValueError``` - ```IndexError``` Exception handlingWhat do we want to happen when these errors occur? Should the program simply crash?No, we want it to gracefully handle these- ```IOError```: Tell the user the file does not exist.- ```ValueError```, ```IndexError```: Tell the user what the format of the command line arguments should be. Try ... Except- The try clause is executed- If no exception occurs, the except clause is skipped- If an exception occurs, the rest of the try clause is skipped. Then if the exception type is matched, the except clause is executed. Then the code continues after the try statement- If an exception occurs with no match in the except clause, execution is stopped and we get the standard error ```pythonimport sysif __name__ == "__main__": try: filename = sys.argv[1] k = int(sys.argv[2]) print topkwords(filename, k) except IOError: print("File does not exist") except (ValueError, IndexError): print("Error in command line input") print("Run as: python wc.py ") print("where is an integer")``` A naked except A naked exceptWe can have a naked except that catches any error:```pythontry: t = 3.0 / 0.0except: handles any error print('There was some error')```Use this with extreme caution though, as genuine bugs might be impossible to correct! Try - Except - Else- Else clause is executed only if there is no exception from the ``` try ``` block.Why? - Avoids catching exception that was not protected E.g. consider f.readlines raising an IOError - simplifies code readibility ```python from Python docsfor arg in sys.argv[1:]: try: f = open(arg, 'r') except IOError: print('cannot open', arg) else: print(arg, 'has', len(f.readlines()), 'lines' f.close())``` RaiseWe can use Raise to raise an exception ourselves.```>>> raise NameError(’Oops’)Traceback (most recent call last): File "", line 1, in ?NameError: Oops``` ```finally```The finally statement is always executed before leaving the try statement, whether or not an exception has occured.Useful in case we have to close files, closing network connections etc.
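A small runnable sketch of the command-line pattern above; the argument lists passed in are hypothetical stand-ins for `sys.argv[1:]`:

```python
# Sketch: catching the specific failures discussed above, with made-up argument lists.
def parse_args(args):
    try:
        filename = args[0]
        k = int(args[1])
    except IndexError:
        print("Run as: python wc.py <filename> <k>")
    except ValueError:
        print("<k> must be an integer")
    else:
        print("would count the", k, "most common words in", filename)

parse_args(["words.txt", "20"])   # ok
parse_args(["words.txt", "x"])    # ValueError branch
parse_args([])                    # IndexError branch
```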
###Code
def div(x, y):
res = None
try:
res = x/y
except Exception as e:
print(e)
else:
print("we are error free")
finally:
print("Finally clause")
return res
print(div(3,2))
print('-'*50)
print(div(3,0))
###Output
we are error free
Finally clause
1.5
--------------------------------------------------
division by zero
Finally clause
None
###Markdown
Raising our own excecptionsRecall the Rational class we considered a few lectures ago.What if the denominator passed in to the constructor is zero? Raising our own excecptions```pythonclass Rational: def __init__(self, p, q=1): g = gcd(p, q) self.p = p / g self.q = q / g```What if ```q == 0```? Making the necessary change```pythonclass Rational: def __init__(self, p, q=1): if q == 0: raise ZeroDivisionError('denominator is zero') g = gcd(p, q) self.p = p / g self.q = q / g``` --------- Unit tests  Unit tests: Test individual pieces of code.For example, for factorial function, test```0!= 1``` or ```3! = 6``` etc.  Test driven developmentSome write tests before code. Reasons:- Focus on the requirements- Don’t write too much- Safely restructure/optimize code- When collaborating: don’t break other’s code - Faster Test casesHow to construct test cases?A test case should answer a single question about the code.A test case should:- Run by itself, no human input required- Determine on its own whether the test has passed or failed - Be separate from other tests What to test?- Known values- Sanity check (for conversion functions for example)- Bad input - Input is too large? - Negative input? - String input when expected an integer?- etc: very dependent on problem ```unittest```A testcase is created by subclassing ```unittest.TestCase```Individual tests are defined with methods whose names start with the letters test. (Allows the test runner to identify the tests)Each test usually calls an assert method to run the test - many assert options.A few different ways to run tests (see documentation). Easiest way is to run ```unittest.main()``` for example if the test script is the main program. ```assert```We can use a number of methods to check for failures:- assertEqual- assertNotEqual- assertTrue, assertFalse- assertIn- assertRaises - assertAlmostEqual - assertGreater, assertLessEqual- etc. 
(see Docs) ```pythonimport unittestfrom my_script import is_palindromeclass KnownInput(unittest.TestCase): knownValues = (('lego', False), ('radar', True)) def testKnownValues(self): for word, palin in self.knownValues: result = is_palindrome(word) self.assertEqual(result, palin)``` ```unittest```Note, to use the ```unittest``` package inside a Jupyter notebook instead of ```unittest.main()```, use:``` unittest.main(argv=['ignored', '-v'], exit=False)``` Alternatives- ```nose2```- ```Pytest```http://nose2.readthedocs.io/en/latest/differences.html Pytest```pip install pytest```- Easy testing- Automatically discovers tests- No need to remember all assert functions, keyword assert works for everything- Informative failure results PytestTest discovery: (basics)- Scans files starting with test_ or ending with _test.py- Run functions starting with test_ Example : primesCreate two files in a directory: ```primes.py``` – Implementation ```test_primes.py``` – Tests ```python primes.py (simplest solution that passes tests)def is_prime(x): for i in range(2, x): if x % i == 0: return False return True``` ```python test_primes.pyfrom primes import is_prime def test_is_three_prime(): assert is_prime(3)def test_is_four_prime(): assert not is_prime(4)``` Using ```pytest``` to execute test suite By default, it will run all files prefixed with test.Here we pass in the name of our test script:```pytest test_primes.py``` ```pythonfrom primes import is_prime def test_is_zero_prime(): assert not is_prime(0) def test_is_one_prime(): assert not is_prime(1) def test_is_two_prime(): assert is_prime(2)def test_is_three_prime(): assert is_prime(3)def test_is_four_prime(): assert not is_prime(4)``` Some more tests- Negative numbers - Non integers- Large prime- List of known primes - List of non-primes When all tests pass...- First make sure all tests pass- Then optimize code, making sure nothing breaksNow you can be confident that whatever algorithm you use, it still works as desired! Writing good tests- Utilize automation and code reuse- Know the type and scope - your module or somebody else’s?- A single test should focus on a single thing- Functional tests must be deterministic- Leave no trace - safe setup and clean up Let's take a look at some examples... _DEEP LEARNING_ - Who can tell me a difference between classical machine learning and deep learning? What is Machine Learning?- Deep learning is a subfield of machine learning- Most machine learning methods work well because of human-designed representations and input features- Classical Machine learning is pretty much optimization, i.e. find the best set of weights to optimize predictions for a given loss function- So what's deep learning? - Well, what does wikipedia say? > "Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms" So what does that mean?- Lets give an example - Say I wanted to determine whether an image is of a cat vs. a dog. - In classical machine learning, we would have to define features for the input data: - the weight of the animal - does it have whiskers - does it have ears and are they pointed - does it have ears and are they not pointed etc.- In short, we have to define a set of facial features and let the optimization identify which features are more important. What do we do in deep learning? 
* Neural Networks automatically learns which features are important for classification by applying a series of nonlinear processing units* When people say "Deep Learning" they mean using Deep Neural Network* Main differences: 1. ** Feature engineering **: Need to create custom feature space in classical machine learning. Not necessary in deep learning 2. ** Data dimensions **: When the data is small, Deep Learning algorithms don’t perform that well. Typically, neural network algorithms need a large amount of data to learn patterns. With traditional machine learning, features are handcrafted, and thus tend to exhibit superior performance when data is sparse. 3. ** Approach ** Neural Network Training has an end-to-end approach. In classical machine learning, typically you break the problem down into different parts, solve them individually, and combine them to get the result. In deep learning you train the model end to end (black box) 4. ** Training time **: Deep Neural Networks takes alot longer to train. 5. ** Hardward dependencies **: Deep learning usually takes advantage of GPUs to speed up training. 6. ** Interpretability **: Hard to interpret what's going on inside of a deep neural network. Difficult to interpret why a prediction was made. Millions of parameters. Why use deep learning- Manually designed features are often over-specified- Learned features are adaptable - Deep learning is very flexible - Deep learning can handle supervised and unsupervised tasks- Deep learning is has achieved ** superior ** performance in many tasks problems since 2010 (computer vision, NLP, classification tasks, game-playing) Why has it gotten better ?- Explosion in the amount of data- Way faster machines with the advent of more powerful CPUs and GPUs- New models, algorithms,ideas Neural Network Basics - Perceptron - Fully Connected Forward Neural Networks- Intuition: - Neural networks are a model of our brain. Each node, neuron, applies an operation on its inputs and passes its outputs to the next layer. These neurons can be connected into networks to fit more complicated patterns. Perceptron - Most basic artificial neuron. Developed in 50s and 60s by Frank Rosenblatt- So what is a perceptron. Well its simply a dot product operator + a threshold funtion - The inputs $x_1,x_2, x_3$ are multiplied by weights $w_1,w_2, w_3$ to determine their relative importance:> > $output = \begin{cases} 0 \mbox{ if } \sum_{j} w_j x_j \leq \mbox{thresh} \\ 1 \mbox{ if } \sum_{j} w_j x_j > \mbox{thresh} \end{cases} $- These weights are learned in training Activation Neuron- More generally, a neural network takes a vector $x$ and makes predictions by a composition of linear transformations and non-linear activation layers.- Each node computes:> $output = f(Wx + b)$- and passes the output to the next layer of the network- $f$ is some non-linear activation function, W is a matrix of weights, and b is a vector of biases. Sigmoid Neuron- A sigmoid neuron simply uses the sigmoid function as its activation function:> $output = \sigma(Wx + b)$ - Where $ \sigma(z) = \frac{1}{1 + e^{-z}}$ Relu activation functions- Relu: $f(x) = max(x, 0)$ >  Tanh activation functions- Tanh: $f(x) = \tanh(x)$ >  Fully connected neural networks- Stack layers of neurons together, using the outputs of the previous layer as inputs to the following layer. 
When there are many layers, the network is ** _deep_** Forward Propation:- given $h^0 = x$, we have > $h^{i +1}= f(W^{i} h^{i} + b^{i})$- where $h^i$ are the output of the $i$'th hidden layer> $\hat{y} = f(W^{n-1} h^{n-1} + b^{n-1})$ Backpropagation Basics- Given a training set $\{(x^{(1)}, y^{(1)}), ... , (x^{(m)}, y^{(m)})\}$ of m training examples. - We need to define a loss $L = J(W, b; x, y) = \sum_i (\hat{y}_i - y_i)^2$ < mean squared loss- Backpropagation is just gradient descent on the weights> $W_{t+1} = W_t - \eta \frac{\partial L}{\partial W}\big|_{W_t}$ Backpropagation Continued- Intuition: flow the gradients with respect to the loss backward through the network to update the weights- More references on backprop: - https://ayearofai.com/rohan-lenny-1-neural-networks-the-backpropagation-algorithm-explained-abf4609d4f9d - http://cs231n.github.io/optimization-2/ - http://web.stanford.edu/class/cs224n/lecture_notes/cs224n-2017-gradient-notes.pdf - It's beautifully local, every node in the network can right away compute two things: - Its output value - its local gradient of the inputs with respect to its output value - Backpropagation is just repeated chain rule through the network Deep Learning Libraries Tensorflow- ``` pip install tensorflow ```- Open source, backend software developed by Google Brain before being released under the Apache 2.0 open source license- Advantages: - Good Community - Very fledible. You just define your computation as a data flow graph. Can define it however you like. - Portable: can run on GPUs - Creates a Static Computational Graph for fast backpropagation - Negatives: - It's big and complicated - Lots going on, easy for beginners to feel overwhelmed - It creates a static computational graph, so it is at times unflexible Tensorflow Basics- Two steps: - Building the computational graph. - Running the computational graph.- computational graph is just a series of tensorflow operations.
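Before the TensorFlow cells below, here is a tiny NumPy sketch of the forward-propagation equations above; the layer sizes are made up for illustration and this is not part of the original lecture code:

```python
# Sketch: two-layer forward pass h1 = f(W0 x + b0), y_hat = f(W1 h1 + b1) with ReLU as f.
import numpy as np

def relu(z):
    return np.maximum(z, 0)

x = np.random.randn(4)                       # one 4-dimensional input
W0, b0 = np.random.randn(8, 4), np.zeros(8)  # hidden layer weights
W1, b1 = np.random.randn(3, 8), np.zeros(3)  # output layer weights

h1 = relu(W0 @ x + b0)
y_hat = relu(W1 @ h1 + b1)
print(y_hat)
```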
###Code
import tensorflow as tf
node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0)
print(node1, node2) # Doesn't actually run the graph just creates it
sess = tf.Session()
node3 = tf.add(node1, node2)
print(sess.run([node1, node2, node3]))
###Output
[3.0, 4.0, 7.0]
###Markdown
Placeholders- Well that wasn't very interesting, this only produces a constant result.- Don't we need some way to specify inputs? - Yes! We can parameterize the graph to accept inputs using placeholders (external inputs)- To declare a placeholder, use ```tf.placeholder(dtype, shape, name)```
###Code
x = tf.placeholder(tf.float32)
x2 = tf.placeholder(tf.float32)
sub_nodes = x - x2
print(sess.run(sub_nodes, {x: 5, x2: 2}))
print(sess.run(sub_nodes, {x: [5, 2,3], x2: [2,1,1]}))
###Output
3.0
[3. 1. 2.]
###Markdown
Variables - The whole point of deep learning was to learn the weights, so how do we make those?- We need to tell Tensorflow that these variables correspond to the weights we need to learn - To do this use ``` tf.Variable(initial_val , dtype) ```- constants as we have seen above are initialized on creation- variables have to be initialized at run time by running ``` tf.global_variables_initializer() ```
###Code
W = tf.Variable([.5], dtype = tf.float32) # define our variables
b = tf.Variable([-.5], dtype = tf.float32)
x = tf.placeholder(tf.float32) # define our inputs
linear_predictor = W*x + b # define our model
init = tf.global_variables_initializer() #initialize variables
sess.run(init) ## initialize variables '
feed_dict = {x: [1,2,3,4,5,6]} # specify inputs
print(sess.run(linear_predictor, feed_dict))
###Output
[0. 0.5 1. 1.5 2. 2.5]
###Markdown
Define a loss and train - So we've got a simple model $y = Wx + b$, and now we want to learn the correct weights- To do this we need to define a loss function and an optimizer - Tensorflow provides a large number of [loss functions](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn/losses)- They also provide a large number of [optimizers](https://www.tensorflow.org/api_guides/python/train)- To train a model you specify the computational graph, define the loss function, set a optimizer, initialize your variables, and run
###Code
import tensorflow as tf
## CREATE YOUR MODEL
W = tf.Variable([.5], dtype = tf.float32) # define our variables
b = tf.Variable([-.5], dtype = tf.float32)
# define our inputs
x = tf.placeholder(tf.float32)
# define our model
linear_predictor = W*x + b
# define placeholder for y variables
y = tf.placeholder(tf.float32)
# CREATE YOUR LOSS
loss = tf.reduce_sum(tf.square(linear_predictor - y))
# SPECIFY YOUR OPTIMIZER
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
import tensorflow as tf
## CREATE YOUR MODEL
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# define our inputs
x = tf.placeholder(tf.float32)
# define our model
linear_model = W * x + b
# define placeholder for y variables
y = tf.placeholder(tf.float32)
# CREATE YOUR LOSS
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# SPECIFY YOUR OPTIMIZER
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
import time
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
time.sleep(.001)
_, loss_val = sess.run([train, loss], {x: x_train, y: y_train})
print("Loss is: {}".format(loss_val), end = '\r')
# evaluate training accuracy
print()
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
###Output
Loss is: 7.687006586820644e-110
W: [-0.9999964] b: [0.9999894] loss: 7.598544e-11
###Markdown
Tensorflow is very powerful- Please read the [documentation](https://www.tensorflow.org/get_started/) for more examples Keras- ``` pip install keras ```- Advantages: - Way easier to use - It is more of a front-end library, unlike Tensorflow which is a back-end library. - Capable of running on top of other Machine and Deep Learning libraries like Tensorflow, CNTK or Theano.- Disadvantages: - Relatively opaque implementation - Harder to create your own new networks - Less control- Let's make a simple binary classifier using one hidden layer
###Code
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
# define our training/testing set
x_train = np.random.random((1000, 10))
y_train = np.random.randint(2, size= (1000,1))
x_test = np.random.random((200, 10))
y_test = np.random.randint(2, size=(200,1))
model = Sequential()
# Add a dense fully conected feed forward layer with 32 hidden units
model.add(Dense(32, input_dim = 10, activation = 'relu'))
# hidden layer of size 32
model.add(Dense(32, activation = 'relu'))
# output unit
model.add(Dense(1, activation = 'sigmoid'))
# compile the model specifying the loss
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
# fit the model
model.fit(x_train, y_train, epochs=20,batch_size=128)
score = model.evaluate(x_test, y_test, batch_size=128)
###Output
Epoch 1/20
1000/1000 [==============================] - 0s - loss: 0.6959 - acc: 0.4840
Epoch 2/20
1000/1000 [==============================] - 0s - loss: 0.6905 - acc: 0.5190
Epoch 3/20
1000/1000 [==============================] - 0s - loss: 0.6886 - acc: 0.5380
Epoch 4/20
1000/1000 [==============================] - 0s - loss: 0.6877 - acc: 0.5460
Epoch 5/20
1000/1000 [==============================] - 0s - loss: 0.6868 - acc: 0.5460
Epoch 6/20
1000/1000 [==============================] - 0s - loss: 0.6860 - acc: 0.5430
Epoch 7/20
1000/1000 [==============================] - 0s - loss: 0.6854 - acc: 0.5450
Epoch 8/20
1000/1000 [==============================] - 0s - loss: 0.6849 - acc: 0.5410
Epoch 9/20
1000/1000 [==============================] - 0s - loss: 0.6839 - acc: 0.5510
Epoch 10/20
1000/1000 [==============================] - 0s - loss: 0.6838 - acc: 0.5470
Epoch 11/20
1000/1000 [==============================] - 0s - loss: 0.6830 - acc: 0.5490
Epoch 12/20
1000/1000 [==============================] - 0s - loss: 0.6824 - acc: 0.5530
Epoch 13/20
1000/1000 [==============================] - 0s - loss: 0.6821 - acc: 0.5630
Epoch 14/20
1000/1000 [==============================] - 0s - loss: 0.6814 - acc: 0.5510
Epoch 15/20
1000/1000 [==============================] - 0s - loss: 0.6809 - acc: 0.5710
Epoch 16/20
1000/1000 [==============================] - 0s - loss: 0.6807 - acc: 0.5790
Epoch 17/20
1000/1000 [==============================] - 0s - loss: 0.6800 - acc: 0.5820
Epoch 18/20
1000/1000 [==============================] - 0s - loss: 0.6795 - acc: 0.5790
Epoch 19/20
1000/1000 [==============================] - 0s - loss: 0.6790 - acc: 0.5750
Epoch 20/20
1000/1000 [==============================] - 0s - loss: 0.6788 - acc: 0.5790
128/200 [==================>...........] - ETA: 0s
###Markdown
Summary Keras :- is a fast, flexible protyping tool- Can be used on top of tensorflow- Not apt for large scale research- Good to test out potential ideas on a dataset Tensorflow:- is the standard in deep learning research- very flexible, gpu support, automatic differentiation- statically defined: must declare a computational graph and run it- Nice tensorboard visualization module- Most Deep Learning classes at Stanford use Tensorflow PyTorch- Developed in part by Facebook, Stanford, Nvidia...- Similarly flexible to Tensorflow - Dynamic computational graph (each graph is computed on the fly)- This leads to a more pythonic API- I love it Tensors in Pytorch- Conceptually identical to numpy array- Generic tool for scientific computing, no knowledge of deep learning or computational graphs. - They can utilize GPUs to speed up their computation. Variables in Pytorch- autograd package allows for automatic differentiation of variables. - The forward pass through the network defines the computational graph (nodes are tensors, edges are functions)- PyTorch autograd looks a lot like TensorFlow: - in both frameworks we define a computational graph, and use automatic differentiation to compute gradients. - difference between the two is that TensorFlow's computational graphs are static and PyTorch uses dynamic computational graphs. - In pytorch each forward pass defines a new computational graph. - To create a Variable, wrap Tensors in ```Variable``` objects. This variable then represents a node in the computational graph.
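A minimal autograd sketch using the same (older) `Variable` API as this lecture, before the larger example below:

```python
# Sketch: d(x^2 + 3x)/dx evaluated at x = 2 should be 7.
import torch
from torch.autograd import Variable

x = Variable(torch.ones(1) * 2, requires_grad=True)
y = x**2 + 3*x
y.backward()
print(x.grad)   # expected: a tensor containing 7
```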
###Code
import torch
from torch.autograd import Variable
dtype = torch.FloatTensor
N, D_in, H, D_out = 64, 500, 100, 10
# Setting requires_grad=False indicates that we do not need to compute gradients w.r.t var
# during the backward pass.
x = Variable(torch.randn(N, D_in).type(dtype), requires_grad = False)
y = Variable(torch.randn(N, D_out).type(dtype), requires_grad = False)
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Variables during the backward pass.
w1 = Variable(torch.randn(D_in, H).type(dtype), requires_grad=True)
w2 = Variable(torch.randn(H, D_out).type(dtype), requires_grad=True)
learning_rate = 1e-6
for t in range(10000):
# Forward pass: compute predicted y using operations on Variables;
y_pred = x.mm(w1).clamp(min=0).mm(w2)
# Compute and print loss using operations on Variables.
# Now loss is a Variable of shape (1,) and loss.data is a Tensor of shape
loss = (y_pred - y).pow(2).sum()
# Use autograd to compute the backward pass. This call will compute the
# gradient of loss with respect to all Variables with requires_grad=True.
loss.backward()
# Update weights using gradient descent; w1.data and w2.data are Tensors,
# w1.grad and w2.grad are Variables and w1.grad.data and w2.grad.data are
# Tensors.
w1.data -= learning_rate * w1.grad.data
w2.data -= learning_rate * w2.grad.data
# Manually zero the gradients after running the backward pass
w1.grad.data.zero_()
w2.grad.data.zero_()
print("Loss is: {}".format(loss.data.numpy()), end = '\r')
print()
print("Final loss is {}".format(loss.data[0]))
###Output
Loss is: [4.962239e-07]]]
Final loss is 4.962238904226979e-07
###Markdown
That's still fairly cumbersome- When building neural networks, arrange the computation into layers, some of which have learnable parameters which will be optimized during learning.- Use the ``` torch.nn ``` package to define your layers- Create custom networks by subclassing the nn.Module- Really clean code!- Just create a class subclassing the nn.Module - specify layers in the ```__init__``` - define a forward pass by ```forward(self,x)``` method
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable
class TwoLayerNet(nn.Module):
def __init__(self, D_in, H, D_out):
super(TwoLayerNet, self).__init__()
self.layer1 = nn.Linear(D_in, H)
self.layer2 = nn.Linear(H, D_out)
def forward(self, x):
out = F.relu(self.layer1(x))
out = self.layer2(out)
return out
# N is batch size; D_in is input dimension; H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs, and wrap them in Variables
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)
# Construct our model by instantiating the class defined above
model = TwoLayerNet(D_in, H, D_out)
# Construct our loss function and an Optimizer.
criterion = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for t in range(1000):
# Forward pass: Compute predicted y by passing x to the model
y_pred = model(x)
# Compute and print loss
loss = criterion(y_pred, y)
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Final Loss is {}".format(loss.data[0]))
###Output
Final Loss is 5.559909199703839e-10
|
_feature_engineering/.ipynb_checkpoints/Feature_Engineering_XGB_RandomForrest-checkpoint.ipynb | ###Markdown
Feature Selection & Importance For Forecasting 1 to 30 Days Out
###Code
import sys, time, datetime
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pylab as pl
import seaborn as sns
from tqdm import tqdm
from time import sleep
from sklearn import metrics, linear_model
from xgboost import XGBRegressor, plot_importance, plot_tree
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split, TimeSeriesSplit
from sklearn.metrics import mean_squared_error
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import Ridge, LinearRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn import preprocessing
from sklearn.pipeline import make_pipeline
from statsmodels.tsa.stattools import grangercausalitytests, adfuller
import ppscore as pps
import warnings
warnings.filterwarnings('ignore')
#..........................................................................
# Main Inputs
#..........................................................................
DEBUG = False
target_column = 'LUACTRUU_Index_OAS'
# Forecast timeperiod
max_forecast = 30
days_ahead = list(range(1,max_forecast+1))
# Input file date ranges
date_start = datetime.date(2012, 8, 1)
date_end = datetime.date(2020, 7, 30)
# Default Models
default_CART = DecisionTreeRegressor(random_state=1)
default_XGB = XGBRegressor(n_estimators=1000,random_state=1)
scaler = MinMaxScaler(feature_range=(0,1))
# Read File
file_buffer = Path(__file__).parent / "../data/Economic_Data_2020_08_01.xlsx"
#..........................................................................
# Output Methods
#..........................................................................
def predictive_power(dh=None, y_value=None):
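    # Plot the PPS (predictive power score) of each feature against the target,
    # one bar chart per forecast horizon in data_dict; pass dh to restrict to a single horizon.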
for target, feats in data_dict.items():
if dh == None:
if y_value == None:
y_value = "{0}_{1}D_Forecast".format(target_column, target)
predictors_df = pps.predictors(feats, y=target_column)
predictors_df = predictors_df[predictors_df['ppscore'] > 0.5]
f, ax = plt.subplots(figsize=(16, 5))
ax.set_title("Predicative Power for: {0}".format(y_value))
sns.barplot(data=predictors_df, y="x", x="ppscore",palette="rocket")
else:
if target == dh:
if y_value == None:
y_value = "{0}_{1}D_Forecast".format(target_column, dh)
predictors_df = pps.predictors(feats, y=target_column)
predictors_df = predictors_df[predictors_df['ppscore'] > 0.5]
f, ax = plt.subplots(figsize=(16, 5))
ax.set_title("Predicative Power for: {0}".format(y_value))
sns.barplot(data=predictors_df, y="x", x="ppscore",palette="rocket")
def feature_importance_CART(dh=None):
for target, feats in feature_CART.items():
width = 1
keys = feats.keys()
values = feats.values()
if dh == None:
f, ax = plt.subplots(figsize=(16, 5))
ax.set_title("Feature Importance for {0} Day Forecast: {1}".format(target, target_column))
sns.barplot(y=list(keys), x=list(values), palette="rocket")
else:
if target == dh:
f, ax = plt.subplots(figsize=(16, 5))
ax.set_title("Feature Importance for {0} Day Forecast: {1}".format(target, target_column))
sns.barplot(y=list(keys), x=list(values), palette="rocket")
def feature_importance_XGBOOST(dh=None):
for target, feats in feature_XGBOOST.items():
width = 1
keys = feats.keys()
values = feats.values()
if dh == None:
f, ax = plt.subplots(figsize=(16, 5))
ax.set_title("Feature Importance for {0} Day Forecast: {1}".format(target, target_column))
sns.barplot(y=list(keys), x=list(values), palette="rocket")
else:
if target == dh:
f, ax = plt.subplots(figsize=(16, 5))
ax.set_title("Feature Importance for {0} Day Forecast: {1}".format(target, target_column))
sns.barplot(y=list(keys), x=list(values), palette="rocket")
def feature_imp_over_time_CART():
df = pd.DataFrame (features_over_time_CART, columns = features_over_time_CART.keys())
column_names = list(df.columns)
df["day"] = days_ahead
remove_list = []
for feat in column_names:
usefulness = df[feat].max()
if usefulness < 0.2:
if(DEBUG): print("feat: {0}, useful-max: {1}".format(feat, usefulness))
df = df.drop([feat], axis=1)
if(DEBUG): print("...removing {0}".format(feat))
remove_list.append(feat)
for x in remove_list:
column_names.remove(x)
sns.set_palette(sns.color_palette("rocket"))
f, ax = plt.subplots(figsize=(14, 6))
for feat in column_names:
sns.lineplot(data=df,
x='day',
y=df[feat],
dashes=False).set_title('{0} Feature Importance By Time'.format(target_column))
sns.set_style("whitegrid")
ax.grid(True)
ax.set(xlabel='Days Out', ylabel='Predictive Importance')
ax.set(xticks=days_ahead)
ax.legend(column_names)
def feature_imp_over_time_XGB():
df = pd.DataFrame (features_over_time_XGB, columns = features_over_time_XGB.keys())
column_names = list(df.columns)
df["day"] = days_ahead
remove_list = []
for feat in column_names:
usefulness = df[feat].max()
if usefulness < 0.2:
if(DEBUG): print("feat: {0}, useful-max: {1}".format(feat, usefulness))
df = df.drop([feat], axis=1)
if(DEBUG): print("...removing {0}".format(feat))
remove_list.append(feat)
for x in remove_list:
column_names.remove(x)
sns.set_palette(sns.color_palette("rocket"))
f, ax = plt.subplots(figsize=(14, 6))
for feat in column_names:
sns.lineplot(data=df,
x='day',
y=df[feat],
dashes=False).set_title('{0} Feature Importance By Time'.format(target_column))
sns.set_style("whitegrid")
ax.grid(True)
ax.set(xlabel='Days Out', ylabel='Predictive Importance')
ax.set(xticks=days_ahead)
ax.legend(column_names)
# Clean-up
data_dict = {}
model_dict = {}
feature_CART = {}
feature_XGBOOST = {}
features_over_time_CART = None
features_over_time_XGB = None
# Set time period for analysis
session_state = pd.read_excel(file_buffer)
session_state['Dates'] = pd.to_datetime(session_state['Dates']).dt.date
session_state= session_state[(session_state['Dates'] >= date_start) &
(session_state['Dates'] <= date_end)]
csv_data = session_state.copy()
session_state = None
print("Ready!")
#..........................................................................
# Pre-Processing
#..........................................................................
if(DEBUG): print("Preprocessing data...\n")
csv_data['EARN_DOWN'] = csv_data['EARN_DOWN'].astype(np.float16)
csv_data['EARN_UP'] = csv_data['EARN_UP'].astype(np.float16)
#CDX Index Technicals
csv_data['CDX_HY_momentum_10_30'] = \
(csv_data['CDX_HY'].rolling(window=10).mean() - \
csv_data['CDX_HY'].rolling(window=30).mean()) / \
csv_data['CDX_HY'].rolling(window=30).mean()
csv_data['CDX_HY_momentum_30D_MA'] = csv_data['CDX_HY'].rolling(window=20).mean()
csv_data['CDX_HY_30D_STD'] = csv_data['CDX_HY'].rolling(window=20).std()
csv_data['CDX_HY_upper_band'] = \
csv_data['CDX_HY_momentum_30D_MA'] + (csv_data['CDX_HY_30D_STD'] * 2)
csv_data['CDX_HY_lower_band'] = \
csv_data['CDX_HY_momentum_30D_MA'] - (csv_data['CDX_HY_30D_STD'] * 2)
csv_data['CDX_IG_momentum_10_30'] = \
(csv_data['CDX_IG'].rolling(window=10).mean() - \
csv_data['CDX_IG'].rolling(window=30).mean()) / \
csv_data['CDX_IG'].rolling(window=30).mean()
csv_data['CDX_IG_momentum_30D_MA'] = csv_data['CDX_IG'].rolling(window=20).mean()
csv_data['CDX_IG_30D_STD'] = csv_data['CDX_IG'].rolling(window=20).std()
csv_data['CDX_IG_upper_band'] = \
csv_data['CDX_IG_momentum_30D_MA'] + (csv_data['CDX_IG_30D_STD'] * 2)
csv_data['CDX_IG_lower_band'] = \
csv_data['CDX_IG_momentum_30D_MA'] - (csv_data['CDX_IG_30D_STD'] * 2)
# VIX Technicals
csv_data['VIX_INDEX_5_15'] = \
(csv_data['VIX_INDEX'].rolling(window=5).mean() - \
csv_data['VIX_INDEX'].rolling(window=15).mean()) / \
csv_data['VIX_INDEX'].rolling(window=15).mean()
csv_data['VIX_INDEX_10_30'] = \
(csv_data['VIX_INDEX'].rolling(window=10).mean() - \
csv_data['VIX_INDEX'].rolling(window=30).mean()) / \
csv_data['VIX_INDEX'].rolling(window=30).mean()
csv_data['VIX_INDEX_10_90'] = \
(csv_data['VIX_INDEX'].rolling(window=10).mean() - \
csv_data['VIX_INDEX'].rolling(window=90).mean()) / \
csv_data['VIX_INDEX'].rolling(window=90).mean()
csv_data['VIX_INDEX_30_90'] = \
(csv_data['VIX_INDEX'].rolling(window=30).mean() - \
csv_data['VIX_INDEX'].rolling(window=90).mean()) / \
csv_data['VIX_INDEX'].rolling(window=90).mean()
csv_data['VIX_30D_MA'] = csv_data['VIX_INDEX'].rolling(window=20).mean()
csv_data['VIX_30D_STD'] = csv_data['VIX_INDEX'].rolling(window=20).std()
csv_data['VIX_upper_band'] = \
csv_data['VIX_30D_MA'] + (csv_data['VIX_30D_STD'] * 2)
csv_data['VIX_lower_band'] = \
csv_data['VIX_30D_MA'] - (csv_data['VIX_30D_STD'] * 2)
#IG Index Technicals
csv_data['INDEX_IG_momentum_5_15'] = \
(csv_data['LUACTRUU_Index_OAS'].rolling(window=5).mean() - \
csv_data['LUACTRUU_Index_OAS'].rolling(window=15).mean()) / \
csv_data['LUACTRUU_Index_OAS'].rolling(window=15).mean()
csv_data['INDEX_IG_momentum_10_30'] = \
(csv_data['LUACTRUU_Index_OAS'].rolling(window=10).mean() - \
csv_data['LUACTRUU_Index_OAS'].rolling(window=30).mean()) / \
csv_data['LUACTRUU_Index_OAS'].rolling(window=30).mean()
csv_data['INDEX_IG_momentum_10_90'] = \
(csv_data['LUACTRUU_Index_OAS'].rolling(window=10).mean() - \
csv_data['LUACTRUU_Index_OAS'].rolling(window=90).mean()) / \
csv_data['LUACTRUU_Index_OAS'].rolling(window=90).mean()
csv_data['INDEX_IG_momentum_30_90'] = \
(csv_data['LUACTRUU_Index_OAS'].rolling(window=30).mean() - \
csv_data['LUACTRUU_Index_OAS'].rolling(window=90).mean()) / \
csv_data['LUACTRUU_Index_OAS'].rolling(window=90).mean()
csv_data['INDEX_IG_30D_MA'] = csv_data['LUACTRUU_Index_OAS'].rolling(window=20).mean()
csv_data['INDEX_IG_30D_STD'] = csv_data['LUACTRUU_Index_OAS'].rolling(window=20).std()
csv_data['INDEX_IG_upper_band'] = \
csv_data['INDEX_IG_30D_MA'] + (csv_data['INDEX_IG_30D_STD'] * 2)
csv_data['INDEX_IG_lower_band'] = \
csv_data['INDEX_IG_30D_MA'] - (csv_data['INDEX_IG_30D_STD'] * 2)
#HY Index Technicals
csv_data['INDEX_HY_momentum_5_15'] = \
(csv_data['LF98TRUU_Index_OAS'].rolling(window=5).mean() - \
csv_data['LF98TRUU_Index_OAS'].rolling(window=15).mean()) / \
csv_data['LF98TRUU_Index_OAS'].rolling(window=15).mean()
csv_data['INDEX_HY_momentum_10_30'] = \
(csv_data['LF98TRUU_Index_OAS'].rolling(window=10).mean() - \
csv_data['LF98TRUU_Index_OAS'].rolling(window=30).mean()) / \
csv_data['LF98TRUU_Index_OAS'].rolling(window=30).mean()
csv_data['INDEX_HY_momentum_10_90'] = \
(csv_data['LF98TRUU_Index_OAS'].rolling(window=10).mean() - \
csv_data['LF98TRUU_Index_OAS'].rolling(window=90).mean()) / \
csv_data['LF98TRUU_Index_OAS'].rolling(window=90).mean()
csv_data['INDEX_HY_momentum_30_90'] = \
(csv_data['LF98TRUU_Index_OAS'].rolling(window=30).mean() - \
csv_data['LF98TRUU_Index_OAS'].rolling(window=90).mean()) / \
csv_data['LF98TRUU_Index_OAS'].rolling(window=90).mean()
csv_data['INDEX_HY_30D_MA'] = csv_data['LF98TRUU_Index_OAS'].rolling(window=20).mean()
csv_data['INDEX_HY_30D_STD'] = csv_data['LF98TRUU_Index_OAS'].rolling(window=20).std()
csv_data['INDEX_HY_upper_band'] = \
csv_data['INDEX_HY_30D_MA'] + (csv_data['INDEX_HY_30D_STD'] * 2)
csv_data['INDEX_HY_lower_band'] = \
csv_data['INDEX_HY_30D_MA'] - (csv_data['INDEX_HY_30D_STD'] * 2)
print("Finished Adding New Features")
#..........................................................................
# Customize Dataset For Each Forecasting Period
#..........................................................................
# For Each look ahead period
for dh in tqdm(days_ahead):
sleep(0.1)
complete_data = csv_data.copy()
# Add the predictive target column for dh days ahead
forecast_name = '{0}_{1}D_Forecast'.format(target_column, dh)
if(DEBUG): print("Adding {0} ".format(forecast_name))
complete_data[forecast_name] = complete_data[target_column].shift(-dh)  # value dh rows ahead, assuming rows are in ascending date order
# Hold original data set
complete_data = complete_data.dropna()
Y_target = complete_data[forecast_name]
# Remove Target data from features
X = complete_data.copy()
X = X.drop([forecast_name, 'Dates'], axis=1)
# Records column names
X_feature_cols = X.columns
if features_over_time_CART is None:
features_over_time_CART = { feat : None for feat in X_feature_cols }
if features_over_time_XGB is None:
features_over_time_XGB = { feat : None for feat in X_feature_cols }
# Scale and add back to df with column names
X_scaled = scaler.fit_transform(X)
X_scaled = pd.DataFrame(X_scaled, columns=X_feature_cols)
data_dict[dh] = complete_data.copy()
#..........................................................................
# Build & Fit Models For Feature Selection
#..........................................................................
# Fit the models
model_CART = default_CART
model_XGB = default_XGB
if(DEBUG): print("Fitting CART: {0}...".format(forecast_name))
model_CART.fit(X_scaled, Y_target)
importances = model_CART.feature_importances_
feats = {}
for feature, importance in zip(X_feature_cols, importances):
if importance > 0.05:
feats[feature] = importance
if features_over_time_CART[feature] == None:
features_over_time_CART[feature] = [importance]
else:
features_over_time_CART[feature].append(importance)
feats = sorted(feats.items(), key=lambda x: x[1], reverse=True)
feats = dict(feats)
feature_CART[dh] = feats
if(DEBUG): print("Fitting XGBOOST: {0}...".format(forecast_name))
model_XGB.fit(X_scaled, Y_target)
importances = model_XGB.feature_importances_
feats = {}
for feature, importance in zip(X_feature_cols, importances):
if importance > 0.05:
feats[feature] = importance
if features_over_time_XGB[feature] == None:
features_over_time_XGB[feature] = [importance]
else:
features_over_time_XGB[feature].append(importance)
feats = sorted(feats.items(), key=lambda x: x[1], reverse=True)
feats = dict(feats)
feature_XGBOOST[dh] = feats
model_dict[forecast_name] = model_XGB
print('Done!')
feature_importance_CART(30)
feature_importance_XGBOOST(30)
predictive_power(30)
feature_imp_over_time_CART()
feature_imp_over_time_XGB()
###Output
_____no_output_____ |
013_mixed_features_rfecv.ipynb | ###Markdown
Feature Selection with Column TransformerCategorical and numerical variables sometimes need to be treated differently. We can loop back to the pipeline and column transformer setup to incorporate the newer feature selection tools.
###Code
df = pd.read_csv("data/mtcars.csv")
df.head()
###Output
_____no_output_____
###Markdown
Feature TypesFor this, I am going to treat number of cylinders, vs, am, number of gears, and carb as categorical. This is a reasonable, but not necessarily correct, interpretation of the scenario - a domain knowledge decision. The data doesn't matter much here, and the example is tiny, but the feature selection approach transfers as is to other datasets.
###Code
# Manually set categories as categories
df["cyl"] = df["cyl"].astype("category")
df["vs"] = df["vs"].astype("category")
df["am"] = df["am"].astype("category")
df["gear"] = df["gear"].astype("category")
df["carb"] = df["carb"].astype("category")
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 32 entries, 0 to 31
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 model 32 non-null object
1 mpg 32 non-null float64
2 cyl 32 non-null category
3 disp 32 non-null float64
4 hp 32 non-null int64
5 drat 32 non-null float64
6 wt 32 non-null float64
7 qsec 32 non-null float64
8 vs 32 non-null category
9 am 32 non-null category
10 gear 32 non-null category
11 carb 32 non-null category
dtypes: category(5), float64(5), int64(1), object(1)
memory usage: 2.7+ KB
###Markdown
Setup PipelineWe have two pipelines that are then combined in our column transformer. The numerical one uses RFECV to do feature selection on those variables. The categorical one uses SelectKBest to do feature selection there. Each subset is feature selected, then the two subsets are combined.
###Code
# Imports used in this cell (assumed to be loaded in earlier cells of the original notebook)
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.feature_selection import RFECV, SelectKBest
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
#Data Split
cat_feat = ["cyl", "vs", "am", "gear", "carb"]
num_feat = ["disp", "hp", "drat", "wt", "qsec"]
y = df["mpg"]
X = df.drop(columns={"mpg", "model"})
X_train, X_test, y_train, y_test = train_test_split(X, y)
#estimators
model = LinearRegression()
selector = Lasso()
# RFECV on Numerical Data
min_features_to_select = 1 # Minimum number of features to consider
rfecv = RFECV(
estimator=selector,
step=1,
cv=3,
min_features_to_select=min_features_to_select,
)
num_pipe = Pipeline([
("rfecc", rfecv)
])
# KBest on Categorical Data
kbest = SelectKBest(k=2)
cat_pipe = Pipeline([
("kbest",kbest)
])
#Pre-processing and Column Transformer
prepro = ColumnTransformer([
("cat", cat_pipe, cat_feat),
("num", num_pipe, num_feat)
])
pipe = Pipeline([("prepro", prepro),
("model", model)
])
pipe.fit(X_train, y_train)
print("Score:", pipe.score(X_test, y_test))
#print(pipe.get_params("steps"))
###Output
Score: 0.7810072770093061
{'memory': None, 'steps': [('prepro', ColumnTransformer(transformers=[('cat',
Pipeline(steps=[('kbest', SelectKBest(k=2))]),
['cyl', 'vs', 'am', 'gear', 'carb']),
('num',
Pipeline(steps=[('rfecc',
RFECV(cv=3,
estimator=Lasso()))]),
['disp', 'hp', 'drat', 'wt', 'qsec'])])), ('model', LinearRegression())], 'verbose': False, 'prepro': ColumnTransformer(transformers=[('cat',
Pipeline(steps=[('kbest', SelectKBest(k=2))]),
['cyl', 'vs', 'am', 'gear', 'carb']),
('num',
Pipeline(steps=[('rfecc',
RFECV(cv=3,
estimator=Lasso()))]),
['disp', 'hp', 'drat', 'wt', 'qsec'])]), 'model': LinearRegression(), 'prepro__n_jobs': None, 'prepro__remainder': 'drop', 'prepro__sparse_threshold': 0.3, 'prepro__transformer_weights': None, 'prepro__transformers': [('cat', Pipeline(steps=[('kbest', SelectKBest(k=2))]), ['cyl', 'vs', 'am', 'gear', 'carb']), ('num', Pipeline(steps=[('rfecc', RFECV(cv=3, estimator=Lasso()))]), ['disp', 'hp', 'drat', 'wt', 'qsec'])], 'prepro__verbose': False, 'prepro__cat': Pipeline(steps=[('kbest', SelectKBest(k=2))]), 'prepro__num': Pipeline(steps=[('rfecc', RFECV(cv=3, estimator=Lasso()))]), 'prepro__cat__memory': None, 'prepro__cat__steps': [('kbest', SelectKBest(k=2))], 'prepro__cat__verbose': False, 'prepro__cat__kbest': SelectKBest(k=2), 'prepro__cat__kbest__k': 2, 'prepro__cat__kbest__score_func': <function f_classif at 0x0000021910694700>, 'prepro__num__memory': None, 'prepro__num__steps': [('rfecc', RFECV(cv=3, estimator=Lasso()))], 'prepro__num__verbose': False, 'prepro__num__rfecc': RFECV(cv=3, estimator=Lasso()), 'prepro__num__rfecc__cv': 3, 'prepro__num__rfecc__estimator__alpha': 1.0, 'prepro__num__rfecc__estimator__copy_X': True, 'prepro__num__rfecc__estimator__fit_intercept': True, 'prepro__num__rfecc__estimator__max_iter': 1000, 'prepro__num__rfecc__estimator__normalize': False, 'prepro__num__rfecc__estimator__positive': False, 'prepro__num__rfecc__estimator__precompute': False, 'prepro__num__rfecc__estimator__random_state': None, 'prepro__num__rfecc__estimator__selection': 'cyclic', 'prepro__num__rfecc__estimator__tol': 0.0001, 'prepro__num__rfecc__estimator__warm_start': False, 'prepro__num__rfecc__estimator': Lasso(), 'prepro__num__rfecc__importance_getter': 'auto', 'prepro__num__rfecc__min_features_to_select': 1, 'prepro__num__rfecc__n_jobs': None, 'prepro__num__rfecc__scoring': None, 'prepro__num__rfecc__step': 1, 'prepro__num__rfecc__verbose': 0, 'model__copy_X': True, 'model__fit_intercept': True, 'model__n_jobs': None, 'model__normalize': False, 'model__positive': False}
###Markdown
Feature Selection with Column TransformerCategorical and numerical variables sometimes need to be treated differently. We can loop back to the pipeline and column transformer setup to incorporate the newer feature selection tools.
###Code
df = pd.read_csv("data/mtcars.csv")
df.head()
###Output
_____no_output_____
###Markdown
Feature TypesFor this, I am going to treat number of cylinders, vs, am, number of gears, and carb as categorical. This is a reasonable, but not necessarily correct, interpretation of the scenario - a domain knowledge decision. The data doesn't matter much here, and the example is tiny, but the feature selection approach transfers as is to other datasets.
###Code
# Manually set categories as categories
df["cyl"] = df["cyl"].astype("category")
df["vs"] = df["vs"].astype("category")
df["am"] = df["am"].astype("category")
df["gear"] = df["gear"].astype("category")
df["carb"] = df["carb"].astype("category")
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 32 entries, 0 to 31
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 model 32 non-null object
1 mpg 32 non-null float64
2 cyl 32 non-null category
3 disp 32 non-null float64
4 hp 32 non-null int64
5 drat 32 non-null float64
6 wt 32 non-null float64
7 qsec 32 non-null float64
8 vs 32 non-null category
9 am 32 non-null category
10 gear 32 non-null category
11 carb 32 non-null category
dtypes: category(5), float64(5), int64(1), object(1)
memory usage: 2.7+ KB
###Markdown
Setup PipelineWe have two pipelines that are then combined in our column transformer. The numerical one uses RFECV to do feature selection on those variables. The categorical one uses SelectKBest to do feature selection there. Each subset is feature selected, then the two subsets are combined.
###Code
#Data Split
cat_feat = ["cyl", "vs", "am", "gear", "carb"]
num_feat = ["disp", "hp", "drat", "wt", "qsec"]
y = df["mpg"]
X = df.drop(columns={"mpg", "model"})
X_train, X_test, y_train, y_test = train_test_split(X, y)
#estimators
model = LinearRegression()
selector = Lasso()
# RFECV on Numerical Data
min_features_to_select = 1 # Minimum number of features to consider
rfecv = RFECV(
estimator=selector,
step=1,
cv=3,
min_features_to_select=min_features_to_select,
)
num_pipe = Pipeline([
("rfecc", rfecv)
])
# KBest on Categorical Data
kbest = SelectKBest(k=2)
cat_pipe = Pipeline([
("kbest",kbest)
])
#Pre-processing and Column Transformer
prepro = ColumnTransformer([
("cat", cat_pipe, cat_feat),
("num", num_pipe, num_feat)
])
pipe = Pipeline([("prepro", prepro),
("model", model)
])
pipe.fit(X_train, y_train)
print("Score:", pipe.score(X_test, y_test))
#print(pipe.get_params("steps"))
###Output
Score: -0.16868065403596688
|
unfair-dice-problem/online-dice-game.ipynb | ###Markdown
 Playing with Dice Alice and BobAlice and Bob start with ten dollars each. They each roll a die, and whoever gets the bigger number takes one dollar from the other. Ties go to Alice. Repeat ten times: whoever ends up with the most money wins the round. Let's PlayWe will play on the computer, since we can't share in the era of Covid-19. Click the "Roll the dice" button below to roll the dice, see who wins. Repeat ten times.Click the "Reset" button to start again with ten dollars each.
###Code
import random
def roll_dice_two_players():
major_die = random.choice([1,2,3,4,5,6])
minor_die = random.choice([1,2,3,4,5,6])
print("Alice rolls ",major_die, "and Bob rolls", minor_die,".")
return (major_die >= minor_die)
from ipywidgets import widgets,Layout,Button,VBox,HBox
from IPython.display import display, Javascript, Markdown, HTML, clear_output
import pandas as pd
## We store the game results in a data frame, for convenience
df = pd.DataFrame({"Alice's Points": [ 0 for i in range(11)],
"Bob's Points": [ 0 for i in range(11)]},
index=[i for i in range(11)])
df.loc[0, "Alice's Points"] = 10
df.loc[0, "Bob's Points"] = 10
## We make a couple of buttons to roll the dice, and reset the game
style = {'description_width': 'initial'}
# Button widget
play_button = widgets.Button(
button_style='success',
description="Roll the dice",
layout=Layout(width='15%', height='30px'),
style=style
)
# Button widget
reset_button = widgets.Button(
button_style='danger',
description="Reset the game",
layout=Layout(width='15%', height='30px'),
style=style
)
turn_n = 0
def play_action(b):
global turn_n
clear_output()
display(tab2)
turn_n += 1
if (turn_n>10):
print("Game over! Who won this round?")
display(df[0:11])
else:
print("Turn #",turn_n)
if roll_dice_two_players():
print("Alice wins this roll. She gets one point from Bob.")
df.loc[turn_n, "Alice's Points"] = df.loc[turn_n-1,"Alice's Points"] +1
df.loc[turn_n, "Bob's Points"] = df.loc[turn_n-1,"Bob's Points"]-1
else:
print("Bob wins this roll. He gets one point from Alice.")
df.loc[turn_n, "Alice's Points"] = df.loc[turn_n-1,"Alice's Points"] -1
df.loc[turn_n, "Bob's Points"] = df.loc[turn_n-1,"Bob's Points"]+1
display(df[0:turn_n+1])
def reset_action(b):
global turn_n
clear_output()
display(tab2)
print("Alice and Bob start with ten points each.")
display(df[0:1])
turn_n=0
play_button.on_click( play_action )
reset_button.on_click( reset_action )
tab1 = HBox(children=[play_button,reset_button])
tab2 = widgets.Tab(children=[tab1])
tab2.set_title(0, 'Play')
display(tab2)
print("Alice and Bob start with ten points each.")
display(df[0:1])
# Connect widget to function - run subsequent cells
###Output
_____no_output_____
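###Markdown
Before changing the rules, it is worth counting why this version favors Alice: of the 36 equally likely pairs of rolls, Alice wins 21 of them (the 15 where her number is strictly larger plus the 6 ties), so she wins each roll with probability 21/36, roughly 58%. The short cell below is a minimal sketch that checks this count by brute force.
###Code
from itertools import product

# Enumerate all 36 equally likely (Alice, Bob) roll pairs.
outcomes = list(product(range(1, 7), repeat=2))

# With the "ties go to Alice" rule, Alice wins a roll when her number is >= Bob's.
alice_wins = sum(1 for a, b in outcomes if a >= b)

p_alice = alice_wins / len(outcomes)  # 21/36, about 0.583
###Output
_____no_output_____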
###Markdown
 Let's Try Again....Bob is losing the ties! So let's make things more fair and give him two dollars when he wins. Try this weighted game instead. Who wins?
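One way to reason about it before playing (a back-of-the-envelope sketch using the single-roll probability counted above): Alice still wins a roll with probability 21/36 and takes 1 dollar, while Bob wins with probability 15/36 and now takes 2 dollars, so on average Alice's money changes by (21/36)(1) + (15/36)(-2) = -1/4 dollar per roll. The cell below repeats this arithmetic and also computes the payout to Bob that would balance the game exactly.
###Code
# Single-roll win probabilities under the "ties go to Alice" rule.
p_alice = 21 / 36
p_bob = 15 / 36

# Expected change in Alice's money per roll when Bob takes 2 dollars per win.
expected_change_for_alice = p_alice * 1 + p_bob * (-2)  # -0.25

# Payout to Bob that would make the game fair: p_alice * 1 == p_bob * payout
fair_bob_payout = p_alice / p_bob  # 1.4
###Output
_____no_output_____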
###Code
## re-use the same data frame
df.loc[0, "Alice's Points"] = 10
df.loc[0, "Bob's Points"] = 10
## We make a couple of buttons to roll the dice, and reset the game
style = {'description_width': 'initial'}
# Button widget
play_button2 = widgets.Button(
button_style='success',
description="Roll the dice",
layout=Layout(width='15%', height='30px'),
style=style
)
# Button widget
reset_button2 = widgets.Button(
button_style='danger',
description="Reset the game",
layout=Layout(width='15%', height='30px'),
style=style
)
turn2 = 0
def play_action2(b):
global turn2
clear_output()
display(tab4)
turn2 += 1
if (turn2>10):
print("Game over! Who won this round?")
display(df[0:11])
else:
print("Turn #",turn2)
if roll_dice_two_players():
print("Alice wins this roll. She gets one point from Bob.")
df.loc[turn2, "Alice's Points"] = df.loc[turn2-1,"Alice's Points"] +1
df.loc[turn2, "Bob's Points"] = df.loc[turn2-1,"Bob's Points"]-1
else:
print("Bob wins this roll. He gets two points from Alice.")
df.loc[turn2, "Alice's Points"] = df.loc[turn2-1,"Alice's Points"] -2
df.loc[turn2, "Bob's Points"] = df.loc[turn2-1,"Bob's Points"]+2
display(df[0:turn2+1])
def reset_action2(b):
global turn2
clear_output()
display(tab4)
print("Alice and Bob start with ten points each.")
display(df[0:1])
turn2=0
play_button2.on_click( play_action2 )
reset_button2.on_click( reset_action2 )
tab3 = HBox(children=[play_button2,reset_button2])
tab4 = widgets.Tab(children=[tab3])
tab4.set_title(0, 'Play')
display(tab4)
print("Alice and Bob start with ten points each.")
display(df[0:1])
###Output
_____no_output_____ |
pandas/PandasMissingData.ipynb | ###Markdown
Dealing with Missing/Invalid Data
###Code
import numpy as np
import pandas as pd
df = pd.DataFrame({'A': [1, 2, np.nan], 'B': [5, np.nan, np.nan], 'C': [1, 2, 3]})
df
###Output
_____no_output_____
###Markdown
 Drop Rows or Columns with Missing Values
###Code
df.dropna()
df.dropna(axis=1)
###Output
_____no_output_____
###Markdown
be forgiving: allow all rows with at least two non-NA values...
###Code
df.dropna(thresh=2)
###Output
_____no_output_____
###Markdown
Replace missing values
###Code
df.fillna(0)
df['A'].fillna(df['A'].mean(), inplace=True)
df
###Output
_____no_output_____ |
nb/2018_Autumn/Lecture7-Optimization-Using-Python-ORTools.ipynb | ###Markdown
Lecture 7: Optimization Using Python - ORTools In this lecture / tutorial, we will learn how to solve some simple optimization problems using Python. This involves a brief introduction to the various optimization libraries available, such as ```scipy.optimize```, ```ortools```, and ```cplex```. We will solve an example optimization problem using each library.*** Learning goals- Obtain an overview of optimization problems that can be easily solved using Python.- Know about some of the popular optimization libraries which have easy to use Python interfaces.- Learn the syntax to solve some simple optimization problems using at least a couple of the libraries discussed in this tutorial.- Test your understanding by solving a few of the practice problems in each section. *** Prerequisites for running this notebookYou should have Python 3.6 installed on your computer, with all necessary packages installed.We recommend that you install Anaconda (Python 3.6 version) from the following links depending on your OS:- For Windows: https://www.anaconda.com/download/windows- For macOS: https://www.anaconda.com/download/macos- For Linux: https://www.anaconda.com/download/linux**If you are not using Anaconda, it is your responsibility to make sure that Python and all necessary packages are correctly installed and configured to be able to run this notebook.*****Once Anaconda is installed, open a **Terminal** (if you are using macOS / Linux), or **Anaconda Prompt** (if you are using Windows), and then create a new Python environment called **cme193**, by running the following command:> ```conda create -n cme193 python=3.6```Next, change to the newly created virtual environment by running the command:On Windows> ```activate cme193``` On macOS or Linux> ```source activate cme193```Next install all the necessary packages by running the following commands:> ```conda install nb_conda``` > ```conda install -c anaconda scipy``` > ```conda install -c conda-forge matplotlib``` > ```conda install -c anaconda networkx``` > ```pip install ortools``` Now navigate to the directory containing this .ipynb file, from inside the terminal, and start jupyter notebook by typing the following command:> ```jupyter notebook```You should now be able to launch the .ipynb file from the browser. For more information on jupyter notebooks, read the user documentation. *** Introduction to OR-ToolsIn this section we will learn how to solve some simple optimization problems using the ```OR-Tools``` package. ```OR-Tools``` is an open source software suite for optimization, available from Google. It is possible to configure ```OR-Tools``` to use commercial solvers like ```CPLEX``` or ```Gurobi```, or open-source solvers like ```SCIP``` or ```GLPK```, but this involves building ```OR-Tools``` from source, and we will not discuss this here as it is an advanced topic that is not suited for an introductory course on Python. Instead we will focus on using Google's ```GLOP``` and ```CP-SAT``` solver which is available upon following the installation instructions, as described above. More information on ```OR-Tools``` can be found at the OR-Tools homepage. The user guide can be found here, which contains extensive documentation and lots of examples.**Note: Detailed documentation only exists for C++ interface. The documentation for the Python interface is mostly work in progress. 
But the examples provided by ```OR-Tools``` are good enough to do many sophisticated tasks at an introductory level!**The main tools provided by ```OR-Tools```, that we need to be aware of are solvers for the following broad category of problems:- ```Constraint Programming```: The specialized ```CP-SAT``` solver (or the old ```original CP solver```) has been designed specifically to solve these kind of problems. The current recommendation is to always use the ```CP-SAT``` solver whenever possible. We will mostly stick to this guideline in this tutorial, with a few possible exceptions.- ```Linear and Mixed Integer Linear Programming```: These are the kind of problems that the specialized library ```GLOP``` is designed to solve. For solving Mixed Integer Linear Programming (MILP) problems, the default installer uses the Coin-or branch and cut (CBC) open-source solver.- ```Vehicle Routing```: This is a specialized library designed specifically for solving routing problems.- ```Graph Algorithms```: Specialized library for finding shortest paths, max flows, min-cost flows and linear sum assignment.- ```Bin Packing```: Specialized library for bin packing problems such as knapsack.We will learn to use the ```OR-Tools``` library by solving a few examples in each of the above categories.We can import the ```OR-Tools``` library as follows (henceforth to be referred to as ```ortools```). We also import some other modules we will use in this notebook.
###Code
import ortools
import scipy.optimize as sciopt
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
*** Linear programmingWe have already seen how to solve linear programming (LP) examples using ```scipy.optimize```. So we will keep this discussion concise, and reuse a couple of the examples from there, and solve it using ```ortools```. More information on solving LPs using ```ortools``` can be found here. Google's open-source linear solver library ```GLOP``` is specifically designed for solving linear programs.```GLOP``` requires that the LP be expressed in the following form$$\begin{equation}\begin{split}\text{minimize} \;\; & c^{T}x \\\text{subject to} \;\; & b_{lb} \leq Ax \leq b_{ub}\end{split}\end{equation}$$where $c, x \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$, and $b_{ub}, b_{lb} \in \mathbb{R}^{m}$. It should be noted that all LP can be put in this form, as for equality constraints we can set upper and lower bounds to be the same. If either upper or lower bound is not present, then one can set it to $-\infty$ and $\infty$ respectively, as shown in the examples below.Let us first import the python wrapper ```pywraplp``` for the underlying C++ solver using the following Python code.
###Code
from ortools.linear_solver import pywraplp
###Output
_____no_output_____
###Markdown
*** Example 1We consider an example that we have encountered previously on solving LPs using ```scipy.optimize```. The example is$$\begin{equation}\begin{split}\text{minimize} \;\; & x_1 + 2 x_2 - 3 x_3 \\\text{subject to} \;\; & |x_1| \leq 1 \\& |x_2| \leq 2 \\& |x_3| \leq 1 \\& x_1 + x_2 + x_3 = 1,\end{split}\end{equation}$$which we saw is equivalent to the following optimization problem$$\begin{equation}\begin{split}\text{minimize} \;\; & x_1 + 2 x_2 - 3 x_3 \\\text{subject to} \;\; & -1 \leq x_1 \leq 1 \\& -2 \leq x_2 \leq 2 \\& -1 \leq x_3 \leq 1 \\& x_1 + x_2 + x_3 = 1.\end{split}\end{equation}$$The basic steps involved in solving the LP with ```pywraplp``` are:- Declare the solver - the algorithm that solves the problem- Create the variables in the LP- Define the constraints- Define the objective function- Invoke the solver to solve the problem- Extract information about the solved problemWe demonstrate basic usage and implementation of these steps below using Python code.**Note: For each of the object handles we obtain below, there are a lot of methods for the object which can be accessed but not discussed in this tutorial. For example to access ```solver``` object's methods, just type ```solver.``` and hit ```tab``` on your keyboard in a Jupyter Notebook ```code cell```.** Declare the solverNotice that the argument ```pywraplp.Solver.GLOP_LINEAR_PROGRAMMING``` tells the solver to use ```GLOP```.
###Code
# Instantiate a Glop solver, naming it Example1
solver = pywraplp.Solver('Example1', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)
###Output
_____no_output_____
###Markdown
Create the variables in the LPThe basic syntax is to call the ```solver``` object's method ```NumVar``` as ```solver.NumVar(lower bound, upper bound, name)```.
###Code
# Create the variables and put bounds on them thus incorporating the inequality constraints
x1 = solver.NumVar(-1, 1, 'x1')
x2 = solver.NumVar(-2, 2, 'x2')
x3 = solver.NumVar(-1, 1, 'x3')
###Output
_____no_output_____
###Markdown
Define the constraintsThis is done in two steps for each constraint:- Set the bounds on the constraints using the syntax ```constraint = solver.Constraint(lower bound, upper bound)```.- Set the coefficients of the variables using the created ```constraint``` object's ```SetCoefficient``` method as ```constraint.SetCoefficient(variable, coefficient)```.
###Code
# Constraint 1: x1 + x2 + x3 = 1
constraint1 = solver.Constraint(1, 1)
constraint1.SetCoefficient(x1, 1)
constraint1.SetCoefficient(x2, 1)
constraint1.SetCoefficient(x3, 1)
###Output
_____no_output_____
###Markdown
Define the objectiveThis is done in two steps for the obejctive:- Create the object ```objective``` by calling the ```Objective``` method of the ```solver``` object as ```objective = solver.Objective()```.- Set the coefficients of each variable in the objective function using the created ```objective``` object's method ```SetCoefficient``` using the syntax ```constraint.SetCoefficient(variable, coefficient)```.- Set whether to maximize or minimize the objective as ```objective.SetMaximization()``` or ```objective.SetMinimization()``` respectively.
###Code
# Objective function: x1 + 2 * x2 - 3 * x3
objective = solver.Objective()
objective.SetCoefficient(x1, 1)
objective.SetCoefficient(x2, 2)
objective.SetCoefficient(x3, -3)
objective.SetMinimization()
###Output
_____no_output_____
###Markdown
Invoke the solver to solve the problemCall the ```Solve``` method of the ```solver``` object as ```solver.Solve()```.
###Code
# Solve the problem and verify the problem has an optimal solution
status = solver.Solve()
assert status == pywraplp.Solver.OPTIMAL
###Output
_____no_output_____
###Markdown
Extract information about the solved problemThe following Python code shows how to extract information from the ```solver``` object.
###Code
# Print information of the problem
print('Number of variables =', solver.NumVariables())
print('Number of constraints =', solver.NumConstraints())
# The value of each variable in the solution
print('Solution:')
print('x1 = ', x1.solution_value())
print('x2 = ', x2.solution_value())
print('x3 = ', x3.solution_value())
# The objective value of the solution
print('Optimal objective value =', objective.Value())
###Output
Number of variables = 3
Number of constraints = 1
Solution:
x1 = 1.0
x2 = -1.0
x3 = 1.0
Optimal objective value = -4.0
###Markdown
*** Example 2Consider the example from before in the LP tutorial section using ```scipy.optimize```$$\begin{equation}\begin{split}\text{minimize} \;\; & x_1 + 2 x_2 \\\text{subject to} \;\; & x_1 \leq 1 \\& 5 x_1 + x_2 \geq 0 \\& x_1 + x_2 = 3.\end{split}\end{equation}$$We demonstrate a full Python program to solve it below, especially how to handle the constraint $5 x_1 + x_2 \geq 0$.
###Code
"""
Python code for Example 2
"""
def lp_example2():
# Instantiate a Glop solver, naming it Example2
solver = pywraplp.Solver('Example2', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)
# Create the variables and put bounds on them
x1 = solver.NumVar(-solver.infinity(), 1, 'x1')
x2 = solver.NumVar(-solver.infinity(), solver.infinity(), 'x2')
# Constraint 1: x1 + x2 = 3
constraint1 = solver.Constraint(3, 3)
constraint1.SetCoefficient(x1, 1)
constraint1.SetCoefficient(x2, 1)
# Constraint 2: 5 * x1 + x2 >= 0
constraint2 = solver.Constraint(0, solver.infinity())
constraint2.SetCoefficient(x1, 5)
constraint2.SetCoefficient(x2, 1)
# Objective function: x1 + 2 * x2
objective = solver.Objective()
objective.SetCoefficient(x1, 1)
objective.SetCoefficient(x2, 2)
objective.SetMinimization()
# Solve the system
status = solver.Solve()
# Print information of the problem
print('Number of variables =', solver.NumVariables())
print('Number of constraints =', solver.NumConstraints())
# The value of each variable in the solution
print('Solution:')
print('x1 = ', x1.solution_value())
print('x2 = ', x2.solution_value())
# The objective value of the solution
print('Optimal objective value =', objective.Value())
if __name__ == "__main__":
lp_example2()
###Output
Number of variables = 2
Number of constraints = 2
Solution:
x1 = 1.0
x2 = 2.0
Optimal objective value = 5.0
###Markdown
*** Exercise 1Study the Stigler diet example solved using ```GLOP``` on the documentation page. Change the problem in your own way, and modify the code to solve your modified problem.
###Code
# Write your code here
###Output
_____no_output_____
###Markdown
*** Mixed-integer linear programmingWhile solving combinatorial optimization problems, one often encounters situations where some of the variables are only allowed to be integers. If such a problem can be represented as an optimization problem with a cost function that is linear in the variables of the problem, and some (but not all) of the variables are constrained to be integers, then it is called a **Mixed Integer Linear Program (MILP)**. If all of the variables are constrained to be integers then it is called an **Integer Linear Program (ILP)**.```ortools``` provides us several options to solve these kinds of problems:- Mixed integer programming (MIP) solver- Constraint programming (CP) solver- Min cost flow solverOf these, the first two are very general and can be used to solve many different MILP problems, while the min cost flow solver can only solve structured problems representable as network flow problems. There are some key differences between all three of them. In this section we focus on the MIP solver, while the other two are discussed in later sections.The MIP solver that is provided by ```ortools``` is just an interface to the Coin-or branch and cut (CBC) open-source solver. While CBC allows the capability to also solve **Mixed Integer Quadratic Programming (MIQP)** problems, currently this capability is not wrapped by ```ortools```.The basic MILP problem type that we can solve using ```ortools``` is$$\begin{equation}\begin{split}\text{minimize} \;\; & c^{T}x \\\text{subject to} \;\; & b_{lb} \leq Ax \leq b_{ub}\end{split}\end{equation}$$where $x$ can be partitioned into two sets of variables $x = (x_1, x_2)$, with $x_1$ constrained to be integers, and $x_2$ not constrained to be integers. As in the case of LPs, note that any MILP can be put in this form; in particular for equality constraints we just set the upper and lower bounds to be the same. More information on solving MILPs with ```ortools``` can be found here.We illustrate the process of solving such problems using ```ortools``` with a few examples. The python wrapper ```pywraplp``` that we will use was already imported before. *** Example 1Consider the following optimization problem over the variables $x_1, x_2, x_3, x_4, x_5$$$\begin{equation}\begin{split}\text{minimize} \;\; & x_1 + 2 x_2 - 3 x_3 + x_4 \\\text{subject to} \;\; & 3 x_2 + x_4 + x_5 \leq 2 \\& -1 \leq x_1 + x_3 + x_4 \leq 1 \\& x_1 + 2 x_2 + x_3 = 10 \\& x_1, x_2 \in \{1,2\} \\& x_5 \in \{0,1,2\}.\end{split}\end{equation}$$The basic steps involved in solving this MILP with ```pywraplp``` are analogous to the LP case:- Declare the solver - the algorithm that solves the problem- Create the variables in the MILP- Define the constraints- Define the objective function- Invoke the solver to solve the problem- Extract information about the solved problemWe demonstrate basic usage and implementation of these steps below using Python code. Declare the solverNotice that the argument ```pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING``` tells the solver to use the MIP solver.
###Code
# Instantiate a mixed-integer solver, naming it Example1
solver = pywraplp.Solver('Example1', pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING)
###Output
_____no_output_____
###Markdown
Create the variables in the MILPThe basic syntax is to call the ```solver``` object's method ```NumVar``` as ```solver.NumVar(lower bound, upper bound, name)``` for defining non-integer variables, while for integer variables we need to call the ```solver``` object's method ```IntVar``` as ```solver.IntVar(lower bound, upper bound, name)```.
###Code
# Create the non-integer variables
x3 = solver.NumVar(-solver.infinity(), solver.infinity(), 'x3')
x4 = solver.NumVar(-solver.infinity(), solver.infinity(), 'x4')
# Create the integer variables and put bounds for the ones applicable
x1 = solver.IntVar(1, 2, 'x1')
x2 = solver.IntVar(1, 2, 'x2')
x5 = solver.IntVar(0, 2, 'x5')
###Output
_____no_output_____
###Markdown
Define the constraintsThis is done exactly as in the case of LP.
###Code
# Constraint 1: 3 * x2 + x4 + x5 <= 2
constraint1 = solver.Constraint(-solver.infinity(), 2)
constraint1.SetCoefficient(x2, 3)
constraint1.SetCoefficient(x4, 1)
constraint1.SetCoefficient(x5, 1)
# Constraint 2: -1 <= x1 + x3 + x4 <= 1
constraint2 = solver.Constraint(-1, 1)
constraint2.SetCoefficient(x1, 1)
constraint2.SetCoefficient(x3, 1)
constraint2.SetCoefficient(x4, 1)
# Constraint 3: x1 + 2 * x2 + x3 = 10
constraint3 = solver.Constraint(10, 10)
constraint3.SetCoefficient(x1, 1)
constraint3.SetCoefficient(x2, 2)
constraint3.SetCoefficient(x3, 1)
###Output
_____no_output_____
###Markdown
Define the objectiveThis is done exactly as in the case of LP.
###Code
# Objective function: x1 + 2 * x2 - 3 * x3 + x4
objective = solver.Objective()
objective.SetCoefficient(x1, 1)
objective.SetCoefficient(x2, 2)
objective.SetCoefficient(x3, -3)
objective.SetCoefficient(x4, 1)
objective.SetMinimization()
###Output
_____no_output_____
###Markdown
Invoke the solver to solve the problemCall the ```Solve``` method of the ```solver``` object as ```solver.Solve()```.
###Code
# Solve the problem and verify that an optimal solution has been found
status = solver.Solve()
assert status == pywraplp.Solver.OPTIMAL
###Output
_____no_output_____
###Markdown
Extract information about the solved problemThe following Python code shows how to extract information from the ```solver``` object.
###Code
# Print information of the problem
print('Number of variables =', solver.NumVariables())
print('Number of constraints =', solver.NumConstraints())
# The value of each variable in the solution
print('Solution:')
print('x1 = ', x1.solution_value())
print('x2 = ', x2.solution_value())
print('x3 = ', x3.solution_value())
print('x4 = ', x4.solution_value())
print('x5 = ', x5.solution_value())
# The objective value of the solution
print('Optimal objective value =', objective.Value())
###Output
Number of variables = 5
Number of constraints = 3
Solution:
x1 = 1.0
x2 = 1.0
x3 = 7.000000000000001
x4 = -9.000000000000002
x5 = 0.0
Optimal objective value = -27.0
###Markdown
*** Example 2: Weighted Vertex CoverThe **weighted vertex cover** is a classic problem in combinatorial optimization. The basic setting is that we have a simple graph $G(V,E)$, which means that is it is an undirected graph with no multiple edges and with no loops, and is equipped with a cost function defined on the set of vertices $c : V \rightarrow \mathbb{R}$. The goal is to find a subset of vertices $S \subset V$ that **cover** all the edges in $E$, such that the total sum of the cost function for the selected vertices is minimized. An edge $e \in E$ is said to be covered by $S$ if and only if there exists a vertex $v \in S$ that is an end point of $e$. Clearly this problem is feasible, as choosing $S=V$ covers all the edges in $E$.The goals of the weighted vertex cover problem can be expressed by an integer (binary) optimization problem. Let us assign a binary variable $x_v \in \{0,1\}$ for every vertex $v \in V$, with $x_v = 1$ if and only if $v \in S$, and $0$ otherwise. Then the goals of the weighted vertex cover problem can be expressed as the following ILP:$$\begin{equation}\begin{split}\text{minimize} \;\; & \sum_{v \in V} c(v) \; x_v \\\text{subject to} \;\; & x_u + x_v \geq 1, \;\; \forall \;\; \{u,v\} \in E \\& x_v \in \{0,1\}, \;\; \forall \;\; v \in V.\end{split}\end{equation}$$The first constraint says that if $\{u,v\}$ is an edge, then it must be covered, while the second constraint says that each vertex is either selected in the set $S$ or not.Let us take a concrete example. Let $V = \{1, 2, 3, 4, 5\}$, and $E = \{ \{1, 2\}, \{1, 3\}, \{2, 3\}, \{3, 4\}, \{1, 5\} \}$. Let the cost function be $c(1) = 1, \; c(2) = 20, \; c(3) = -2.5, \; c(4) = 0, \; \text{and} \; c(5) = 2$.We first visualize the graph using the ```NetworkX``` package which we have already imported before. More information on ```NetworkX``` can be found on its documentation page.
###Code
%matplotlib inline
# Function for visualizing graph
def graph_visualize(V, E, valmin=0, valmax=1, values=None):
"""
V: list of vertices
E: list of edges (each edge is a tuple of vertices)
"""
# Create an empty graph object
G = nx.Graph()
# Add the vertices to G
G.add_nodes_from(V)
# Add the edges to G
G.add_edges_from(E)
# Draw the graph
if values is None:
values = len(G.nodes()) * [0.5]
nx.draw_circular(G, with_labels=True, cmap=plt.get_cmap('Reds'), node_color=values, vmin=valmin, vmax=valmax)
else:
nx.draw_circular(G, with_labels=True, cmap=plt.get_cmap('Reds'), node_color=values, vmin=valmin, vmax=valmax)
if __name__ == "__main__":
# Create vertex list
V = [1, 2, 3, 4, 5]
# Create edge list
E = [(1, 2), (1, 3), (2, 3), (3, 4), (1, 5)]
# Create list of node values
values = [1, 20, -2.5, 0, 2]
# Print vertex and edge information
print("List of vertices:", V)
print("List of edges:", E)
print("List of node values:", values)
# Visualize the graph
print("\nDrawing the graph")
graph_visualize(V, E)
###Output
List of vertices: [1, 2, 3, 4, 5]
List of edges: [(1, 2), (1, 3), (2, 3), (3, 4), (1, 5)]
List of node values: [1, 20, -2.5, 0, 2]
Drawing the graph
###Markdown
The following Python code solves the weighted vertex cover problem using ```ortools```.
###Code
from ortools.linear_solver import pywraplp
def weighted_vertex_cover():
# Represent the problem data
V = [1, 2, 3, 4, 5]
E = [(1, 2), (1, 3), (2, 3), (3, 4), (1, 5)]
# Print the problem data
print("List of vertices in the graph:", V)
print("List of edges in the graph:", E)
# Instantiate a mixed-integer solver, naming it Weighted-Vertex-Cover
solver = pywraplp.Solver('Weighted-Vertex-Cover', pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING)
# Define integer binary variables.
x1 = solver.IntVar(0, 1, '1')
x2 = solver.IntVar(0, 1, '2')
x3 = solver.IntVar(0, 1, '3')
x4 = solver.IntVar(0, 1, '4')
x5 = solver.IntVar(0, 1, '5')
# Constraint 1 (edge (1,2) is covered): x1 + x2 >= 1
constraint1 = solver.Constraint(1, solver.infinity())
constraint1.SetCoefficient(x1, 1)
constraint1.SetCoefficient(x2, 1)
# Constraint 2 (edge (1,3) is covered): x1 + x3 >= 1
constraint2 = solver.Constraint(1, solver.infinity())
constraint2.SetCoefficient(x1, 1)
constraint2.SetCoefficient(x3, 1)
# Constraint 3 (edge (2,3) is covered): x2 + x3 >= 1
constraint3 = solver.Constraint(1, solver.infinity())
constraint3.SetCoefficient(x2, 1)
constraint3.SetCoefficient(x3, 1)
# Constraint 4 (edge (3,4) is covered): x3 + x4 >= 1
constraint4 = solver.Constraint(1, solver.infinity())
constraint4.SetCoefficient(x3, 1)
constraint4.SetCoefficient(x4, 1)
# Constraint 5 (edge (1,5) is covered): x1 + x5 >= 1
constraint5 = solver.Constraint(1, solver.infinity())
constraint5.SetCoefficient(x1, 1)
constraint5.SetCoefficient(x5, 1)
# Minimize 1 * x1 + 20 * x2 - 2.5 * x3 + 0 * x4 + 2 * x5
objective = solver.Objective()
objective.SetCoefficient(x1, 1)
objective.SetCoefficient(x2, 20)
objective.SetCoefficient(x3, -2.5)
objective.SetCoefficient(x4, 0)
objective.SetCoefficient(x5, 2)
objective.SetMinimization()
# Solve the problem and verify the problem has an optimal solution
result_status = solver.Solve()
assert result_status == pywraplp.Solver.OPTIMAL
# Print the selected subsets in the optimal solution, and extract the optimal value of all variables
print("\n")
print("The selected vertices are:")
values_opt = []
for item in ['1', '2', '3', '4', '5']:
var = solver.LookupVariable(item)
values_opt.append(var.solution_value())
if var.solution_value() == 1:
print(item)
# Display solution
graph_visualize(V, E)
plt.title("Original Graph", fontsize=16)
plt.show()
graph_visualize(V, E, 0, 2, values_opt)
plt.title("Vertex Cover", fontsize=16)
plt.show()
if __name__ == "__main__":
weighted_vertex_cover()
###Output
List of vertices in the graph: [1, 2, 3, 4, 5]
List of edges in the graph: [(1, 2), (1, 3), (2, 3), (3, 4), (1, 5)]
The selected vertices are:
1
3
4
###Markdown
*** Example 3: Weighted Set CoverThe **weighted set cover** problem is another classic problem in combinatorial optimization. Suppose that we are given a finite set $\mathcal{S}$ of elements, and another subset $\mathcal{T}$ of the power set of $\mathcal{S}$, i.e. $\mathcal{T} \subset 2^{\mathcal{S}}$, with the property that $\bigcup\limits_{t \in \mathcal{T}} t = \mathcal{S}$. There is also a cost function $w : \mathcal{T} \rightarrow \mathbb{R}$. The goal is to find a subset of $\mathcal{T}$ that covers all the elements in $\mathcal{S}$, such that the total sum of the costs of the selected elements of $\mathcal{T}$ is minimized.Formally our goals can be expressed as an integer (binary) optimization problem. Assign a binary variable $x_t \in \{0,1\}$ for every element $t \in \mathcal{T}$, which will be referred to as **subset indicator variables**. Also for all $t \in \mathcal{T}$, and $s \in \mathcal{S}$, we define $c_{ts} = 1$ if $s \in t$, and $c_{ts} = 0$ if $s \notin t$. Then our weighted set cover problem goals can be expressed by the following ILP:$$\begin{equation}\begin{split}\text{minimize} \;\; & \sum_{t \in \mathcal{T}} w(t) \; x_t \\\text{subject to} \;\; & \sum_{t \in \mathcal{T}} c_{ts} x_t \geq 1, \;\; \forall \;\; s \in \mathcal{S} \\& x_t \in \{0,1\}, \;\; \forall \;\; t \in \mathcal{T}.\end{split}\end{equation}$$The first constraint expresses the fact that each element $s \in \mathcal{S}$ is covered by at least one element $t \in \mathcal{T}$, which is the **set cover** constraint, from which the problem derives its name.Let us take a concrete example. Suppose $\mathcal{S} = \{1,2,3,4,5,6,7\}$, and let $\mathcal{T} = \{a,b,c,d,e\}$, where $$\begin{equation}\begin{split}a &= \{1,2,3\} \\b &= \{3,4,6\} \\c &= \{4,5\} \\d &= \{2,5,6,7\}.\end{split}\end{equation}$$We will represent $c_{ts}$ using a cost matrix $C$ defined below, with rows indexing elements of $\mathcal{T}$, and columns indexing elements of $\mathcal{S}$,$$C = \begin{bmatrix}1 & 1 & 1 & 0 & 0 & 0 & 0 \\0 & 0 & 1 & 1 & 0 & 1 & 0 \\0 & 0 & 0 & 1 & 1 & 0 & 0 \\0 & 1 & 0 & 0 & 1 & 1 & 1\end{bmatrix}\;\;.$$Also let the cost function $w$ be the constant function $w(t) = 1$, for all $t \in \mathcal{T}$, which corresponds to the **original set cover** problem, that seeks to minimize the number of selected subsets that cover the set $\mathcal{S}$.Here is the full Python code that solves the problem using ```ortools```.
###Code
from ortools.linear_solver import pywraplp
def weighted_set_cover():
# Represent the problem data
S = [1, 2, 3, 4, 5, 6, 7]
T = {'a':{1, 2, 3}, 'b':{3, 4, 6}, 'c':{4, 5}, 'd':{2, 5, 6, 7}}
# Print the problem
print("The set S of elements:")
for item in S:
print(item)
print("\n")
print("The set T of subsets of S:")
for key, val in T.items():
print(key, ":", val)
# Instantiate a mixed-integer solver, naming it Weighted-Set-Cover
solver = pywraplp.Solver('Weighted-Set-Cover', pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING)
# Define integer binary variables.
xa = solver.IntVar(0, 1, 'a')
xb = solver.IntVar(0, 1, 'b')
xc = solver.IntVar(0, 1, 'c')
xd = solver.IntVar(0, 1, 'd')
# Constraint 1: xa >= 1
constraint1 = solver.Constraint(1, solver.infinity())
constraint1.SetCoefficient(xa, 1)
# Constraint 2: xa + xd >= 1
constraint2 = solver.Constraint(1, solver.infinity())
constraint2.SetCoefficient(xa, 1)
constraint2.SetCoefficient(xd, 1)
# Constraint 3: xa + xb >= 1
constraint3 = solver.Constraint(1, solver.infinity())
constraint3.SetCoefficient(xa, 1)
constraint3.SetCoefficient(xb, 1)
# Constraint 4: xb + xc >= 1
constraint4 = solver.Constraint(1, solver.infinity())
constraint4.SetCoefficient(xb, 1)
constraint4.SetCoefficient(xc, 1)
# Constraint 5: xc + xd >= 1
constraint5 = solver.Constraint(1, solver.infinity())
constraint5.SetCoefficient(xc, 1)
constraint5.SetCoefficient(xd, 1)
# Constraint 6: xb + xd >= 1
constraint6 = solver.Constraint(1, solver.infinity())
constraint6.SetCoefficient(xb, 1)
constraint6.SetCoefficient(xd, 1)
# Constraint 7: xd >= 1
constraint7 = solver.Constraint(1, solver.infinity())
constraint7.SetCoefficient(xd, 1)
# Minimize xa + xb + xc + xd
objective = solver.Objective()
objective.SetCoefficient(xa, 1)
objective.SetCoefficient(xb, 1)
objective.SetCoefficient(xc, 1)
objective.SetCoefficient(xd, 1)
objective.SetMinimization()
# Solve the problem and verify the problem has an optimal solution
result_status = solver.Solve()
assert result_status == pywraplp.Solver.OPTIMAL
# Print the selected subsets in the optimal solution
print("\n")
print("The selected subsets are:")
for item in ['a', 'b', 'c', 'd']:
var = solver.LookupVariable(item)
if var.solution_value() == 1:
print(item, ":", T[item])
if __name__ == "__main__":
weighted_set_cover()
###Output
The set S of elements:
1
2
3
4
5
6
7
The set T of subsets of S:
a : {1, 2, 3}
b : {3, 4, 6}
c : {4, 5}
d : {2, 5, 6, 7}
The selected subsets are:
a : {1, 2, 3}
b : {3, 4, 6}
d : {2, 5, 6, 7}
###Markdown
*** Bin packing: multidimensional knapsackBin packing refers to the problem of finding a set of objects to pack into bins. The objects have **volumes**, and each bin has a **capacity**, which is the total volume the container can hold. We discuss the multidimensional knapsack problem here, which is arguably the most famous bin packing problem. More information on solving bin packing problems using ```ortools``` can be found here. Multidimensional knapsackThe setting involves a finite set of objects $\mathcal{S}$, each with $n + 1$ attributes. The first $n$ attributes are **volumes** (or some other property) of each object along $n$ different dimensions, and the last attribute is the **value** of each object. There is a **knapsack** (or container) which also has $n$ attributes associated with it (called **capacities**), and correspond to the total volume of objects that can fit along each dimension in the knapsack. The objective of the problem is to choose objects from $\mathcal{S}$ to put into the knapsack, such that the total value of all the objects is as large as possible, and the total volume of the selected objects do not exceed the capacity of the knapsack along any dimension.Mathematically the knapsack problem is equivalent to an ILP. We briefly mention this formulation here, as it shows the combinatorial structure of the problem. Assign a binary variable $x_s \in \{0,1\}$ for each element $s \in \mathcal{S}$. Let $v_s$ denote the value, and $c_{d,s}$ denote the volume along dimension $d$, of each element $s \in \mathcal{S}$. Also let $C_d$ denote the capacity of the knapsack along dimension $d$. Then the goals of multidimensional knapsack are expressed by the following optimization problem:$$\begin{equation}\begin{split}\text{minimize} \;\; & \sum_{s \in \mathcal{S}} x_s v_s \\\text{subject to} \;\; & \sum_{s \in \mathcal{S}} c_{d,s} x_s \leq C_d, \;\; \forall \;\; 1 \leq d \leq n \\& x_s \in \{0,1\}, \;\; \forall \;\; s \in \mathcal{S}.\end{split}\end{equation}$$While this problem can be certainly solved using the techniques developed in the last section on MILP, ```ortools``` provides a specialized solver called ```KnapsackSolver``` to solve this problem. The reader can find more details about using the solver on the documentation page. One thing to note is that ```KnapsackSolver``` only accepts **non-negative integer values** for values, volumes and capacities.We demonstrate how to use the solver using a simple example. But let us first import the python wrapper ```pywrapknapsack_solver``` for the underlying C++ solver using the following Python code.
###Code
from ortools.algorithms import pywrapknapsack_solver
###Output
_____no_output_____
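###Markdown
 As an aside (not part of the original text), the ILP formulation above can also be written down directly with ```pywraplp``` and the CBC backend, in the same style as the earlier MILP examples. The following is a minimal sketch on a small, made-up 2-dimensional instance; the data and names used here are purely illustrative.
###Code
# Illustrative sketch: the multidimensional knapsack ILP modeled with pywraplp.
# The data below (3 objects, 2 dimensions) is made up purely for illustration.
from ortools.linear_solver import pywraplp

def knapsack_as_ilp():
    values = [10, 7, 5]                 # v_s
    volumes = [[4, 3, 2], [5, 2, 3]]    # c_{d,s} for dimensions d = 1, 2
    capacities = [7, 8]                 # C_d
    num_objects = len(values)
    solver = pywraplp.Solver('KnapsackILP', pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING)
    # Binary variable x_s for each object s
    x = [solver.IntVar(0, 1, 'x' + str(s)) for s in range(num_objects)]
    # Capacity constraint along each dimension: sum_s c_{d,s} x_s <= C_d
    for d, cap in enumerate(capacities):
        constraint = solver.Constraint(-solver.infinity(), cap)
        for s in range(num_objects):
            constraint.SetCoefficient(x[s], volumes[d][s])
    # Objective: maximize sum_s v_s x_s
    objective = solver.Objective()
    for s in range(num_objects):
        objective.SetCoefficient(x[s], values[s])
    objective.SetMaximization()
    assert solver.Solve() == pywraplp.Solver.OPTIMAL
    print("Maximum knapsack value:", objective.Value())
    print("Selected objects:", [s for s in range(num_objects) if x[s].solution_value() == 1])

if __name__ == '__main__':
    knapsack_as_ilp()
###Output
_____no_output_____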
###Markdown
 Example 1Consider an instance of multidimensional knapsack in 2 dimensions ($n = 2$), where $\mathcal{S} = \{a, b, c, d, e\}$, and the knapsack capacities are $C_1 = 10, C_2 = 15$. Let the values of the objects be given by the following table:| $s$  | $v_s$ ||------|-------|| $a$  | $2$   || $b$  | $10$  || $c$  | $5$   || $d$  | $4$   || $e$  | $3$   |Let the volumes of the objects be given by the following table:| $s$  | $c_{1,s}$ | $c_{2,s}$ ||------|-------|-------|| $a$  | $1$   | $3$   || $b$  | $6$   | $6$   || $c$  | $3$   | $8$   || $d$  | $2$   | $1$   || $e$  | $5$   | $4$   |The problem can then be solved using ```ortools``` by following the steps as shown below. **Declare the values, volumes, and capacities**The ```KnapsackSolver``` expects the data to be in a certain format. The values should be a list of the same length as the number of objects, while the capacities should be a list of length equal to the number of dimensions. The volumes of the objects should be a list of lists. The outer list needs to have the same length as the number of dimensions, while each inner list must have the same length as the number of objects.
###Code
# Store the name of elements (this is not needed for the solver, but useful to display results)
objects = ['a', 'b', 'c', 'd', 'e']
# Declare the values, volumes and capacities
values = [2, 10, 5, 4, 3]
volumes = [[1, 6, 3, 2, 5], [3, 6, 8, 1, 4]]
capacities = [10, 15]
###Output
_____no_output_____
###Markdown
**Create an instance of ```KnapsackSolver```**The next step is to create an instance of ```KnapsackSolver```. It is important to use ```KNAPSACK_MULTIDIMENSION_BRANCH_AND_BOUND_SOLVER``` as shown below. Other options include ```KNAPSACK_DYNAMIC_PROGRAMMING_SOLVER```, but it can only solve 1 dimensional knapsacks.
###Code
# Create the solver, name it Example1
solver = pywrapknapsack_solver.KnapsackSolver(
pywrapknapsack_solver.KnapsackSolver.KNAPSACK_MULTIDIMENSION_BRANCH_AND_BOUND_SOLVER,
'Example1'
)
###Output
_____no_output_____
###Markdown
**Initialize the solver with the data**The next step feeds the problem data into the solver.
###Code
# Initialize the solver
solver.Init(values, volumes, capacities)
###Output
_____no_output_____
###Markdown
**Solve the problem**
###Code
# Solve the problem
computed_value = solver.Solve()
###Output
_____no_output_____
###Markdown
**Display the results**We can display the results as follows.
###Code
# Display results
packed_items = [objects[x] for x in range(0, len(objects)) if solver.BestSolutionContains(x)]
packed_volumes = [
[volumes[0][x] for x in range(0, len(objects)) if solver.BestSolutionContains(x)],
[volumes[1][x] for x in range(0, len(objects)) if solver.BestSolutionContains(x)]
]
total_volumes = [sum(packed_volumes[0]), sum(packed_volumes[1])]
print("The maximum possible knapsack value is", computed_value)
print("Packed items: ", packed_items)
print("Total volumes: ", total_volumes)
###Output
The maximum possible knapsack value is 16
Packed items: ['a', 'b', 'd']
Total volumes: [9, 10]
###Markdown
Here is the full Python code in one place.
###Code
from ortools.algorithms import pywrapknapsack_solver
def multiknapsack(objects, values, volumes, capacities, name="multiknapsack"):
# Create the solver, name it Example1
solver = pywrapknapsack_solver.KnapsackSolver(
pywrapknapsack_solver.KnapsackSolver.KNAPSACK_MULTIDIMENSION_BRANCH_AND_BOUND_SOLVER,
name
)
# Initialize the solver
solver.Init(values, volumes, capacities)
# Solve the problem
computed_value = solver.Solve()
# Display results
packed_items = [objects[x] for x in range(0, len(objects)) if solver.BestSolutionContains(x)]
packed_volumes = [
[volumes[0][x] for x in range(0, len(objects)) if solver.BestSolutionContains(x)],
[volumes[1][x] for x in range(0, len(objects)) if solver.BestSolutionContains(x)]
]
total_volumes = [sum(packed_volumes[0]), sum(packed_volumes[1])]
print("The maximum possible knapsack value is", computed_value)
print("Packed items: ", packed_items)
print("Total volumes: ", total_volumes)
if __name__ == '__main__':
# Store the name of elements (this is not needed for the solver, but useful to display results)
objects = ['a', 'b', 'c', 'd', 'e']
# Declare the values, volumes and capacities
values = [2, 10, 5, 4, 3]
volumes = [[1, 6, 3, 2, 5], [3, 6, 8, 1, 4]]
capacities = [10, 15]
# Solve
multiknapsack(objects=objects, values=values, volumes=volumes, capacities=capacities, name="Example1")
###Output
The maximum possible knapsack value is 16
Packed items: ['a', 'b', 'd']
Total volumes: [9, 10]
###Markdown
*** Exercise 2Consider the 1 dimensional knapsack problem with the following data.
###Code
# Store the name of elements
objects = ['a', 'b', 'c', 'd', 'e']
# Declare the values, volumes and capacities
values = [2, 10, 5, 4, 3]
volumes = [[1, 6, 3, 2, 5]]
capacities = [10]
###Output
_____no_output_____
###Markdown
Solve the problem in three different ways:- Using ```pywrapknapsack_solver.KnapsackSolver.KNAPSACK_MULTIDIMENSION_BRANCH_AND_BOUND_SOLVER```.- Using ```pywrapknapsack_solver.KnapsackSolver.KNAPSACK_DYNAMIC_PROGRAMMING_SOLVER```.- Using ```pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING```.Time the different solvers.
###Code
# Write your code here
###Output
_____no_output_____
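###Markdown
 A small, generic timing helper (not part of the original text, and not a solution to the exercise) may be handy for the comparison; it simply wraps an arbitrary solve call with ```time.perf_counter```. The names used here are illustrative.
###Code
# Illustrative helper for the exercise above: time an arbitrary solve call.
import time

def timed(solve_fn, label):
    start = time.perf_counter()
    result = solve_fn()
    print(label, "took", time.perf_counter() - start, "seconds")
    return result

# Hypothetical usage: timed(lambda: solver.Solve(), "KnapsackSolver")
###Output
_____no_output_____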
###Markdown
*** Constraint programming**Constraint programming (CP)** or **constraint optimization** refers to the task of finding feasible solutions to a set of arbitrary constraints, and such problems arise in many science and engineering applications. Thus CP is distinctly different from optimization problems; in fact in most cases, a CP may not even have an objective function, and the goal is to simply narrow down a large set of possible solutions to a more manageable subset by adding constraints to the problem. In fact, CP may arise as a subproblem in the solution process of an optimization problem. It should be noted however that any optimization problem can be solved this way by simply checking the objective function value at all the feasible solutions, and choosing the one that is the best. However this may be highly inefficient and hence is not recommended in most cases.```ortools``` provides two libraries for solving CP problems:- ```CP-SAT``` solver (SAT stands for **satisfiability**)- ```original CP``` solver.The recommended CP solver from Google is the ```CP-SAT``` solver, as it is much faster than the ```original CP``` solver, and we will strictly focus on the former in this lecture. More information on the two solvers, and some solved examples using each of them can be found by starting on the documentation page of the solvers. We will demonstrate the usage and syntax for ```CP-SAT``` using some examples. Most of the examples that we have chosen to illustrate are slight variants of the examples provided by ```ortools```, so that the reader can find more extensive discussion of these problems from online resources. This reference page also contains extensive documentation.It should be noted that the ```CP-SAT``` solver only works on integer data. However in most cases CP problems with non-integer data can be converted to CP problems with integer data using the techniques described for example here.The python wrappers ```cp_model``` and ```pywrapcp``` provide access to the underlying C++ solver for the ```CP-SAT``` solver and the ```original CP``` solver respectively. Let us import them, although we will not be using ```pywrapcp```.
###Code
from ortools.sat.python import cp_model
from ortools.constraint_solver import pywrapcp
###Output
_____no_output_____
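###Markdown
 Before working through the examples, here is a small aside (not from the original text) on the integer-data requirement mentioned above: a constraint with decimal coefficients can usually be rescaled into an equivalent constraint with integer coefficients. The numbers below are made up for illustration.
###Code
# Illustrative sketch: rescaling non-integer data for CP-SAT.
# Suppose we want the constraint 0.5 * u + 1.25 * w <= 3.75 with u, w in {0, ..., 5}.
# Multiplying through by 100 gives the equivalent integer constraint 50 u + 125 w <= 375.
demo_model = cp_model.CpModel()
demo_solver = cp_model.CpSolver()
u = demo_model.NewIntVar(0, 5, "u")
w = demo_model.NewIntVar(0, 5, "w")
demo_model.Add(50 * u + 125 * w <= 375)
demo_model.Maximize(u + w)  # a throwaway objective, just to have something to solve
assert demo_solver.Solve(demo_model) == cp_model.OPTIMAL
print("u =", demo_solver.Value(u), ", w =", demo_solver.Value(w))
###Output
_____no_output_____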
###Markdown
*** Exercise 3It is very instructive to read through the code implementing the Python interface ```cp_model```, as described here:https://github.com/google/or-tools/blob/master/ortools/sat/python/cp_model.py. *** Example 1We work through the first example in detail to understand the basic syntax of ```CP-SAT```.Consider the following feasibility problem:$$\begin{equation}\begin{split}\text{find} \;\; & x, y \\\text{subject to} \;\; & x \neq y \\& x + y \leq 4 \\& 1 \leq 2x + y \leq 5 \\& x, y \in \{0,1,2,3\}.\end{split}\end{equation}$$The steps to model this problem using ```CP-SAT``` and solve it are explained below. Instantiate the solverWe need to create two objects - the ```model``` and the ```solver```, the first of which is used to model the problem, such as all the data and the constraints, while the second one solves the problem.
###Code
# Create the model and solver
model = cp_model.CpModel()
solver = cp_model.CpSolver()
###Output
_____no_output_____
###Markdown
Create the variablesWe then create the variables involved in the problem. Here we only need ```NewIntVar``` for the problem.**Note: Many other kinds of variables are available. You can see them by browsing the list after typing ```model.``` and pressing ```tab```.**
###Code
# Create the variables
num_values = 4
x = model.NewIntVar(0, num_values - 1, "x")
y = model.NewIntVar(0, num_values - 1, "y")
###Output
_____no_output_____
###Markdown
Create the constraintsThe next step is to create the constraints of the problem.
###Code
# Create the constraints
# Constraint 1: x != y
constraint1 = model.Add(x != y)
# Constraint 2: x + y <= 4
constraint2 = model.Add(x + y <= 4)
# Constraint 3: 1 <= 2x + y <= 5
constraint3 = model.AddLinearConstraint(terms=[(x, 2), (y, 1)], lb=1, ub=5)
###Output
_____no_output_____
###Markdown
 Create the solution printerThe ```CP-SAT``` solver displays the results using a **solution printer**. The solution printer is a callback defined in a Python class, which we pass to the solver as shown below, and the callback is executed each time a new solution is found. It needs to be implemented as a class inherited from ```CpSolverSolutionCallback```. It is highly recommended that you check the code here. The method ```NewSolution``` must be implemented; it gets called every time the solver finds a new solution.
###Code
# Create the SolutionPrinter class
class SolutionPrinter(cp_model.CpSolverSolutionCallback):
"""
Print intermediate solutions.
"""
def __init__(self, variables):
self.__variables = variables
self.__solution_count = 0
def NewSolution(self):
self.__solution_count += 1
for v in self.__variables:
print('%s = %i,' % (v, self.Value(v)), end = ' ')
print()
def SolutionCount(self):
return self.__solution_count
# Create a solution printer
solution_printer = SolutionPrinter([x, y])
###Output
_____no_output_____
###Markdown
Call the solverWe can finally solve the problem by calling the solver. Here we will search for all solutions by using the method ```SearchForAllSolutions```.
###Code
# Call the solver, verify the solution and print results
print("Solving the CP problem...\n")
print("Printing all solutions...")
status = solver.SearchForAllSolutions(model, solution_printer)
assert status == cp_model.FEASIBLE
print('\nNumber of solutions found: %i' % solution_printer.SolutionCount())
###Output
Solving the CP problem...
Printing all solutions...
x = 1, y = 0,
x = 2, y = 0,
x = 2, y = 1,
x = 0, y = 1,
x = 0, y = 2,
x = 0, y = 3,
x = 1, y = 2,
x = 1, y = 3,
Number of solutions found: 8
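###Markdown
 As a quick aside (not in the original text), if only a single feasible assignment is needed rather than the full enumeration, ```solver.Solve(model)``` can be called on the same model built above:
###Code
# Illustrative: find just one feasible solution to the model built above,
# instead of enumerating all of them with SearchForAllSolutions.
status = solver.Solve(model)
assert status in (cp_model.FEASIBLE, cp_model.OPTIMAL)
print("One feasible solution: x =", solver.Value(x), ", y =", solver.Value(y))
###Output
_____no_output_____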
###Markdown
*** Example 2This example illustrates how to implement ```AND``` and ```OR``` constraints. Consider the following feasibility problem:$$\begin{equation}\begin{split}\text{find} \;\; & x, y, z \\\text{subject to} \;\; & (x \neq y) \;\&\; (y \neq z) \;\&\; (z \neq x) \\& (x + y + z \leq 4) \text{ or } (1 \leq 2x + y \leq 5) \\& x, y, z \in \{0,1,2,3\}.\end{split}\end{equation}$$The following Python code then solves this problem using **channeling constraints**, as described here.
###Code
# Solution to Example 2
def cp_example2():
###############################################
# Create the model and solver
model = cp_model.CpModel()
solver = cp_model.CpSolver()
###############################################
# Create the variables
num_values = 4
x = model.NewIntVar(0, num_values - 1, "x")
y = model.NewIntVar(0, num_values - 1, "y")
z = model.NewIntVar(0, num_values - 1, "z")
# Create boolean variable needed to implement the OR constraint
b = model.NewBoolVar("b")
###############################################
# Create the constraints
#----------------------------------------------
# Constraint 1: (x != y) & (y != z) & (z != x)
model.AddAllDifferent([x, y, z])
#----------------------------------------------
# Constraint 2: (x + y + z <= 4) or (1 <= 2x + y <= 5)
model.Add(x + y + z <= 4).OnlyEnforceIf(b)
model.Add(x + y + z > 4).OnlyEnforceIf(b.Not())
model.AddLinearConstraint(terms=[(x, 2), (y, 1)], lb=1, ub=5).OnlyEnforceIf(b.Not())
###############################################
# Create a solution printer
solution_printer = SolutionPrinter([x, y, z, b])
    # Call the solver, verify the solution and print results
print("Solving the CP problem...\n")
print("Printing all solutions...")
status = solver.SearchForAllSolutions(model, solution_printer)
assert status == cp_model.FEASIBLE
print('\nNumber of solutions found: %i' % solution_printer.SolutionCount())
if __name__ == "__main__":
cp_example2()
###Output
Solving the CP problem...
Printing all solutions...
x = 0, y = 2, z = 3, b = 0,
x = 1, y = 2, z = 3, b = 0,
x = 2, y = 0, z = 3, b = 0,
x = 2, y = 1, z = 3, b = 0,
x = 2, y = 1, z = 0, b = 1,
x = 3, y = 1, z = 0, b = 1,
x = 1, y = 3, z = 2, b = 0,
x = 0, y = 3, z = 2, b = 0,
x = 0, y = 3, z = 1, b = 1,
x = 1, y = 3, z = 0, b = 1,
x = 1, y = 2, z = 0, b = 1,
x = 0, y = 1, z = 2, b = 1,
x = 0, y = 2, z = 1, b = 1,
x = 0, y = 1, z = 3, b = 1,
x = 1, y = 0, z = 3, b = 1,
x = 1, y = 0, z = 2, b = 1,
x = 2, y = 0, z = 1, b = 1,
x = 3, y = 0, z = 1, b = 1,
Number of solutions found: 18
###Markdown
*** Example 3: SAT problems with constraintsFind a solution to the following **conjunctive normal form** (CNF) involving binary $\{0,1\}$ variables:$$(x_1 \lor x_2 \lor x_4) \land (\neg x_3 \lor x_5 \lor x_4) \land (x_2 \lor \neg x_4 \lor x_6) \land (x_1 \lor x_4 \lor x_5)$$subject to the additional constraint that$$x_2 \implies (x_5 \lor x_3) \land x_6.$$This is a specific instance of a 3-SAT problem with constraints. To solve this problem we need to use **reified constraints**. The Python code is given below.
###Code
# Solution to Example 3
def cp_example3():
###############################################
# Create the model and solver
model = cp_model.CpModel()
solver = cp_model.CpSolver()
###############################################
# Create the boolean variables
x1 = model.NewBoolVar("x1")
x2 = model.NewBoolVar("x2")
x3 = model.NewBoolVar("x3")
x4 = model.NewBoolVar("x4")
x5 = model.NewBoolVar("x5")
x6 = model.NewBoolVar("x6")
###############################################
# Create the constraints
#----------------------------------------------
# Constraint 1: 3-SAT clause
model.AddBoolOr([x1, x2, x4])
model.AddBoolOr([x3.Not(), x5, x4])
model.AddBoolOr([x2, x4.Not(), x6])
model.AddBoolOr([x1, x4, x5])
#----------------------------------------------
# Constraint 2: x2 => (x5 OR x3) & x6
# Create extra boolean variables to implement constraints
y1 = model.NewBoolVar("y1")
y2 = model.NewBoolVar("y2")
model.AddBoolOr([x5, x3]).OnlyEnforceIf(y1)
model.AddBoolAnd([x5.Not(), x3.Not()]).OnlyEnforceIf(y1.Not())
model.AddBoolAnd([y1, x6]).OnlyEnforceIf(y2)
model.AddBoolOr([y1.Not(), x6.Not()]).OnlyEnforceIf(y2.Not())
model.AddImplication(x2, y2)
"""
#---------------DIFFERENT WAY------------------
# Constraint 2: x2 => (x5 OR x3) & x6
# Create extra boolean variables to implement constraints
y1 = model.NewBoolVar("y1")
model.AddBoolOr([x5, x3]).OnlyEnforceIf(y1)
model.AddBoolAnd([x5.Not(), x3.Not()]).OnlyEnforceIf(y1.Not())
model.AddImplication(x2, y1)
model.AddImplication(x2, x6)
"""
###############################################
# Create a solution printer
solution_printer = SolutionPrinter([x1, x2, x3, x4, x5, x6])
    # Call the solver, verify the solution and print results
print("Solving the CP problem...\n")
print("Printing all solutions...")
status = solver.SearchForAllSolutions(model, solution_printer)
assert status == cp_model.FEASIBLE
print('\nNumber of solutions found: %i' % solution_printer.SolutionCount())
if __name__ == "__main__":
cp_example3()
###Output
Solving the CP problem...
Printing all solutions...
x1 = 0, x2 = 0, x3 = 0, x4 = 1, x5 = 0, x6 = 1,
x1 = 1, x2 = 0, x3 = 0, x4 = 1, x5 = 0, x6 = 1,
x1 = 1, x2 = 0, x3 = 1, x4 = 1, x5 = 0, x6 = 1,
x1 = 0, x2 = 0, x3 = 1, x4 = 1, x5 = 0, x6 = 1,
x1 = 0, x2 = 0, x3 = 1, x4 = 1, x5 = 1, x6 = 1,
x1 = 1, x2 = 0, x3 = 1, x4 = 1, x5 = 1, x6 = 1,
x1 = 1, x2 = 0, x3 = 0, x4 = 1, x5 = 1, x6 = 1,
x1 = 0, x2 = 0, x3 = 0, x4 = 1, x5 = 1, x6 = 1,
x1 = 0, x2 = 1, x3 = 0, x4 = 1, x5 = 1, x6 = 1,
x1 = 1, x2 = 1, x3 = 0, x4 = 1, x5 = 1, x6 = 1,
x1 = 1, x2 = 1, x3 = 1, x4 = 1, x5 = 1, x6 = 1,
x1 = 0, x2 = 1, x3 = 1, x4 = 1, x5 = 1, x6 = 1,
x1 = 0, x2 = 1, x3 = 1, x4 = 1, x5 = 0, x6 = 1,
x1 = 1, x2 = 1, x3 = 1, x4 = 1, x5 = 0, x6 = 1,
x1 = 1, x2 = 0, x3 = 1, x4 = 0, x5 = 1, x6 = 1,
x1 = 1, x2 = 0, x3 = 0, x4 = 0, x5 = 1, x6 = 1,
x1 = 1, x2 = 0, x3 = 0, x4 = 0, x5 = 0, x6 = 1,
x1 = 1, x2 = 1, x3 = 0, x4 = 0, x5 = 1, x6 = 1,
x1 = 0, x2 = 1, x3 = 0, x4 = 0, x5 = 1, x6 = 1,
x1 = 0, x2 = 1, x3 = 1, x4 = 0, x5 = 1, x6 = 1,
x1 = 1, x2 = 1, x3 = 1, x4 = 0, x5 = 1, x6 = 1,
x1 = 1, x2 = 0, x3 = 1, x4 = 0, x5 = 1, x6 = 0,
x1 = 1, x2 = 0, x3 = 0, x4 = 0, x5 = 1, x6 = 0,
x1 = 1, x2 = 0, x3 = 0, x4 = 0, x5 = 0, x6 = 0,
Number of solutions found: 24
###Markdown
*** Example 4: Integer optimizationCP can also be used to solve integer optimization problems in many cases. Consider the ILP:$$\begin{equation}\begin{split}\text{maximize} \;\; & x_1 + 2 x_2 - 3 x_3 + x_4 \\\text{subject to} \;\; & 3 x_2 + x_4 + x_5 \leq 10 \\& x_1 + x_3 + x_4 \leq 15 \\& x_1, x_2, x_3 \in \{1,2,3,4\} \\& x_4, x_5 \in \{0,1,2,3,4\}.\end{split}\end{equation}$$Here is the Python code that solves this problem using the ```CP-SAT``` solver.
###Code
# Solution to Example 4
def cp_example4():
# Create the model and solver
model = cp_model.CpModel()
solver = cp_model.CpSolver()
# Create the variables
x1 = model.NewIntVar(1, 4, "x1")
x2 = model.NewIntVar(1, 4, "x2")
x3 = model.NewIntVar(1, 4, "x3")
x4 = model.NewIntVar(0, 4, "x4")
x5 = model.NewIntVar(0, 4, "x5")
# Create the constraints
# Constraint 1: 3 * x2 + x4 + x5 <= 10
model.AddLinearConstraint(terms=[(x2, 3), (x4, 1), (x5, 1)], lb=0, ub=10)
# Constraint 2: x1 + x3 + x4 <= 15
model.AddSumConstraint([x1, x3, x4], lb=0, ub=15)
# Create the objective: x1 + 2 * x2 - 3 * x3 + x4
model.Maximize(x1 + 2 * x2 - 3 * x3 + x4)
# Call the solver
print("Solving the CP problem...\n")
status = solver.Solve(model)
# Verify solution and print result
assert status == cp_model.OPTIMAL
print("Optimal objective value:", solver.ObjectiveValue())
for var in [x1, x2, x3, x4, x5]:
print(var.name(), "=", solver.Value(var))
if __name__ == "__main__":
cp_example4()
###Output
Solving the CP problem...
Optimal objective value: 9.0
x1 = 4
x2 = 2
x3 = 1
x4 = 4
x5 = 0
###Markdown
 *** Exercise 4: N-queens problemStudy the N-queens problem as described on the documentation page. The problem is solved there using the ```original CP solver```. Solve it using the ```CP-SAT``` solver.
###Code
# Write your code here
###Output
_____no_output_____
###Markdown
 *** Scheduling problemsMany optimization problems involve assigning resources to perform a set of specific tasks at different times; such problems arise frequently in the manufacturing industries, as well as in the transportation and delivery sector. These problems are broadly classified under the umbrella of **scheduling problems**. Typically, the goal of such problems is to find a schedule that minimizes the total amount of time (or cost) required to complete all the tasks.The ```CP-SAT``` solver is capable of solving many such problems. Two specific problems that are fairly widely applicable are:- The job shop problem.- The employee scheduling problem.The documentation of ```ortools``` guides you on how to solve these problems using the ```original CP solver```. In this tutorial, we will cover how to solve the **job shop problem** using ```CP-SAT```. You will be asked to do the same for the **employee scheduling problem** as an exercise. It is good to know both these problems, as they are extremely common. *** Example 1: The job shop problemLet us first describe the basic setup of the job shop problem. We have a finite set of jobs $J_1, \dots, J_N$, and each job has associated to it a finite set of tasks. So for the $i^{\text{th}}$ job $J_i$, we have the tasks $T_i = \{t_{i,1}, \dots, t_{i,m_i}\}$, and all the $m_i$ need not be the same, for all $1 \leq i \leq N$. We also have a finite set $P$ of processors on which the tasks can be executed, $P = \{p_1,\dots,p_M\}$. Next for each $i$, $1 \leq i \leq N$, we have a map $f_i: T_i \rightarrow P$, from the set of tasks to the set of processors and another function $g_i: T_i \rightarrow \mathbb{N}$, which signifies the amount of time taken by each task to complete.The basic task in the job shop scheduling problem is to create a plan of execution of all the tasks on the given processors, subject to the following constraints:- For each job $J_i$, if $1 \leq j_1 < j_2 \leq m_i$, then task $t_{i,j_2}$ can only start after task $t_{i,j_1}$ has finished completely.- Every task $t_{i,j}$ must necessarily execute on the processor $f_i(t_{i,j})$.- Only one task can be executed on a processor at any given time.The goal then is to minimize the total time of completion for all the jobs.**Note: In general the times taken by the tasks need not be integers, but we need this to be able to solve this problem using ```CP-SAT```.**In order to solve this problem, let us assign a non-negative integer variable $x_{i,j}$ that denotes the start time for task $t_{i,j}$, for all $i,j$. We will assume that each task has been assigned to the correct processor, and so there will not be a need to implement this constraint explicitly. The goals of the job shop scheduling problem can be expressed as the following combinatorial **minimax** problem:$$\begin{equation}\begin{split}\text{minimize} \;\; & \max \{x_{i,j} + g_i(t_{i,j}) : 1 \leq i \leq N, 1 \leq j \leq m_i \} \\\text{subject to} \;\; & x_{i,j} + g_i(t_{i,j}) \leq x_{i,j+1}, \; \forall \; 1 \leq i \leq N, 1 \leq j \leq m_i - 1 \\& \left( x_{i_1,j_1} + g_{i_1}(t_{i_1,j_1}) \leq x_{i_2,j_2} \right) \; \text{or} \; \left( x_{i_2,j_2} + g_{i_2}(t_{i_2,j_2}) \leq x_{i_1,j_1} \right), \; \forall \; i_1, i_2, j_1, j_2, \text{ such that } \; f_{i_1}(t_{i_1,j_1}) = f_{i_2}(t_{i_2,j_2}) \\& x_{i,j} \in \mathbb{N}, \; \forall \; 1 \leq i \leq N, 1 \leq j \leq m_i.\end{split}\end{equation}$$We take the specific instance of the job shop scheduling problem described here. 
The following Python code then solves this problem using ```CP-SAT```.
###Code
# Solution to the job shop problem
def job_shop_cpsat(machines, processing_times):
"""
Machines and processing times need to have the same shape
"""
###############################################
# Get number of jobs
num_jobs = len(machines)
# Get processor ids and number of different processors
procs = []
for job in machines:
for task_proc in job:
if task_proc not in procs:
procs.append(task_proc)
procs.sort()
num_procs = len(procs)
###############################################
# Create the model and solver
model = cp_model.CpModel()
solver = cp_model.CpSolver()
###############################################
# Get an upper bound on maximum time needed to solve the problem
ub = 0
for job in processing_times:
for task_time in job:
ub += task_time
###############################################
# Create start, end, and interval variables (one for each task)
variables_start = [[] for _ in range(num_jobs)]
variables_end = [[] for _ in range(num_jobs)]
variables_interval = [[] for _ in range(num_jobs)]
for i, job in enumerate(processing_times):
for j, task_time in enumerate(job):
start = model.NewIntVar(0, ub, "x" + str(i) + str(j))
end = model.NewIntVar(0, ub, "y" + str(i) + str(j))
interval = model.NewIntervalVar(start, task_time, end, "i" + str(i) + str(j))
variables_start[i].append(start)
variables_end[i].append(end)
variables_interval[i].append(interval)
# Create a list of interval variables by processors
variables_proc = [[] for _ in range(num_procs)]
for i, job in enumerate(machines):
for j, task_proc in enumerate(job):
variables_proc[task_proc].append(variables_interval[i][j])
###############################################
# Create the constraints
# Constraint 1 (no task for a job can start before all preceding tasks of the same job finish)
for i, job in enumerate(variables_start):
num_tasks = len(job)
for j in range(1, num_tasks):
model.Add(variables_start[i][j] >= variables_end[i][j - 1])
# Constraint 2 (each processor runs one task at any given time)
for interval_variables in variables_proc:
model.AddNoOverlap(interval_variables)
###############################################
# Create the objective
obj = model.NewIntVar(0, ub, "obj")
model.AddMaxEquality(obj, [var_end for job in variables_end for var_end in job])
model.Minimize(obj)
###############################################
# Call the solver
print("Solving the CP problem...\n")
status = solver.Solve(model)
###############################################
# Verify solution and print schedule
assert status == cp_model.OPTIMAL
print("Time needed to finish all jobs in the optimal schedule:", solver.ObjectiveValue())
print("\n")
print("Job schedule:")
for i, job in enumerate(variables_start):
print("\nJob " + str(i))
for j, _ in enumerate(job):
print(
"Task " + str(j) + ": start =",
solver.Value(variables_start[i][j]),
", end =",
solver.Value(variables_end[i][j]),
", proc =",
machines[i][j]
)
if __name__ == "__main__":
# Create data for the problem (machines and processing times need to have the same shape)
machines = [[0, 1, 2], [0, 2, 1], [1, 2]]
processing_times = [[3, 2, 2], [2, 1, 4], [4, 3]]
# Solve the problem
job_shop_cpsat(machines=machines, processing_times=processing_times)
###Output
Solving the CP problem...
Time needed to finish all jobs in the optimal schedule: 11.0
Job schedule:
Job 0
Task 0: start = 0 , end = 3 , proc = 0
Task 1: start = 4 , end = 6 , proc = 1
Task 2: start = 6 , end = 8 , proc = 2
Job 1
Task 0: start = 3 , end = 5 , proc = 0
Task 1: start = 5 , end = 6 , proc = 2
Task 2: start = 6 , end = 10 , proc = 1
Job 2
Task 0: start = 0 , end = 4 , proc = 1
Task 1: start = 8 , end = 11 , proc = 2
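###Markdown
 As an aside (not part of the original text), the pairwise "or" constraints in the mathematical formulation above can also be written explicitly with an auxiliary boolean literal, instead of interval variables and ```AddNoOverlap```. The following is a minimal sketch for two tasks that share a processor; the durations and horizon are made up for illustration.
###Code
# Illustrative sketch: the disjunction (end_1 <= start_2) or (end_2 <= start_1)
# for two tasks on the same processor, written with OnlyEnforceIf instead of
# interval variables and AddNoOverlap.
demo_model = cp_model.CpModel()
horizon = 20                       # assumed upper bound on all start times
dur1, dur2 = 3, 4                  # assumed task durations
start1 = demo_model.NewIntVar(0, horizon, "start1")
start2 = demo_model.NewIntVar(0, horizon, "start2")
b = demo_model.NewBoolVar("task1_before_task2")
demo_model.Add(start1 + dur1 <= start2).OnlyEnforceIf(b)
demo_model.Add(start2 + dur2 <= start1).OnlyEnforceIf(b.Not())
###Output
_____no_output_____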
###Markdown
*** Exercise 5: Employee schedulingStudy the employee scheduling problem as described here. Solve it using the ```CP-SAT``` solver.
###Code
# Write your code here
###Output
_____no_output_____
###Markdown
 *** Graph algorithmsMany problems in combinatorial optimization arise from graph theory; some examples are network flow problems, finding Hamiltonian paths, finding shortest paths, and the traveling salesman problem, just to name a few. ```ortools``` provides two libraries - the ```algorithms``` library, and the ```graph``` library, that solve a great majority of these problems. The reader is encouraged to look up these libraries:- ```algorithms```: https://developers.google.com/optimization/reference/algorithms/.- ```graph```: https://developers.google.com/optimization/reference/graph/.In this tutorial we will look at the **network flow** class of problems. Generally speaking, network flow problems involve transporting goods or materials across a network. The network could for example consist of cities, and roads or railways connecting them. In that case, the network can be represented as a graph, with the cities being represented by **vertices** and road / railway connections between cities being represented by **edges** or **arcs**. Each arc also comes with a capacity constraint representing the maximum amount of goods that can be transported across it in unit time.We will look at two flow problems that arise quite frequently - the **maximum flow** problem, and the **minimum cost flow** problem, and will solve them using ```ortools```. More information on network flows and how to solve them using ```ortools``` can be found here.But first we import the graph library in Python.
###Code
from ortools.graph import pywrapgraph
###Output
_____no_output_____
###Markdown
 *** Maximum flow problemThe maximum flow problem is described by a **directed** graph $G(V,E)$. An edge $e := (u,v), \; e \in E$, will denote a directed edge starting at the vertex $u \in V$ and ending at the vertex $v \in V$, and in addition each edge also has a capacity, which is only required to be positive for the maximum flow problem; however, we will also need the capacities to be positive integers to be able to solve the problem using ```ortools```, so we will assume that this is the case going forward. In addition, there are two special vertices in the graph called the **source** and the **sink**, which are denoted $s$ and $t$ respectively. A **valid flow** is an assignment of non-negative integers to the directed edges that satisfies the following constraints:- For every edge, the assigned flow does not exceed the capacity of the edge.- At every vertex, except $s$ and $t$, the net flow of the incident edges, i.e. the sum of flows of incoming edges minus the sum of flows of outgoing edges, must be zero.The objective of the maximum flow problem is to find a valid flow assignment that maximizes the net outflow from $s$, or alternatively the net inflow into $t$. Both of them are equivalent, and a proof of this fact can be found in any introductory graph theory textbook.Let us take a specific problem - in fact we will use the example problem described in the documentation page.The data for the problem is given by the list of tuples: ```(start_node, end_node, capacity)```. The first two entities in each tuple denote the start and end vertices respectively of a directed edge of a graph, and the third entity denotes the capacity.
###Code
# Data for the problem
data = [(0, 1, 20), (0, 2, 30), (0, 3, 10), (1, 2, 40), (1, 4, 30), (2, 3, 10), (2, 4, 20), (3, 2, 5), (3, 4, 20)]
# Declare source and sink
s = 0
t = 4
###Output
_____no_output_____
###Markdown
```ortools``` provides the method ```pywrapgraph.SimpleMaxFlow``` to solve this problem. The following Python code illustrates how to use it.
###Code
# Create lists for start, end, and capacities
start_nodes = []
end_nodes = []
capacities = []
for item in data:
start_nodes.append(item[0])
end_nodes.append(item[1])
capacities.append(item[2])
# Instantiate a SimpleMaxFlow solver
max_flow = pywrapgraph.SimpleMaxFlow()
# Add each arc
for i in range(0, len(start_nodes)):
max_flow.AddArcWithCapacity(start_nodes[i], end_nodes[i], capacities[i])
# Solve the maximum flow problem and check for optimality
status = max_flow.Solve(s, t)
assert status == max_flow.OPTIMAL
# Display results
print('Max flow:', max_flow.OptimalFlow())
print('')
print(' Arc Flow / Capacity')
for i in range(max_flow.NumArcs()):
print('%1s -> %1s %3s / %3s' % (max_flow.Tail(i), max_flow.Head(i), max_flow.Flow(i), max_flow.Capacity(i)))
print('Source side min-cut:', max_flow.GetSourceSideMinCut())
print('Sink side min-cut:', max_flow.GetSinkSideMinCut())
###Output
Max flow: 60
Arc Flow / Capacity
0 -> 1 20 / 20
0 -> 2 30 / 30
0 -> 3 10 / 10
1 -> 2 0 / 40
1 -> 4 20 / 30
2 -> 3 10 / 10
2 -> 4 20 / 20
3 -> 2 0 / 5
3 -> 4 20 / 20
Source side min-cut: [0]
Sink side min-cut: [4, 1]
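###Markdown
 As a small sanity check (not part of the original text), the conservation constraint described above can be verified directly from the solved ```max_flow``` object of the previous cell, for example at node 1:
###Code
# Illustrative check: at an intermediate node, total inflow equals total outflow.
node = 1
inflow = sum(max_flow.Flow(i) for i in range(max_flow.NumArcs()) if max_flow.Head(i) == node)
outflow = sum(max_flow.Flow(i) for i in range(max_flow.NumArcs()) if max_flow.Tail(i) == node)
assert inflow == outflow
print("Net flow at node", node, "is", inflow - outflow)
###Output
_____no_output_____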
###Markdown
 *** Exercise 6- Run some simple experiments by choosing different nodes as $s$ and $t$ in the above example.- Change the problem data as you wish and find the maximum flow solution. *** Minimum cost flow problemThe minimum cost flow problem is an optimization problem that is encountered very frequently in logistics planning and supply chain management. The basic idea is that there is a network, just like in the maximum flow problem, and there are some nodes where resources are produced, while there are other nodes where resources are consumed. The goal is to transport the resources from the supply nodes to the demand nodes at the minimum cost.The problem is closely related to the maximum flow problem, but there are some key differences. We again model the network using a **directed** graph $G(V,E)$. An edge (arc) $e := (u,v), \; e \in E$, denotes a directed edge starting at the vertex $u \in V$ and ending at the vertex $v \in V$, and as before has a capacity which is a positive integer (due to the ```ortools``` requirement). In addition, there are special vertices in the graph called **supply** and **demand** nodes, where resources (flow) are either created or consumed respectively. In fact we will model all vertices as **supply** nodes, with the convention that a node is a supply node if and only if it has a positive integral supply of resources, it is a demand node if and only if it has a negative integral supply of resources (i.e. positive integral demand for resources), and a normal vertex if and only if it has exactly zero supply of resources. The supplies must be integers. Another difference as compared to the maximum flow problem is that there is also a unit cost (again a non-negative integer) associated with transporting resources across each arc, and so if the flow value through an arc is $f$, and the unit cost for the arc is $c$, then the total cost incurred for that arc is $cf$.A **valid flow** is an assignment of non-negative integers to the directed edges that satisfies the following constraints:- For every edge, the assigned flow does not exceed the capacity of the edge.- At every vertex that is not a supply or demand node, the net flow of the incident edges, i.e. the sum of flows of outgoing edges minus the sum of flows of incoming edges, must be zero.- At a supply node, the net flow of the incident edges should equal the supply.- At a demand node, the net flow of the incident edges should equal the negative of the demand.It should be clear from the description above that the only way this could possibly work is if the supply at the **supply** nodes equals the demand at the **demand** nodes, i.e. in our language above the total sum of the supplies at all the vertices must be exactly zero!The goal of the minimum cost flow problem is then to design a valid flow which achieves the minimum cost. We demonstrate this using the specific example described in the documentation page.The data for the problem is given by the list of tuples: ```(start_node, end_node, capacity, unit_cost)```. The first two entities in each tuple denote the start and end vertices respectively of a directed edge of a graph, the third entity denotes its capacity, and the last element of the tuple denotes the cost of unit flow through the edge. The supplies for each node of the graph are also part of the input.
###Code
# Data for the problem
data = [
(0, 1, 15, 4),
(0, 2, 8, 4),
(1, 2, 20, 2),
(1, 3, 4, 2),
(1, 4, 10, 6),
(2, 3, 15, 1),
(2, 4, 4, 3),
(3, 4, 20, 2),
(4, 2, 5, 3)
]
# Define an array of supplies at each node
supplies = [20, 0, 0, -5, -15]
###Output
_____no_output_____
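###Markdown
 A quick sanity check (not from the original text) of the balance condition discussed above: the node supplies must sum to exactly zero.
###Code
# Illustrative sanity check: total supply must balance total demand,
# i.e. the node supplies must sum to exactly zero.
assert sum(supplies) == 0, "total supply must equal total demand"
###Output
_____no_output_____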
###Markdown
```ortools``` provides the method ```pywrapgraph.SimpleMinCostFlow``` to solve this problem. The following Python code illustrates how to use it.
###Code
# Create lists for start, end, and capacities
start_nodes = []
end_nodes = []
capacities = []
unit_costs = []
for item in data:
start_nodes.append(item[0])
end_nodes.append(item[1])
capacities.append(item[2])
unit_costs.append(item[3])
# Instantiate a SimpleMinCostFlow solver
min_cost_flow = pywrapgraph.SimpleMinCostFlow()
# Add each arc.
for i in range(0, len(start_nodes)):
min_cost_flow.AddArcWithCapacityAndUnitCost(start_nodes[i], end_nodes[i], capacities[i], unit_costs[i])
# Add node supplies.
for i in range(0, len(supplies)):
min_cost_flow.SetNodeSupply(i, supplies[i])
# Solve the minimum cost flow problem and check for optimality
status = min_cost_flow.Solve()
assert status == min_cost_flow.OPTIMAL
# Display results
print('Minimum cost:', min_cost_flow.OptimalCost())
print('')
print(' Arc Flow / Capacity Cost')
for i in range(min_cost_flow.NumArcs()):
cost = min_cost_flow.Flow(i) * min_cost_flow.UnitCost(i)
print(
'%1s -> %1s %3s / %3s %3s' % (
min_cost_flow.Tail(i),
min_cost_flow.Head(i),
min_cost_flow.Flow(i),
min_cost_flow.Capacity(i),
cost
)
)
###Output
Minimum cost: 150
Arc Flow / Capacity Cost
0 -> 1 12 / 15 48
0 -> 2 8 / 8 32
1 -> 2 8 / 20 16
1 -> 3 4 / 4 8
1 -> 4 0 / 10 0
2 -> 3 12 / 15 12
2 -> 4 4 / 4 12
3 -> 4 11 / 20 22
4 -> 2 0 / 5 0
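###Markdown
 As a final sanity check (not part of the original text), the flow returned by ```min_cost_flow``` can be verified against the node supplies from the data cell above:
###Code
# Illustrative check: at every node, net outflow (outgoing minus incoming flow)
# equals that node's supply (positive, negative, or zero).
for node in range(len(supplies)):
    outflow = sum(min_cost_flow.Flow(i) for i in range(min_cost_flow.NumArcs()) if min_cost_flow.Tail(i) == node)
    inflow = sum(min_cost_flow.Flow(i) for i in range(min_cost_flow.NumArcs()) if min_cost_flow.Head(i) == node)
    assert outflow - inflow == supplies[node]
print("All node supplies and demands are satisfied.")
###Output
_____no_output_____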
3d_segmentation/unet_segmentation_3d_catalyst.ipynb | ###Markdown
 3D segmentation with [MONAI](https://github.com/Project-MONAI/MONAI) and [Catalyst](https://github.com/catalyst-team/catalyst)This tutorial demonstrates how [MONAI](https://github.com/Project-MONAI/MONAI) can be used with the [Catalyst](https://github.com/catalyst-team/catalyst) framework for a 3D segmentation task, and how to easily use the following features:* Prepare synthetic data.* Load Nifti image with metadata.* Transforms for dictionary format data.* Add channel dim to the data if no channel dimension.* Scale medical image intensity with expected range.* Crop out a batch of balanced images based on positive / negative label ratio.* 3D UNet model, Dice loss function, Mean Dice metric for a 3D segmentation task.* Sliding window inference method.* Deterministic training for reproducibility.This tutorial is based on [unet_training_dict.py](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/torch/unet_training_dict.py) and [spleen_segmentation_3d.ipynb](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d.ipynb).[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/3d_segmentation/unet_segmentation_3d_catalyst.ipynb) Setup environment
###Code
%pip install -q "monai[nibabel, tensorboard]"
%pip install -q matplotlib
%matplotlib inline
%pip install -q catalyst==20.07
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import logging
import os
import shutil
import sys
import tempfile
import catalyst.dl
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
import torch
from monai.config import print_config
from monai.data import Dataset, create_test_image_3d, list_data_collate
from monai.inferers import sliding_window_inference
from monai.losses import DiceLoss
from monai.metrics import DiceMetric
from monai.networks.nets import UNet
from monai.transforms import (
Activations,
AsChannelFirstd,
AsDiscrete,
Compose,
LoadImaged,
RandCropByPosNegLabeld,
RandRotate90d,
ScaleIntensityd,
ToTensord,
)
from monai.utils import first
print_config()
###Output
MONAI version: 0.4.0
Numpy version: 1.19.1
Pytorch version: 1.7.0a0+7036e91
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 0563a4467fa602feca92d91c7f47261868d171a1
Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.2.1
scikit-image version: 0.15.0
Pillow version: 8.0.1
Tensorboard version: 2.2.0
gdown version: 3.12.2
TorchVision version: 0.8.0a0
ITK version: 5.1.2
tqdm version: 4.54.1
lmdb version: 1.0.0
psutil version: 5.7.2
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Setup data directoryYou can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
/workspace/data/medical
###Markdown
Setup logging
###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown
[MONAI](https://github.com/Project-MONAI/MONAI) components Prepare synthetic data
###Code
for i in range(40):
im, seg = create_test_image_3d(128, 128, 128, num_seg_classes=1, channel_dim=-1)
n = nib.Nifti1Image(im, np.eye(4))
nib.save(n, os.path.join(root_dir, f"img{i}.nii.gz"))
n = nib.Nifti1Image(seg, np.eye(4))
nib.save(n, os.path.join(root_dir, f"seg{i}.nii.gz"))
images = sorted(glob.glob(os.path.join(root_dir, "img*.nii.gz")))
segs = sorted(glob.glob(os.path.join(root_dir, "seg*.nii.gz")))
###Output
_____no_output_____
###Markdown
Prepare transforms and datasets
###Code
train_files = [{"img": img, "seg": seg} for img, seg in zip(images[:20], segs[:20])]
val_files = [{"img": img, "seg": seg} for img, seg in zip(images[-20:], segs[-20:])]
# define transforms for image and segmentation
train_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
RandCropByPosNegLabeld(
keys=["img", "seg"],
label_key="seg",
spatial_size=[96, 96, 96],
pos=1,
neg=1,
num_samples=4,
),
RandRotate90d(keys=["img", "seg"], prob=0.5, spatial_axes=[0, 2]),
ToTensord(keys=["img", "seg"]),
]
)
val_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
ToTensord(keys=["img", "seg"]),
]
)
# define dataset, data loader
check_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
check_loader = torch.utils.data.DataLoader(
check_ds, batch_size=2, num_workers=4, collate_fn=list_data_collate
)
check_data = first(check_loader)
print(check_data["img"].shape, check_data["seg"].shape)
# create a training data loader
train_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=2,
shuffle=True,
num_workers=4,
collate_fn=list_data_collate,
pin_memory=torch.cuda.is_available(),
)
# create a validation data loader
val_ds = Dataset(data=val_files, transform=val_transforms)
val_loader = torch.utils.data.DataLoader(
val_ds, batch_size=1, num_workers=4, collate_fn=list_data_collate
)
###Output
_____no_output_____
###Markdown
Prepare model, optimizer and metrics
###Code
# create UNet, DiceLoss and Adam optimizer
# device = torch.device("cuda:0") # you don't need device, because Catalyst uses autoscaling
model = UNet(
dimensions=3,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
)
loss_function = DiceLoss(sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-3)
dice_metric = DiceMetric(include_background=True, reduction="mean")
post_trans = Compose([Activations(sigmoid=True), AsDiscrete(threshold_values=True)])
###Output
_____no_output_____
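###Markdown
 As a small aside (not part of the original tutorial), the ```post_trans``` pipeline defined above simply converts raw logits into binary masks by applying a sigmoid and thresholding at 0.5; a tiny illustrative check on a made-up tensor:
###Code
# Illustrative sketch: apply post_trans to a toy logits tensor.
# sigmoid(-2.0) ~= 0.12 -> 0, sigmoid(0.1) ~= 0.52 -> 1, sigmoid(3.0) ~= 0.95 -> 1
example_logits = torch.tensor([[-2.0, 0.1, 3.0]])
print(post_trans(example_logits))
###Output
_____no_output_____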
###Markdown
[Catalyst](https://github.com/catalyst-team/catalyst) experiment Setup Runner
###Code
class MonaiSupervisedRunner(catalyst.dl.SupervisedRunner):
def forward(self, batch):
if self.is_train_loader:
output = {self.output_key: self.model(batch[self.input_key])}
elif self.is_valid_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
elif self.is_infer_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
batch = self._batch2device(batch, self.device)
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
output = {**output, **batch}
return output
###Output
_____no_output_____
###Markdown
Run experiment
###Code
log_dir = os.path.join(root_dir, "logs")
runner = MonaiSupervisedRunner(
input_key="img", input_target_key="seg", output_key="logits"
) # you can also specify `device` here
runner.train(
loaders={"train": train_loader, "valid": val_loader},
model=model,
criterion=loss_function,
optimizer=optimizer,
num_epochs=600,
logdir=log_dir,
main_metric="dice_metric",
minimize_metric=False,
verbose=False,
timeit=True, # let's use minimal logs, but with time checkers
callbacks={
"loss": catalyst.dl.CriterionCallback(input_key="seg", output_key="logits"),
"periodic_valid": catalyst.dl.PeriodicLoaderCallback(valid=2),
"dice_metric": catalyst.dl.MetricCallback(
prefix="dice_metric", metric_fn=lambda y_pred, y: dice_metric(post_trans(y_pred), y)[0], input_key="seg", output_key="logits"
),
},
load_best_on_end=True, # user-friendly API :)
)
###Output
_____no_output_____
###Markdown
Tensorboard logs
###Code
%load_ext tensorboard
%tensorboard --logdir=$log_dir
###Output
_____no_output_____
###Markdown
Best model performance visualisation
###Code
for i, valid_output in enumerate(runner.predict_loader(loader=val_loader)):
if i > 4:
break
plt.figure("check", (9, 3))
plt.subplot(1, 3, 1)
plt.title("image " + str(i))
plt.imshow(valid_output["img"].detach().cpu()[0, 0, :, :, 48], cmap="gray")
plt.subplot(1, 3, 2)
plt.title("label " + str(i))
plt.imshow(valid_output["seg"].detach().cpu()[0, 0, :, :, 48])
plt.subplot(1, 3, 3)
plt.title("output " + str(i))
logits = valid_output["logits"]
plt.imshow((logits[0] > 0.5).float().detach().cpu()[0, :, :, 48])
plt.show()
###Output
_____no_output_____
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
 3D segmentation with [MONAI](https://github.com/Project-MONAI/MONAI) and [Catalyst](https://github.com/catalyst-team/catalyst)This tutorial demonstrates how [MONAI](https://github.com/Project-MONAI/MONAI) can be used with the [Catalyst](https://github.com/catalyst-team/catalyst) framework for a 3D segmentation task, and how to easily use the following features:* Prepare synthetic data.* Load Nifti image with metadata.* Transforms for dictionary format data.* Add channel dim to the data if no channel dimension.* Scale medical image intensity with expected range.* Crop out a batch of balanced images based on positive / negative label ratio.* 3D UNet model, Dice loss function, Mean Dice metric for a 3D segmentation task.* Sliding window inference method.* Deterministic training for reproducibility.This tutorial is based on [unet_training_dict.py](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/torch/unet_training_dict.py) and [spleen_segmentation_3d.ipynb](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d.ipynb).[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/3d_segmentation/unet_segmentation_3d_catalyst.ipynb) Setup environment
###Code
%pip install -q "monai[nibabel, tensorboard]"
%pip install -q matplotlib
%matplotlib inline
%pip install -q catalyst==20.07
###Output
_____no_output_____
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import logging
import os
import shutil
import sys
import tempfile
import catalyst.dl
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
import torch
from monai.config import print_config
from monai.data import Dataset, create_test_image_3d, list_data_collate
from monai.inferers import sliding_window_inference
from monai.losses import DiceLoss
from monai.metrics import DiceMetric
from monai.networks.nets import UNet
from monai.transforms import (
Activations,
AsChannelFirstd,
AsDiscrete,
Compose,
LoadNiftid,
RandCropByPosNegLabeld,
RandRotate90d,
ScaleIntensityd,
ToTensord,
)
from monai.utils import first
print_config()
###Output
MONAI version: 0.2.0
Python version: 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) [GCC 7.3.0]
Numpy version: 1.18.1
Pytorch version: 1.6.0
Optional dependencies:
Pytorch Ignite version: 0.3.0
Nibabel version: 3.1.1
scikit-image version: 0.15.0
Pillow version: 7.2.0
Tensorboard version: 2.1.0
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Setup data directoryYou can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
/workspace/data/medical
###Markdown
Setup logging
###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown
[MONAI](https://github.com/Project-MONAI/MONAI) components Prepare synthetic data
###Code
for i in range(40):
im, seg = create_test_image_3d(128, 128, 128, num_seg_classes=1, channel_dim=-1)
n = nib.Nifti1Image(im, np.eye(4))
nib.save(n, os.path.join(root_dir, f"img{i}.nii.gz"))
n = nib.Nifti1Image(seg, np.eye(4))
nib.save(n, os.path.join(root_dir, f"seg{i}.nii.gz"))
images = sorted(glob.glob(os.path.join(root_dir, "img*.nii.gz")))
segs = sorted(glob.glob(os.path.join(root_dir, "seg*.nii.gz")))
###Output
_____no_output_____
###Markdown
Prepare transforms and datasets
###Code
train_files = [{"img": img, "seg": seg} for img, seg in zip(images[:20], segs[:20])]
val_files = [{"img": img, "seg": seg} for img, seg in zip(images[-20:], segs[-20:])]
# define transforms for image and segmentation
train_transforms = Compose(
[
LoadNiftid(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
RandCropByPosNegLabeld(
keys=["img", "seg"],
label_key="seg",
spatial_size=[96, 96, 96],
pos=1,
neg=1,
num_samples=4,
),
RandRotate90d(keys=["img", "seg"], prob=0.5, spatial_axes=[0, 2]),
ToTensord(keys=["img", "seg"]),
]
)
val_transforms = Compose(
[
LoadNiftid(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
ToTensord(keys=["img", "seg"]),
]
)
# define dataset, data loader
check_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
check_loader = torch.utils.data.DataLoader(
check_ds, batch_size=2, num_workers=4, collate_fn=list_data_collate
)
check_data = first(check_loader)
print(check_data["img"].shape, check_data["seg"].shape)
# create a training data loader
train_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=2,
shuffle=True,
num_workers=4,
collate_fn=list_data_collate,
pin_memory=torch.cuda.is_available(),
)
# create a validation data loader
val_ds = Dataset(data=val_files, transform=val_transforms)
val_loader = torch.utils.data.DataLoader(
val_ds, batch_size=1, num_workers=4, collate_fn=list_data_collate
)
###Output
_____no_output_____
###Markdown
Prepare model, optimizer and metrics
###Code
# create UNet, DiceLoss and Adam optimizer
# device = torch.device("cuda:0") # you don't need device, because Catalyst uses autoscaling
model = UNet(
dimensions=3,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
)
loss_function = DiceLoss(sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-3)
dice_metric = DiceMetric(include_background=True, reduction="mean")
post_trans = Compose([Activations(sigmoid=True), AsDiscrete(threshold_values=True)])
###Output
_____no_output_____
###Markdown
[Catalyst](https://github.com/catalyst-team/catalyst) experiment Setup Runner
###Code
class MonaiSupervisedRunner(catalyst.dl.SupervisedRunner):
def forward(self, batch):
if self.is_train_loader:
output = {self.output_key: self.model(batch[self.input_key])}
elif self.is_valid_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
elif self.is_infer_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
batch = self._batch2device(batch, self.device)
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
output = {**output, **batch}
return output
###Output
_____no_output_____
###Markdown
Run experiment
###Code
log_dir = os.path.join(root_dir, "logs")
runner = MonaiSupervisedRunner(
input_key="img", input_target_key="seg", output_key="logits"
) # you can also specify `device` here
runner.train(
loaders={"train": train_loader, "valid": val_loader},
model=model,
criterion=loss_function,
optimizer=optimizer,
num_epochs=600,
logdir=log_dir,
main_metric="dice_metric",
minimize_metric=False,
verbose=False,
timeit=True, # let's use minimal logs, but with time checkers
callbacks={
"loss": catalyst.dl.CriterionCallback(input_key="seg", output_key="logits"),
"periodic_valid": catalyst.dl.PeriodicLoaderCallback(valid=2),
"dice_metric": catalyst.dl.MetricCallback(
prefix="dice_metric", metric_fn=lambda y_pred, y: dice_metric(post_trans(y_pred), y)[0], input_key="seg", output_key="logits"
),
},
load_best_on_end=True, # user-friendly API :)
)
###Output
_____no_output_____
###Markdown
Tensorboard logs
###Code
%load_ext tensorboard
%tensorboard --logdir=$log_dir
###Output
_____no_output_____
###Markdown
Best model performance visualisation
###Code
for i, valid_output in enumerate(runner.predict_loader(loader=val_loader)):
if i > 4:
break
plt.figure("check", (9, 3))
plt.subplot(1, 3, 1)
plt.title("image " + str(i))
plt.imshow(valid_output["img"].detach().cpu()[0, 0, :, :, 48], cmap="gray")
plt.subplot(1, 3, 2)
plt.title("label " + str(i))
plt.imshow(valid_output["seg"].detach().cpu()[0, 0, :, :, 48])
plt.subplot(1, 3, 3)
plt.title("output " + str(i))
logits = valid_output["logits"]
plt.imshow((logits[0] > 0.5).float().detach().cpu()[0, :, :, 48])
plt.show()
###Output
_____no_output_____
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
 3D segmentation with [MONAI](https://github.com/Project-MONAI/MONAI) and [Catalyst](https://github.com/catalyst-team/catalyst)This tutorial demonstrates how [MONAI](https://github.com/Project-MONAI/MONAI) can be used with the [Catalyst](https://github.com/catalyst-team/catalyst) framework for a 3D segmentation task, and how to easily use the following features:* Prepare synthetic data.* Load Nifti image with metadata.* Transforms for dictionary format data.* Add channel dim to the data if no channel dimension.* Scale medical image intensity with expected range.* Crop out a batch of balanced images based on positive / negative label ratio.* 3D UNet model, Dice loss function, Mean Dice metric for a 3D segmentation task.* Sliding window inference method.* Deterministic training for reproducibility.This tutorial is based on [unet_training_dict.py](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/torch/unet_training_dict.py) and [spleen_segmentation_3d.ipynb](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d.ipynb).[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/3d_segmentation/unet_segmentation_3d_catalyst.ipynb) Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[nibabel, tensorboard]"
!python -c "import matplotlib" || pip install -q matplotlib
!python -c "import catalyst" || pip install -q catalyst==20.07
%matplotlib inline
###Output
_____no_output_____
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import logging
import os
import shutil
import sys
import tempfile
import catalyst.dl
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
from monai.config import print_config
from monai.data import Dataset, create_test_image_3d, list_data_collate, decollate_batch
from monai.inferers import sliding_window_inference
from monai.losses import DiceLoss
from monai.metrics import DiceMetric
from monai.networks.nets import UNet
from monai.transforms import (
Activations,
AsChannelFirstd,
AsDiscrete,
Compose,
LoadImaged,
RandCropByPosNegLabeld,
RandRotate90d,
ScaleIntensityd,
EnsureTyped,
EnsureType,
)
from monai.utils import first
import torch
print_config()
###Output
/opt/conda/lib/python3.8/site-packages/tqdm/std.py:725: FutureWarning: The Panel class is removed from pandas. Accessing it from the top-level namespace will also be removed in the next version
from pandas import Panel
###Markdown
Setup data directory
You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
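# Optional, shown here as an illustrative assumption rather than part of the tutorial:
# to persist the synthetic data and logs across runs, set the variable before this cell
# runs, e.g. os.environ["MONAI_DATA_DIRECTORY"] = "/path/to/data", or export
# MONAI_DATA_DIRECTORY in the shell that launches the notebook.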
###Output
/workspace/data/medical
###Markdown
Setup logging
###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown
[MONAI](https://github.com/Project-MONAI/MONAI) components Prepare synthetic data
###Code
for i in range(40):
im, seg = create_test_image_3d(
128, 128, 128, num_seg_classes=1, channel_dim=-1
)
n = nib.Nifti1Image(im, np.eye(4))
nib.save(n, os.path.join(root_dir, f"img{i}.nii.gz"))
n = nib.Nifti1Image(seg, np.eye(4))
nib.save(n, os.path.join(root_dir, f"seg{i}.nii.gz"))
images = sorted(glob.glob(os.path.join(root_dir, "img*.nii.gz")))
segs = sorted(glob.glob(os.path.join(root_dir, "seg*.nii.gz")))
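# Added sketch, not part of the original tutorial: a quick sanity check of one sample.
# With channel_dim=-1, each saved volume should be 128 x 128 x 128 with a trailing
# channel dimension of size 1.
sample_img = nib.load(images[0]).get_fdata()
sample_seg = nib.load(segs[0]).get_fdata()
print("sample image shape:", sample_img.shape, "sample label shape:", sample_seg.shape)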
###Output
_____no_output_____
###Markdown
Prepare transforms and datasets
###Code
train_files = [
{"img": img, "seg": seg} for img, seg in zip(images[:20], segs[:20])
]
val_files = [
{"img": img, "seg": seg} for img, seg in zip(images[-20:], segs[-20:])
]
# define transforms for image and segmentation
train_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
RandCropByPosNegLabeld(
keys=["img", "seg"],
label_key="seg",
spatial_size=[96, 96, 96],
pos=1,
neg=1,
num_samples=4,
),
RandRotate90d(keys=["img", "seg"], prob=0.5, spatial_axes=[0, 2]),
EnsureTyped(keys=["img", "seg"]),
]
)
val_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
EnsureTyped(keys=["img", "seg"]),
]
)
# define dataset, data loader
check_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
check_loader = torch.utils.data.DataLoader(
check_ds, batch_size=2, num_workers=4, collate_fn=list_data_collate
)
check_data = first(check_loader)
print(check_data["img"].shape, check_data["seg"].shape)
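# Note added for clarity (not original output): with batch_size=2 and num_samples=4,
# list_data_collate flattens the random crops, so both printed shapes are expected to
# be torch.Size([8, 1, 96, 96, 96]).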
# create a training data loader
train_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=2,
shuffle=True,
num_workers=4,
collate_fn=list_data_collate,
pin_memory=torch.cuda.is_available(),
)
# create a validation data loader
val_ds = Dataset(data=val_files, transform=val_transforms)
val_loader = torch.utils.data.DataLoader(
val_ds, batch_size=1, num_workers=4, collate_fn=list_data_collate
)
###Output
_____no_output_____
###Markdown
Prepare model, optimizer and metrics
###Code
# create UNet, DiceLoss and Adam optimizer
# device = torch.device("cuda:0") # you don't need device, because Catalyst uses autoscaling
model = UNet(
spatial_dims=3,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
)
loss_function = DiceLoss(sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-3)
dice_metric = DiceMetric(include_background=True, reduction="mean")
post_trans = Compose(
[EnsureType(), Activations(sigmoid=True), AsDiscrete(threshold=0.5)]
)
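# Minimal sketch added to this cell (not in the original tutorial): post_trans maps raw
# logits to a binary mask via sigmoid followed by thresholding at 0.5, so its output
# should contain only zeros and ones.
_example_logits = torch.randn(1, 96, 96, 96)
print(torch.unique(post_trans(_example_logits)))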
###Output
_____no_output_____
###Markdown
[Catalyst](https://github.com/catalyst-team/catalyst) experiment Setup Runner
###Code
class MonaiSupervisedRunner(catalyst.dl.SupervisedRunner):
def forward(self, batch):
if self.is_train_loader:
output = {self.output_key: self.model(batch[self.input_key])}
elif self.is_valid_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
elif self.is_infer_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
batch = self._batch2device(batch, self.device)
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
output = {**output, **batch}
return output
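# Explanatory note added to this cell: during validation and inference the model sees
# full 128^3 volumes, so sliding_window_inference tiles each volume into (96, 96, 96)
# patches, runs the network on up to sw_batch_size patches at a time, and stitches the
# patch predictions back to the original spatial size.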
###Output
_____no_output_____
###Markdown
Run experiment
###Code
# define metric function to match MONAI API
def get_metric(y_pred, y):
y_pred = [post_trans(i) for i in decollate_batch(y_pred)]
dice_metric(y_pred=y_pred, y=y)
metric = dice_metric.aggregate().item()
dice_metric.reset()
return metric
max_epochs = 50
val_interval = 2
log_dir = os.path.join(root_dir, "logs")
runner = MonaiSupervisedRunner(
input_key="img", input_target_key="seg", output_key="logits"
) # you can also specify `device` here
runner.train(
loaders={"train": train_loader, "valid": val_loader},
model=model,
criterion=loss_function,
optimizer=optimizer,
num_epochs=max_epochs,
logdir=log_dir,
main_metric="dice_metric",
minimize_metric=False,
verbose=False,
timeit=True, # let's use minimal logs, but with time checkers
callbacks={
"loss": catalyst.dl.CriterionCallback(
input_key="seg", output_key="logits"
),
"periodic_valid": catalyst.dl.PeriodicLoaderCallback(
valid=val_interval
),
"dice_metric": catalyst.dl.MetricCallback(
prefix="dice_metric",
metric_fn=lambda y_pred, y: get_metric(y_pred, y),
input_key="seg",
output_key="logits",
),
},
load_best_on_end=True, # user-friendly API :)
)
###Output
_____no_output_____
###Markdown
Tensorboard logs
###Code
%load_ext tensorboard
%tensorboard --logdir=$log_dir
###Output
_____no_output_____
###Markdown
Best model performance visualisation
###Code
for i, valid_output in enumerate(runner.predict_loader(loader=val_loader)):
if i > 4:
break
plt.figure("check", (9, 3))
plt.subplot(1, 3, 1)
plt.title("image " + str(i))
plt.imshow(valid_output["img"].detach().cpu()[0, 0, :, :, 48], cmap="gray")
plt.subplot(1, 3, 2)
plt.title("label " + str(i))
plt.imshow(valid_output["seg"].detach().cpu()[0, 0, :, :, 48])
plt.subplot(1, 3, 3)
plt.title("output " + str(i))
logits = valid_output["logits"]
plt.imshow((logits[0] > 0.5).float().detach().cpu()[0, :, :, 48])
plt.show()
###Output
_____no_output_____
###Markdown
Cleanup data directory
Remove the directory if a temporary one was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
3D segmentation with [MONAI](https://github.com/Project-MONAI/MONAI) and [Catalyst](https://github.com/catalyst-team/catalyst)
This tutorial demonstrates how [MONAI](https://github.com/Project-MONAI/MONAI) can be used with the [Catalyst](https://github.com/catalyst-team/catalyst) framework for a 3D segmentation task, and how to easily use the following features:
* Prepare synthetic data.
* Load Nifti images with metadata.
* Transforms for dictionary-format data.
* Add a channel dimension to the data if there is none.
* Scale medical image intensity to an expected range.
* Crop out a batch of balanced images based on the positive / negative label ratio.
* 3D UNet model, Dice loss function, and Mean Dice metric for the 3D segmentation task.
* Sliding window inference method.
* Deterministic training for reproducibility.
This tutorial is based on [unet_training_dict.py](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/torch/unet_training_dict.py) and [spleen_segmentation_3d.ipynb](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d.ipynb).
[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/3d_segmentation/unet_segmentation_3d_catalyst.ipynb)
Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[nibabel, tensorboard]"
!python -c "import matplotlib" || pip install -q matplotlib
!python -c "import catalyst" || pip install -q catalyst==20.07
%matplotlib inline
###Output
_____no_output_____
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import logging
import os
import shutil
import sys
import tempfile
import catalyst.dl
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
from monai.config import print_config
from monai.data import Dataset, create_test_image_3d, list_data_collate, decollate_batch
from monai.inferers import sliding_window_inference
from monai.losses import DiceLoss
from monai.metrics import DiceMetric
from monai.networks.nets import UNet
from monai.transforms import (
Activations,
AsChannelFirstd,
AsDiscrete,
Compose,
LoadImaged,
RandCropByPosNegLabeld,
RandRotate90d,
ScaleIntensityd,
EnsureTyped,
EnsureType,
)
from monai.utils import first
import torch
print_config()
###Output
/opt/conda/lib/python3.8/site-packages/tqdm/std.py:725: FutureWarning: The Panel class is removed from pandas. Accessing it from the top-level namespace will also be removed in the next version
from pandas import Panel
###Markdown
Setup data directory
You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
/workspace/data/medical
###Markdown
Setup logging
###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown
[MONAI](https://github.com/Project-MONAI/MONAI) components Prepare synthetic data
###Code
for i in range(40):
im, seg = create_test_image_3d(
128, 128, 128, num_seg_classes=1, channel_dim=-1
)
n = nib.Nifti1Image(im, np.eye(4))
nib.save(n, os.path.join(root_dir, f"img{i}.nii.gz"))
n = nib.Nifti1Image(seg, np.eye(4))
nib.save(n, os.path.join(root_dir, f"seg{i}.nii.gz"))
images = sorted(glob.glob(os.path.join(root_dir, "img*.nii.gz")))
segs = sorted(glob.glob(os.path.join(root_dir, "seg*.nii.gz")))
###Output
_____no_output_____
###Markdown
Prepare transforms and datasets
###Code
train_files = [
{"img": img, "seg": seg} for img, seg in zip(images[:20], segs[:20])
]
val_files = [
{"img": img, "seg": seg} for img, seg in zip(images[-20:], segs[-20:])
]
# define transforms for image and segmentation
train_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
RandCropByPosNegLabeld(
keys=["img", "seg"],
label_key="seg",
spatial_size=[96, 96, 96],
pos=1,
neg=1,
num_samples=4,
),
RandRotate90d(keys=["img", "seg"], prob=0.5, spatial_axes=[0, 2]),
EnsureTyped(keys=["img", "seg"]),
]
)
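# Note added for clarity: RandCropByPosNegLabeld draws num_samples=4 random 96^3 crops
# per volume; pos=1, neg=1 gives an equal chance of centring each crop on a foreground
# or a background voxel of "seg".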
val_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
EnsureTyped(keys=["img", "seg"]),
]
)
# define dataset, data loader
check_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
check_loader = torch.utils.data.DataLoader(
check_ds, batch_size=2, num_workers=4, collate_fn=list_data_collate
)
check_data = first(check_loader)
print(check_data["img"].shape, check_data["seg"].shape)
# create a training data loader
train_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=2,
shuffle=True,
num_workers=4,
collate_fn=list_data_collate,
pin_memory=torch.cuda.is_available(),
)
# create a validation data loader
val_ds = Dataset(data=val_files, transform=val_transforms)
val_loader = torch.utils.data.DataLoader(
val_ds, batch_size=1, num_workers=4, collate_fn=list_data_collate
)
###Output
_____no_output_____
###Markdown
Prepare model, optimizer and metrics
###Code
# create UNet, DiceLoss and Adam optimizer
# device = torch.device("cuda:0") # you don't need device, because Catalyst uses autoscaling
model = UNet(
dimensions=3,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
)
loss_function = DiceLoss(sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-3)
dice_metric = DiceMetric(include_background=True, reduction="mean")
post_trans = Compose(
[EnsureType(), Activations(sigmoid=True), AsDiscrete(threshold_values=True)]
)
###Output
_____no_output_____
###Markdown
[Catalyst](https://github.com/catalyst-team/catalyst) experiment Setup Runner
###Code
class MonaiSupervisedRunner(catalyst.dl.SupervisedRunner):
def forward(self, batch):
if self.is_train_loader:
output = {self.output_key: self.model(batch[self.input_key])}
elif self.is_valid_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
elif self.is_infer_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
batch = self._batch2device(batch, self.device)
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
output = {**output, **batch}
return output
###Output
_____no_output_____
###Markdown
Run experiment
###Code
# define metric function to match MONAI API
def get_metric(y_pred, y):
y_pred = [post_trans(i) for i in decollate_batch(y_pred)]
dice_metric(y_pred=y_pred, y=y)
metric = dice_metric.aggregate().item()
dice_metric.reset()
return metric
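# Note added for clarity: decollate_batch splits the batched prediction into a list of
# per-sample tensors so post_trans can be applied item by item; aggregate()/reset()
# make each call return a fresh mean Dice over the current predictions only.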
max_epochs = 600
val_interval = 2
log_dir = os.path.join(root_dir, "logs")
runner = MonaiSupervisedRunner(
input_key="img", input_target_key="seg", output_key="logits"
) # you can also specify `device` here
runner.train(
loaders={"train": train_loader, "valid": val_loader},
model=model,
criterion=loss_function,
optimizer=optimizer,
num_epochs=max_epochs,
logdir=log_dir,
main_metric="dice_metric",
minimize_metric=False,
verbose=False,
timeit=True, # let's use minimal logs, but with time checkers
callbacks={
"loss": catalyst.dl.CriterionCallback(
input_key="seg", output_key="logits"
),
"periodic_valid": catalyst.dl.PeriodicLoaderCallback(
valid=val_interval
),
"dice_metric": catalyst.dl.MetricCallback(
prefix="dice_metric",
metric_fn=lambda y_pred, y: get_metric(y_pred, y),
input_key="seg",
output_key="logits",
),
},
load_best_on_end=True, # user-friendly API :)
)
###Output
_____no_output_____
###Markdown
Tensorboard logs
###Code
%load_ext tensorboard
%tensorboard --logdir=$log_dir
###Output
_____no_output_____
###Markdown
Best model performance visualisation
###Code
for i, valid_output in enumerate(runner.predict_loader(loader=val_loader)):
if i > 4:
break
plt.figure("check", (9, 3))
plt.subplot(1, 3, 1)
plt.title("image " + str(i))
plt.imshow(valid_output["img"].detach().cpu()[0, 0, :, :, 48], cmap="gray")
plt.subplot(1, 3, 2)
plt.title("label " + str(i))
plt.imshow(valid_output["seg"].detach().cpu()[0, 0, :, :, 48])
plt.subplot(1, 3, 3)
plt.title("output " + str(i))
logits = valid_output["logits"]
plt.imshow((logits[0] > 0.5).float().detach().cpu()[0, :, :, 48])
plt.show()
###Output
_____no_output_____
###Markdown
Cleanup data directory
Remove the directory if a temporary one was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
3D segmentation with [MONAI](https://github.com/Project-MONAI/MONAI) and [Catalyst](https://github.com/catalyst-team/catalyst)
This tutorial demonstrates how [MONAI](https://github.com/Project-MONAI/MONAI) can be used with the [Catalyst](https://github.com/catalyst-team/catalyst) framework for a 3D segmentation task, and how to easily use the following features:
* Prepare synthetic data.
* Load Nifti images with metadata.
* Transforms for dictionary-format data.
* Add a channel dimension to the data if there is none.
* Scale medical image intensity to an expected range.
* Crop out a batch of balanced images based on the positive / negative label ratio.
* 3D UNet model, Dice loss function, and Mean Dice metric for the 3D segmentation task.
* Sliding window inference method.
* Deterministic training for reproducibility.
This tutorial is based on [unet_training_dict.py](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/torch/unet_training_dict.py) and [spleen_segmentation_3d.ipynb](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d.ipynb).
[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/3d_segmentation/unet_segmentation_3d_catalyst.ipynb)
Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[nibabel, tensorboard]"
!python -c "import matplotlib" || pip install -q matplotlib
!python -c "import catalyst" || pip install -q catalyst==20.07
%matplotlib inline
###Output
_____no_output_____
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import logging
import os
import shutil
import sys
import tempfile
import catalyst.dl
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
from monai.config import print_config
from monai.data import Dataset, create_test_image_3d, list_data_collate, decollate_batch
from monai.inferers import sliding_window_inference
from monai.losses import DiceLoss
from monai.metrics import DiceMetric
from monai.networks.nets import UNet
from monai.transforms import (
Activations,
AsChannelFirstd,
AsDiscrete,
Compose,
LoadImaged,
RandCropByPosNegLabeld,
RandRotate90d,
ScaleIntensityd,
EnsureTyped,
EnsureType,
)
from monai.utils import first
import torch
print_config()
###Output
/opt/conda/lib/python3.8/site-packages/tqdm/std.py:725: FutureWarning: The Panel class is removed from pandas. Accessing it from the top-level namespace will also be removed in the next version
from pandas import Panel
###Markdown
Setup data directory
You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
/workspace/data/medical
###Markdown
Setup logging
###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown
[MONAI](https://github.com/Project-MONAI/MONAI) components Prepare synthetic data
###Code
for i in range(40):
im, seg = create_test_image_3d(
128, 128, 128, num_seg_classes=1, channel_dim=-1
)
n = nib.Nifti1Image(im, np.eye(4))
nib.save(n, os.path.join(root_dir, f"img{i}.nii.gz"))
n = nib.Nifti1Image(seg, np.eye(4))
nib.save(n, os.path.join(root_dir, f"seg{i}.nii.gz"))
images = sorted(glob.glob(os.path.join(root_dir, "img*.nii.gz")))
segs = sorted(glob.glob(os.path.join(root_dir, "seg*.nii.gz")))
###Output
_____no_output_____
###Markdown
Prepare transforms and datasets
###Code
train_files = [
{"img": img, "seg": seg} for img, seg in zip(images[:20], segs[:20])
]
val_files = [
{"img": img, "seg": seg} for img, seg in zip(images[-20:], segs[-20:])
]
# define transforms for image and segmentation
train_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
RandCropByPosNegLabeld(
keys=["img", "seg"],
label_key="seg",
spatial_size=[96, 96, 96],
pos=1,
neg=1,
num_samples=4,
),
RandRotate90d(keys=["img", "seg"], prob=0.5, spatial_axes=[0, 2]),
EnsureTyped(keys=["img", "seg"]),
]
)
val_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
EnsureTyped(keys=["img", "seg"]),
]
)
# define dataset, data loader
check_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
check_loader = torch.utils.data.DataLoader(
check_ds, batch_size=2, num_workers=4, collate_fn=list_data_collate
)
check_data = first(check_loader)
print(check_data["img"].shape, check_data["seg"].shape)
# create a training data loader
train_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=2,
shuffle=True,
num_workers=4,
collate_fn=list_data_collate,
pin_memory=torch.cuda.is_available(),
)
# create a validation data loader
val_ds = Dataset(data=val_files, transform=val_transforms)
val_loader = torch.utils.data.DataLoader(
val_ds, batch_size=1, num_workers=4, collate_fn=list_data_collate
)
###Output
_____no_output_____
###Markdown
Prepare model, optimizer and metrics
###Code
# create UNet, DiceLoss and Adam optimizer
# device = torch.device("cuda:0") # you don't need device, because Catalyst uses autoscaling
model = UNet(
dimensions=3,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
)
loss_function = DiceLoss(sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-3)
dice_metric = DiceMetric(include_background=True, reduction="mean")
post_trans = Compose(
[EnsureType(), Activations(sigmoid=True), AsDiscrete(threshold_values=True)]
)
###Output
_____no_output_____
###Markdown
[Catalyst](https://github.com/catalyst-team/catalyst) experiment Setup Runner
###Code
class MonaiSupervisedRunner(catalyst.dl.SupervisedRunner):
def forward(self, batch):
if self.is_train_loader:
output = {self.output_key: self.model(batch[self.input_key])}
elif self.is_valid_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
elif self.is_infer_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
batch = self._batch2device(batch, self.device)
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
output = {**output, **batch}
return output
###Output
_____no_output_____
###Markdown
Run experiment
###Code
# define metric function to match MONAI API
def get_metric(y_pred, y):
y_pred = [post_trans(i) for i in decollate_batch(y_pred)]
dice_metric(y_pred=y_pred, y=y)
metric = dice_metric.aggregate().item()
dice_metric.reset()
return metric
max_epochs = 50
val_interval = 2
log_dir = os.path.join(root_dir, "logs")
runner = MonaiSupervisedRunner(
input_key="img", input_target_key="seg", output_key="logits"
) # you can also specify `device` here
runner.train(
loaders={"train": train_loader, "valid": val_loader},
model=model,
criterion=loss_function,
optimizer=optimizer,
num_epochs=max_epochs,
logdir=log_dir,
main_metric="dice_metric",
minimize_metric=False,
verbose=False,
timeit=True, # let's use minimal logs, but with time checkers
callbacks={
"loss": catalyst.dl.CriterionCallback(
input_key="seg", output_key="logits"
),
"periodic_valid": catalyst.dl.PeriodicLoaderCallback(
valid=val_interval
),
"dice_metric": catalyst.dl.MetricCallback(
prefix="dice_metric",
metric_fn=lambda y_pred, y: get_metric(y_pred, y),
input_key="seg",
output_key="logits",
),
},
load_best_on_end=True, # user-friendly API :)
)
###Output
_____no_output_____
###Markdown
Tensorboard logs
###Code
%load_ext tensorboard
%tensorboard --logdir=$log_dir
###Output
_____no_output_____
###Markdown
Best model performance visualisation
###Code
for i, valid_output in enumerate(runner.predict_loader(loader=val_loader)):
if i > 4:
break
plt.figure("check", (9, 3))
plt.subplot(1, 3, 1)
plt.title("image " + str(i))
plt.imshow(valid_output["img"].detach().cpu()[0, 0, :, :, 48], cmap="gray")
plt.subplot(1, 3, 2)
plt.title("label " + str(i))
plt.imshow(valid_output["seg"].detach().cpu()[0, 0, :, :, 48])
plt.subplot(1, 3, 3)
plt.title("output " + str(i))
logits = valid_output["logits"]
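    # Note added for clarity: the raw logits are thresholded at 0.5 here without a sigmoid,
    # which is a slightly stricter cut-off than post_trans uses (a logit of 0.5 corresponds
    # to a probability of roughly 0.62).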
plt.imshow((logits[0] > 0.5).float().detach().cpu()[0, :, :, 48])
plt.show()
###Output
_____no_output_____
###Markdown
Cleanup data directory
Remove the directory if a temporary one was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
3D segmentation with [MONAI](https://github.com/Project-MONAI/MONAI) and [Catalyst](https://github.com/catalyst-team/catalyst)
This tutorial demonstrates how [MONAI](https://github.com/Project-MONAI/MONAI) can be used with the [Catalyst](https://github.com/catalyst-team/catalyst) framework for a 3D segmentation task, and how to easily use the following features:
* Prepare synthetic data.
* Load Nifti images with metadata.
* Transforms for dictionary-format data.
* Add a channel dimension to the data if there is none.
* Scale medical image intensity to an expected range.
* Crop out a batch of balanced images based on the positive / negative label ratio.
* 3D UNet model, Dice loss function, and Mean Dice metric for the 3D segmentation task.
* Sliding window inference method.
* Deterministic training for reproducibility.
This tutorial is based on [unet_training_dict.py](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/torch/unet_training_dict.py) and [spleen_segmentation_3d.ipynb](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d.ipynb).
[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/3d_segmentation/unet_segmentation_3d_catalyst.ipynb)
Setup environment
###Code
!python -c "import monai" || pip install -q "monai[nibabel, tensorboard]"
!python -c "import matplotlib" || pip install -q matplotlib
!python -c "import catalyst" || pip install -q catalyst==20.07
%matplotlib inline
###Output
_____no_output_____
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import logging
import os
import shutil
import sys
import tempfile
import catalyst.dl
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
from monai.config import print_config
from monai.data import Dataset, create_test_image_3d, list_data_collate
from monai.inferers import sliding_window_inference
from monai.losses import DiceLoss
from monai.metrics import DiceMetric
from monai.networks.nets import UNet
from monai.transforms import (
Activations,
AsChannelFirstd,
AsDiscrete,
Compose,
LoadImaged,
RandCropByPosNegLabeld,
RandRotate90d,
ScaleIntensityd,
ToTensord,
)
from monai.utils import first
import torch
print_config()
###Output
MONAI version: 0.4.0
Numpy version: 1.19.1
Pytorch version: 1.7.0a0+7036e91
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 0563a4467fa602feca92d91c7f47261868d171a1
Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.2.1
scikit-image version: 0.15.0
Pillow version: 8.0.1
Tensorboard version: 2.2.0
gdown version: 3.12.2
TorchVision version: 0.8.0a0
ITK version: 5.1.2
tqdm version: 4.54.1
lmdb version: 1.0.0
psutil version: 5.7.2
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Setup data directory
You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
/workspace/data/medical
###Markdown
Setup logging
###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown
[MONAI](https://github.com/Project-MONAI/MONAI) components Prepare synthetic data
###Code
for i in range(40):
im, seg = create_test_image_3d(
128, 128, 128, num_seg_classes=1, channel_dim=-1
)
n = nib.Nifti1Image(im, np.eye(4))
nib.save(n, os.path.join(root_dir, f"img{i}.nii.gz"))
n = nib.Nifti1Image(seg, np.eye(4))
nib.save(n, os.path.join(root_dir, f"seg{i}.nii.gz"))
images = sorted(glob.glob(os.path.join(root_dir, "img*.nii.gz")))
segs = sorted(glob.glob(os.path.join(root_dir, "seg*.nii.gz")))
###Output
_____no_output_____
###Markdown
Prepare transforms and datasets
###Code
train_files = [
{"img": img, "seg": seg} for img, seg in zip(images[:20], segs[:20])
]
val_files = [
{"img": img, "seg": seg} for img, seg in zip(images[-20:], segs[-20:])
]
# define transforms for image and segmentation
train_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
RandCropByPosNegLabeld(
keys=["img", "seg"],
label_key="seg",
spatial_size=[96, 96, 96],
pos=1,
neg=1,
num_samples=4,
),
RandRotate90d(keys=["img", "seg"], prob=0.5, spatial_axes=[0, 2]),
ToTensord(keys=["img", "seg"]),
]
)
val_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
ToTensord(keys=["img", "seg"]),
]
)
# define dataset, data loader
check_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
check_loader = torch.utils.data.DataLoader(
check_ds, batch_size=2, num_workers=4, collate_fn=list_data_collate
)
check_data = first(check_loader)
print(check_data["img"].shape, check_data["seg"].shape)
# create a training data loader
train_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=2,
shuffle=True,
num_workers=4,
collate_fn=list_data_collate,
pin_memory=torch.cuda.is_available(),
)
# create a validation data loader
val_ds = Dataset(data=val_files, transform=val_transforms)
val_loader = torch.utils.data.DataLoader(
val_ds, batch_size=1, num_workers=4, collate_fn=list_data_collate
)
###Output
_____no_output_____
###Markdown
Prepare model, optimizer and metrics
###Code
# create UNet, DiceLoss and Adam optimizer
# device = torch.device("cuda:0") # you don't need device, because Catalyst uses autoscaling
model = UNet(
dimensions=3,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
)
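# Comment added for clarity: this UNet has five resolution levels (16 -> 256 feature
# channels); each stride-2 stage halves the spatial size, and num_res_units=2 adds two
# residual units per encode/decode block.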
loss_function = DiceLoss(sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-3)
dice_metric = DiceMetric(include_background=True, reduction="mean")
post_trans = Compose(
[Activations(sigmoid=True), AsDiscrete(threshold_values=True)]
)
###Output
_____no_output_____
###Markdown
[Catalyst](https://github.com/catalyst-team/catalyst) experiment Setup Runner
###Code
class MonaiSupervisedRunner(catalyst.dl.SupervisedRunner):
def forward(self, batch):
if self.is_train_loader:
output = {self.output_key: self.model(batch[self.input_key])}
elif self.is_valid_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
elif self.is_infer_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
batch = self._batch2device(batch, self.device)
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
output = {**output, **batch}
return output
###Output
_____no_output_____
###Markdown
Run experiment
###Code
max_epochs = 600
val_interval = 2
log_dir = os.path.join(root_dir, "logs")
runner = MonaiSupervisedRunner(
input_key="img", input_target_key="seg", output_key="logits"
) # you can also specify `device` here
runner.train(
loaders={"train": train_loader, "valid": val_loader},
model=model,
criterion=loss_function,
optimizer=optimizer,
num_epochs=max_epochs,
logdir=log_dir,
main_metric="dice_metric",
minimize_metric=False,
verbose=False,
timeit=True, # let's use minimal logs, but with time checkers
callbacks={
"loss": catalyst.dl.CriterionCallback(
input_key="seg", output_key="logits"
),
"periodic_valid": catalyst.dl.PeriodicLoaderCallback(
valid=val_interval
),
"dice_metric": catalyst.dl.MetricCallback(
prefix="dice_metric",
metric_fn=lambda y_pred, y: dice_metric(post_trans(y_pred), y)[0],
input_key="seg",
output_key="logits",
),
},
load_best_on_end=True, # user-friendly API :)
)
###Output
_____no_output_____
###Markdown
Tensorboard logs
###Code
%load_ext tensorboard
%tensorboard --logdir=$log_dir
###Output
_____no_output_____
###Markdown
Best model performance visualisation
###Code
for i, valid_output in enumerate(runner.predict_loader(loader=val_loader)):
if i > 4:
break
plt.figure("check", (9, 3))
plt.subplot(1, 3, 1)
plt.title("image " + str(i))
plt.imshow(valid_output["img"].detach().cpu()[0, 0, :, :, 48], cmap="gray")
plt.subplot(1, 3, 2)
plt.title("label " + str(i))
plt.imshow(valid_output["seg"].detach().cpu()[0, 0, :, :, 48])
plt.subplot(1, 3, 3)
plt.title("output " + str(i))
logits = valid_output["logits"]
plt.imshow((logits[0] > 0.5).float().detach().cpu()[0, :, :, 48])
plt.show()
###Output
_____no_output_____
###Markdown
Cleanup data directory
Remove the directory if a temporary one was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
3D segmentation with [MONAI](https://github.com/Project-MONAI/MONAI) and [Catalyst](https://github.com/catalyst-team/catalyst)
This tutorial demonstrates how [MONAI](https://github.com/Project-MONAI/MONAI) can be used with the [Catalyst](https://github.com/catalyst-team/catalyst) framework for a 3D segmentation task, and how to easily use the following features:
* Prepare synthetic data.
* Load Nifti images with metadata.
* Transforms for dictionary-format data.
* Add a channel dimension to the data if there is none.
* Scale medical image intensity to an expected range.
* Crop out a batch of balanced images based on the positive / negative label ratio.
* 3D UNet model, Dice loss function, and Mean Dice metric for the 3D segmentation task.
* Sliding window inference method.
* Deterministic training for reproducibility.
This tutorial is based on [unet_training_dict.py](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/torch/unet_training_dict.py) and [spleen_segmentation_3d.ipynb](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d.ipynb).
[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/3d_segmentation/unet_segmentation_3d_catalyst.ipynb)
Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[nibabel, tensorboard]"
!python -c "import matplotlib" || pip install -q matplotlib
!python -c "import catalyst" || pip install -q catalyst==20.07
%matplotlib inline
###Output
_____no_output_____
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import logging
import os
import shutil
import sys
import tempfile
import catalyst.dl
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
from monai.config import print_config
from monai.data import Dataset, create_test_image_3d, list_data_collate, decollate_batch
from monai.inferers import sliding_window_inference
from monai.losses import DiceLoss
from monai.metrics import DiceMetric
from monai.networks.nets import UNet
from monai.transforms import (
Activations,
AsChannelFirstd,
AsDiscrete,
Compose,
LoadImaged,
RandCropByPosNegLabeld,
RandRotate90d,
ScaleIntensityd,
EnsureTyped,
EnsureType,
)
from monai.utils import first
import torch
print_config()
###Output
/opt/conda/lib/python3.8/site-packages/tqdm/std.py:725: FutureWarning: The Panel class is removed from pandas. Accessing it from the top-level namespace will also be removed in the next version
from pandas import Panel
###Markdown
Setup data directory
You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
/workspace/data/medical
###Markdown
Setup logging
###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown
[MONAI](https://github.com/Project-MONAI/MONAI) components Prepare synthetic data
###Code
for i in range(40):
im, seg = create_test_image_3d(
128, 128, 128, num_seg_classes=1, channel_dim=-1
)
n = nib.Nifti1Image(im, np.eye(4))
nib.save(n, os.path.join(root_dir, f"img{i}.nii.gz"))
n = nib.Nifti1Image(seg, np.eye(4))
nib.save(n, os.path.join(root_dir, f"seg{i}.nii.gz"))
images = sorted(glob.glob(os.path.join(root_dir, "img*.nii.gz")))
segs = sorted(glob.glob(os.path.join(root_dir, "seg*.nii.gz")))
###Output
_____no_output_____
###Markdown
Prepare transforms and datasets
###Code
train_files = [
{"img": img, "seg": seg} for img, seg in zip(images[:20], segs[:20])
]
val_files = [
{"img": img, "seg": seg} for img, seg in zip(images[-20:], segs[-20:])
]
# define transforms for image and segmentation
train_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
RandCropByPosNegLabeld(
keys=["img", "seg"],
label_key="seg",
spatial_size=[96, 96, 96],
pos=1,
neg=1,
num_samples=4,
),
RandRotate90d(keys=["img", "seg"], prob=0.5, spatial_axes=[0, 2]),
EnsureTyped(keys=["img", "seg"]),
]
)
val_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
EnsureTyped(keys=["img", "seg"]),
]
)
# define dataset, data loader
check_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
check_loader = torch.utils.data.DataLoader(
check_ds, batch_size=2, num_workers=4, collate_fn=list_data_collate
)
check_data = first(check_loader)
print(check_data["img"].shape, check_data["seg"].shape)
# create a training data loader
train_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=2,
shuffle=True,
num_workers=4,
collate_fn=list_data_collate,
pin_memory=torch.cuda.is_available(),
)
# create a validation data loader
val_ds = Dataset(data=val_files, transform=val_transforms)
val_loader = torch.utils.data.DataLoader(
val_ds, batch_size=1, num_workers=4, collate_fn=list_data_collate
)
###Output
_____no_output_____
###Markdown
Prepare model, optimizer and metrics
###Code
# create UNet, DiceLoss and Adam optimizer
# device = torch.device("cuda:0") # you don't need device, because Catalyst uses autoscaling
model = UNet(
spatial_dims=3,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
)
loss_function = DiceLoss(sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-3)
dice_metric = DiceMetric(include_background=True, reduction="mean")
post_trans = Compose(
[EnsureType(), Activations(sigmoid=True), AsDiscrete(threshold_values=True)]
)
###Output
_____no_output_____
###Markdown
[Catalyst](https://github.com/catalyst-team/catalyst) experiment Setup Runner
###Code
class MonaiSupervisedRunner(catalyst.dl.SupervisedRunner):
def forward(self, batch):
if self.is_train_loader:
output = {self.output_key: self.model(batch[self.input_key])}
elif self.is_valid_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
elif self.is_infer_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
batch = self._batch2device(batch, self.device)
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
output = {**output, **batch}
return output
###Output
_____no_output_____
###Markdown
Run experiment
###Code
# define metric function to match MONAI API
def get_metric(y_pred, y):
y_pred = [post_trans(i) for i in decollate_batch(y_pred)]
dice_metric(y_pred=y_pred, y=y)
metric = dice_metric.aggregate().item()
dice_metric.reset()
return metric
max_epochs = 50
val_interval = 2
log_dir = os.path.join(root_dir, "logs")
runner = MonaiSupervisedRunner(
input_key="img", input_target_key="seg", output_key="logits"
) # you can also specify `device` here
runner.train(
loaders={"train": train_loader, "valid": val_loader},
model=model,
criterion=loss_function,
optimizer=optimizer,
num_epochs=max_epochs,
logdir=log_dir,
main_metric="dice_metric",
minimize_metric=False,
verbose=False,
timeit=True, # let's use minimal logs, but with time checkers
callbacks={
"loss": catalyst.dl.CriterionCallback(
input_key="seg", output_key="logits"
),
"periodic_valid": catalyst.dl.PeriodicLoaderCallback(
valid=val_interval
),
"dice_metric": catalyst.dl.MetricCallback(
prefix="dice_metric",
metric_fn=lambda y_pred, y: get_metric(y_pred, y),
input_key="seg",
output_key="logits",
),
},
load_best_on_end=True, # user-friendly API :)
)
###Output
_____no_output_____
###Markdown
Tensorboard logs
###Code
%load_ext tensorboard
%tensorboard --logdir=$log_dir
###Output
_____no_output_____
###Markdown
Best model performance visualisation
###Code
for i, valid_output in enumerate(runner.predict_loader(loader=val_loader)):
if i > 4:
break
plt.figure("check", (9, 3))
plt.subplot(1, 3, 1)
plt.title("image " + str(i))
plt.imshow(valid_output["img"].detach().cpu()[0, 0, :, :, 48], cmap="gray")
plt.subplot(1, 3, 2)
plt.title("label " + str(i))
plt.imshow(valid_output["seg"].detach().cpu()[0, 0, :, :, 48])
plt.subplot(1, 3, 3)
plt.title("output " + str(i))
logits = valid_output["logits"]
plt.imshow((logits[0] > 0.5).float().detach().cpu()[0, :, :, 48])
plt.show()
###Output
_____no_output_____
###Markdown
Cleanup data directory
Remove the directory if a temporary one was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
3D segmentation with [MONAI](https://github.com/Project-MONAI/MONAI) and [Catalyst](https://github.com/catalyst-team/catalyst)
This tutorial demonstrates how [MONAI](https://github.com/Project-MONAI/MONAI) can be used with the [Catalyst](https://github.com/catalyst-team/catalyst) framework for a 3D segmentation task, and how to easily use the following features:
* Prepare synthetic data.
* Load Nifti images with metadata.
* Transforms for dictionary-format data.
* Add a channel dimension to the data if there is none.
* Scale medical image intensity to an expected range.
* Crop out a batch of balanced images based on the positive / negative label ratio.
* 3D UNet model, Dice loss function, and Mean Dice metric for the 3D segmentation task.
* Sliding window inference method.
* Deterministic training for reproducibility.
This tutorial is based on [unet_training_dict.py](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/torch/unet_training_dict.py) and [spleen_segmentation_3d.ipynb](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d.ipynb).
[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/3d_segmentation/unet_segmentation_3d_catalyst.ipynb)
Setup environment
###Code
%pip install -q "monai[nibabel, tensorboard]"
%pip install -q matplotlib
%matplotlib inline
%pip install -q catalyst==20.07
###Output
_____no_output_____
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import logging
import os
import shutil
import sys
import tempfile
import catalyst.dl
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
import torch
from monai.config import print_config
from monai.data import Dataset, create_test_image_3d, list_data_collate
from monai.inferers import sliding_window_inference
from monai.losses import DiceLoss
from monai.metrics import DiceMetric
from monai.networks.nets import UNet
from monai.transforms import (
AsChannelFirstd,
Compose,
LoadNiftid,
RandCropByPosNegLabeld,
RandRotate90d,
ScaleIntensityd,
ToTensord,
)
from monai.utils import first
print_config()
###Output
MONAI version: 0.2.0
Python version: 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) [GCC 7.3.0]
Numpy version: 1.18.1
Pytorch version: 1.6.0
Optional dependencies:
Pytorch Ignite version: 0.3.0
Nibabel version: 3.1.1
scikit-image version: 0.15.0
Pillow version: 7.2.0
Tensorboard version: 2.1.0
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Setup data directory
You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
/workspace/data/medical
###Markdown
Setup logging
###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown
[MONAI](https://github.com/Project-MONAI/MONAI) components Prepare synthetic data
###Code
for i in range(40):
im, seg = create_test_image_3d(128, 128, 128, num_seg_classes=1, channel_dim=-1)
n = nib.Nifti1Image(im, np.eye(4))
nib.save(n, os.path.join(root_dir, f"img{i}.nii.gz"))
n = nib.Nifti1Image(seg, np.eye(4))
nib.save(n, os.path.join(root_dir, f"seg{i}.nii.gz"))
images = sorted(glob.glob(os.path.join(root_dir, "img*.nii.gz")))
segs = sorted(glob.glob(os.path.join(root_dir, "seg*.nii.gz")))
###Output
_____no_output_____
###Markdown
Prepare transforms and datasets
###Code
train_files = [{"img": img, "seg": seg} for img, seg in zip(images[:20], segs[:20])]
val_files = [{"img": img, "seg": seg} for img, seg in zip(images[-20:], segs[-20:])]
# define transforms for image and segmentation
train_transforms = Compose(
[
LoadNiftid(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
RandCropByPosNegLabeld(
keys=["img", "seg"],
label_key="seg",
spatial_size=[96, 96, 96],
pos=1,
neg=1,
num_samples=4,
),
RandRotate90d(keys=["img", "seg"], prob=0.5, spatial_axes=[0, 2]),
ToTensord(keys=["img", "seg"]),
]
)
val_transforms = Compose(
[
LoadNiftid(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
ToTensord(keys=["img", "seg"]),
]
)
# define dataset, data loader
check_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
check_loader = torch.utils.data.DataLoader(
check_ds, batch_size=2, num_workers=4, collate_fn=list_data_collate
)
check_data = first(check_loader)
print(check_data["img"].shape, check_data["seg"].shape)
# create a training data loader
train_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=2,
shuffle=True,
num_workers=4,
collate_fn=list_data_collate,
pin_memory=torch.cuda.is_available(),
)
# create a validation data loader
val_ds = Dataset(data=val_files, transform=val_transforms)
val_loader = torch.utils.data.DataLoader(
val_ds, batch_size=1, num_workers=4, collate_fn=list_data_collate
)
###Output
_____no_output_____
###Markdown
Prepare model, optimizer and metrics
###Code
# create UNet, DiceLoss and Adam optimizer
# device = torch.device("cuda:0") # you don't need device, because Catalyst uses autoscaling
model = UNet(
dimensions=3,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
)
loss_function = DiceLoss(sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-3)
dice_metric = DiceMetric(include_background=True, to_onehot_y=False, sigmoid=True, reduction="mean")
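# Comment added for clarity, based on this older MONAI API: sigmoid=True tells DiceMetric
# to apply the sigmoid to the predictions itself, which is why the callback below can pass
# the raw logits straight to it.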
###Output
_____no_output_____
###Markdown
[Catalyst](https://github.com/catalyst-team/catalyst) experiment Setup Runner
###Code
class MonaiSupervisedRunner(catalyst.dl.SupervisedRunner):
def forward(self, batch):
if self.is_train_loader:
output = {self.output_key: self.model(batch[self.input_key])}
elif self.is_valid_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
elif self.is_infer_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
batch = self._batch2device(batch, self.device)
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
output = {**output, **batch}
return output
###Output
_____no_output_____
###Markdown
Run experiment
###Code
log_dir = os.path.join(root_dir, "logs")
runner = MonaiSupervisedRunner(
input_key="img", input_target_key="seg", output_key="logits"
) # you can also specify `device` here
runner.train(
loaders={"train": train_loader, "valid": val_loader},
model=model,
criterion=loss_function,
optimizer=optimizer,
num_epochs=6,
logdir=log_dir,
main_metric="dice_metric",
minimize_metric=False,
verbose=False,
timeit=True, # let's use minimal logs, but with time checkers
callbacks={
"loss": catalyst.dl.CriterionCallback(input_key="seg", output_key="logits"),
"periodic_valid": catalyst.dl.PeriodicLoaderCallback(valid=2),
"dice_metric": catalyst.dl.MetricCallback(
prefix="dice_metric", metric_fn=dice_metric, input_key="seg", output_key="logits"
),
},
load_best_on_end=True, # user-friendly API :)
)
###Output
_____no_output_____
###Markdown
Tensorboard logs
###Code
%load_ext tensorboard
%tensorboard --logdir=$log_dir
###Output
_____no_output_____
###Markdown
Best model performance visualisation
###Code
for i, valid_output in enumerate(runner.predict_loader(loader=val_loader)):
if i > 4:
break
plt.figure("check", (9, 3))
plt.subplot(1, 3, 1)
plt.title("image " + str(i))
plt.imshow(valid_output["img"].detach().cpu()[0, 0, :, :, 48], cmap="gray")
plt.subplot(1, 3, 2)
plt.title("label " + str(i))
plt.imshow(valid_output["seg"].detach().cpu()[0, 0, :, :, 48])
plt.subplot(1, 3, 3)
plt.title("output " + str(i))
logits = valid_output["logits"]
plt.imshow((logits[0] > 0.5).float().detach().cpu()[0, :, :, 48])
plt.show()
###Output
_____no_output_____
###Markdown
Cleanup data directory
Remove the directory if a temporary one was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
3D segmentation with [MONAI](https://github.com/Project-MONAI/MONAI) and [Catalyst](https://github.com/catalyst-team/catalyst)
This tutorial demonstrates how [MONAI](https://github.com/Project-MONAI/MONAI) can be used with the [Catalyst](https://github.com/catalyst-team/catalyst) framework for a 3D segmentation task, and how to easily use the following features:
* Prepare synthetic data.
* Load Nifti images with metadata.
* Transforms for dictionary-format data.
* Add a channel dimension to the data if there is none.
* Scale medical image intensity to an expected range.
* Crop out a batch of balanced images based on the positive / negative label ratio.
* 3D UNet model, Dice loss function, and Mean Dice metric for the 3D segmentation task.
* Sliding window inference method.
* Deterministic training for reproducibility.
This tutorial is based on [unet_training_dict.py](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/torch/unet_training_dict.py) and [spleen_segmentation_3d.ipynb](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d.ipynb).
[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/3d_segmentation/unet_segmentation_3d_catalyst.ipynb)
Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[nibabel, tensorboard]"
!python -c "import matplotlib" || pip install -q matplotlib
!python -c "import catalyst" || pip install -q catalyst==20.07
%matplotlib inline
###Output
_____no_output_____
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import logging
import os
import shutil
import sys
import tempfile
import catalyst.dl
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
from monai.config import print_config
from monai.data import Dataset, create_test_image_3d, list_data_collate
from monai.inferers import sliding_window_inference
from monai.losses import DiceLoss
from monai.metrics import DiceMetric
from monai.networks.nets import UNet
from monai.transforms import (
Activations,
AsChannelFirstd,
AsDiscrete,
Compose,
LoadImaged,
RandCropByPosNegLabeld,
RandRotate90d,
ScaleIntensityd,
ToTensord,
)
from monai.utils import first
import torch
print_config()
###Output
MONAI version: 0.4.0
Numpy version: 1.19.1
Pytorch version: 1.7.0a0+7036e91
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 0563a4467fa602feca92d91c7f47261868d171a1
Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.2.1
scikit-image version: 0.15.0
Pillow version: 8.0.1
Tensorboard version: 2.2.0
gdown version: 3.12.2
TorchVision version: 0.8.0a0
ITK version: 5.1.2
tqdm version: 4.54.1
lmdb version: 1.0.0
psutil version: 5.7.2
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Setup data directoryYou can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
/workspace/data/medical
###Markdown
Setup logging
###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown
[MONAI](https://github.com/Project-MONAI/MONAI) components Prepare synthetic data
###Code
for i in range(40):
im, seg = create_test_image_3d(
128, 128, 128, num_seg_classes=1, channel_dim=-1
)
n = nib.Nifti1Image(im, np.eye(4))
nib.save(n, os.path.join(root_dir, f"img{i}.nii.gz"))
n = nib.Nifti1Image(seg, np.eye(4))
nib.save(n, os.path.join(root_dir, f"seg{i}.nii.gz"))
images = sorted(glob.glob(os.path.join(root_dir, "img*.nii.gz")))
segs = sorted(glob.glob(os.path.join(root_dir, "seg*.nii.gz")))
###Output
_____no_output_____
###Markdown
Prepare transforms and datasets
###Code
train_files = [
{"img": img, "seg": seg} for img, seg in zip(images[:20], segs[:20])
]
val_files = [
{"img": img, "seg": seg} for img, seg in zip(images[-20:], segs[-20:])
]
# define transforms for image and segmentation
train_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
RandCropByPosNegLabeld(
keys=["img", "seg"],
label_key="seg",
spatial_size=[96, 96, 96],
pos=1,
neg=1,
num_samples=4,
),
RandRotate90d(keys=["img", "seg"], prob=0.5, spatial_axes=[0, 2]),
ToTensord(keys=["img", "seg"]),
]
)
val_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
ToTensord(keys=["img", "seg"]),
]
)
# define dataset, data loader
check_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
check_loader = torch.utils.data.DataLoader(
check_ds, batch_size=2, num_workers=4, collate_fn=list_data_collate
)
check_data = first(check_loader)
print(check_data["img"].shape, check_data["seg"].shape)
# create a training data loader
train_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=2,
shuffle=True,
num_workers=4,
collate_fn=list_data_collate,
pin_memory=torch.cuda.is_available(),
)
# create a validation data loader
val_ds = Dataset(data=val_files, transform=val_transforms)
val_loader = torch.utils.data.DataLoader(
val_ds, batch_size=1, num_workers=4, collate_fn=list_data_collate
)
###Output
_____no_output_____
###Markdown
Prepare model, optimizer and metrics
###Code
# create UNet, DiceLoss and Adam optimizer
# device = torch.device("cuda:0") # you don't need device, because Catalyst uses autoscaling
model = UNet(
dimensions=3,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
)
loss_function = DiceLoss(sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-3)
dice_metric = DiceMetric(include_background=True, reduction="mean")
post_trans = Compose(
[Activations(sigmoid=True), AsDiscrete(threshold_values=True)]
)
###Output
_____no_output_____
###Markdown
[Catalyst](https://github.com/catalyst-team/catalyst) experiment Setup Runner
###Code
class MonaiSupervisedRunner(catalyst.dl.SupervisedRunner):
def forward(self, batch):
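        # train loader: plain forward pass on the randomly cropped patches;
        # valid / infer loaders: sliding-window inference over the full volume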
if self.is_train_loader:
output = {self.output_key: self.model(batch[self.input_key])}
elif self.is_valid_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
elif self.is_infer_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
batch = self._batch2device(batch, self.device)
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
output = {**output, **batch}
return output
###Output
_____no_output_____
###Markdown
Run experiment
###Code
max_epochs = 600
val_interval = 2
log_dir = os.path.join(root_dir, "logs")
runner = MonaiSupervisedRunner(
input_key="img", input_target_key="seg", output_key="logits"
) # you can also specify `device` here
runner.train(
loaders={"train": train_loader, "valid": val_loader},
model=model,
criterion=loss_function,
optimizer=optimizer,
num_epochs=max_epochs,
logdir=log_dir,
main_metric="dice_metric",
minimize_metric=False,
verbose=False,
timeit=True, # let's use minimal logs, but with time checkers
callbacks={
"loss": catalyst.dl.CriterionCallback(
input_key="seg", output_key="logits"
),
"periodic_valid": catalyst.dl.PeriodicLoaderCallback(
valid=val_interval
),
"dice_metric": catalyst.dl.MetricCallback(
prefix="dice_metric",
metric_fn=lambda y_pred, y: dice_metric(post_trans(y_pred), y)[0],
input_key="seg",
output_key="logits",
),
},
load_best_on_end=True, # user-friendly API :)
)
###Output
_____no_output_____
###Markdown
Tensorboard logs
###Code
%load_ext tensorboard
%tensorboard --logdir=$log_dir
###Output
_____no_output_____
###Markdown
Best model performance visualisation
###Code
for i, valid_output in enumerate(runner.predict_loader(loader=val_loader)):
if i > 4:
break
plt.figure("check", (9, 3))
plt.subplot(1, 3, 1)
plt.title("image " + str(i))
plt.imshow(valid_output["img"].detach().cpu()[0, 0, :, :, 48], cmap="gray")
plt.subplot(1, 3, 2)
plt.title("label " + str(i))
plt.imshow(valid_output["seg"].detach().cpu()[0, 0, :, :, 48])
plt.subplot(1, 3, 3)
plt.title("output " + str(i))
logits = valid_output["logits"]
plt.imshow((logits[0] > 0.5).float().detach().cpu()[0, :, :, 48])
plt.show()
###Output
_____no_output_____
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
3D segmentation with [MONAI](https://github.com/Project-MONAI/MONAI) and [Catalyst](https://github.com/catalyst-team/catalyst)This tutorial demonstrates how [MONAI](https://github.com/Project-MONAI/MONAI) can be used with the [Catalyst](https://github.com/catalyst-team/catalyst) framework for a 3D segmentation task, and how to easily use the following features:* Prepare synthetic data.* Load Nifti image with metadata.* Transforms for dictionary format data.* Add channel dim to the data if no channel dimension.* Scale medical image intensity with expected range.* Crop out a batch of balanced images based on positive / negative label ratio.* 3D UNet model, Dice loss function, Mean Dice metric for 3D segmentation task.* Sliding window inference method.* Deterministic training for reproducibility.This tutorial is based on [unet_training_dict.py](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/torch/unet_training_dict.py) and [spleen_segmentation_3d.ipynb](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d.ipynb).[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/3d_segmentation/unet_segmentation_3d_catalyst.ipynb) Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[nibabel, tensorboard]"
!python -c "import matplotlib" || pip install -q matplotlib
!python -c "import catalyst" || pip install -q catalyst==20.07
%matplotlib inline
###Output
_____no_output_____
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import logging
import os
import shutil
import sys
import tempfile
import catalyst.dl
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
from monai.config import print_config
from monai.data import Dataset, create_test_image_3d, list_data_collate, decollate_batch
from monai.inferers import sliding_window_inference
from monai.losses import DiceLoss
from monai.metrics import DiceMetric
from monai.networks.nets import UNet
from monai.transforms import (
Activations,
AsChannelFirstd,
AsDiscrete,
Compose,
LoadImaged,
RandCropByPosNegLabeld,
RandRotate90d,
ScaleIntensityd,
EnsureTyped,
EnsureType,
)
from monai.utils import first
import torch
print_config()
###Output
/opt/conda/lib/python3.8/site-packages/tqdm/std.py:725: FutureWarning: The Panel class is removed from pandas. Accessing it from the top-level namespace will also be removed in the next version
from pandas import Panel
###Markdown
Setup data directoryYou can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
/workspace/data/medical
###Markdown
Setup logging
###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown
[MONAI](https://github.com/Project-MONAI/MONAI) components Prepare synthetic data
###Code
for i in range(40):
im, seg = create_test_image_3d(
128, 128, 128, num_seg_classes=1, channel_dim=-1
)
n = nib.Nifti1Image(im, np.eye(4))
nib.save(n, os.path.join(root_dir, f"img{i}.nii.gz"))
n = nib.Nifti1Image(seg, np.eye(4))
nib.save(n, os.path.join(root_dir, f"seg{i}.nii.gz"))
images = sorted(glob.glob(os.path.join(root_dir, "img*.nii.gz")))
segs = sorted(glob.glob(os.path.join(root_dir, "seg*.nii.gz")))
###Output
_____no_output_____
###Markdown
Prepare transforms and datasets
###Code
train_files = [
{"img": img, "seg": seg} for img, seg in zip(images[:20], segs[:20])
]
val_files = [
{"img": img, "seg": seg} for img, seg in zip(images[-20:], segs[-20:])
]
# define transforms for image and segmentation
train_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
RandCropByPosNegLabeld(
keys=["img", "seg"],
label_key="seg",
spatial_size=[96, 96, 96],
pos=1,
neg=1,
num_samples=4,
),
RandRotate90d(keys=["img", "seg"], prob=0.5, spatial_axes=[0, 2]),
EnsureTyped(keys=["img", "seg"]),
]
)
val_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
EnsureTyped(keys=["img", "seg"]),
]
)
# define dataset, data loader
check_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
check_loader = torch.utils.data.DataLoader(
check_ds, batch_size=2, num_workers=4, collate_fn=list_data_collate
)
check_data = first(check_loader)
print(check_data["img"].shape, check_data["seg"].shape)
# create a training data loader
train_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=2,
shuffle=True,
num_workers=4,
collate_fn=list_data_collate,
pin_memory=torch.cuda.is_available(),
)
# create a validation data loader
val_ds = Dataset(data=val_files, transform=val_transforms)
val_loader = torch.utils.data.DataLoader(
val_ds, batch_size=1, num_workers=4, collate_fn=list_data_collate
)
###Output
_____no_output_____
###Markdown
Prepare model, optimizer and metrics
###Code
# create UNet, DiceLoss and Adam optimizer
# device = torch.device("cuda:0") # you don't need device, because Catalyst uses autoscaling
model = UNet(
spatial_dims=3,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
)
loss_function = DiceLoss(sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-3)
dice_metric = DiceMetric(include_background=True, reduction="mean")
post_trans = Compose(
[EnsureType(), Activations(sigmoid=True), AsDiscrete(threshold_values=True)]
)
###Output
_____no_output_____
###Markdown
[Catalyst](https://github.com/catalyst-team/catalyst) experiment Setup Runner
###Code
class MonaiSupervisedRunner(catalyst.dl.SupervisedRunner):
def forward(self, batch):
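        # train loader: plain forward pass on the randomly cropped patches;
        # valid / infer loaders: sliding-window inference over the full volume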
if self.is_train_loader:
output = {self.output_key: self.model(batch[self.input_key])}
elif self.is_valid_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
elif self.is_infer_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
batch = self._batch2device(batch, self.device)
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
output = {**output, **batch}
return output
###Output
_____no_output_____
###Markdown
Run experiment
###Code
# define metric function to match MONAI API
def get_metric(y_pred, y):
y_pred = [post_trans(i) for i in decollate_batch(y_pred)]
dice_metric(y_pred=y_pred, y=y)
metric = dice_metric.aggregate().item()
dice_metric.reset()
return metric
max_epochs = 50
val_interval = 2
log_dir = os.path.join(root_dir, "logs")
runner = MonaiSupervisedRunner(
input_key="img", input_target_key="seg", output_key="logits"
) # you can also specify `device` here
runner.train(
loaders={"train": train_loader, "valid": val_loader},
model=model,
criterion=loss_function,
optimizer=optimizer,
num_epochs=max_epochs,
logdir=log_dir,
main_metric="dice_metric",
minimize_metric=False,
verbose=False,
timeit=True, # let's use minimal logs, but with time checkers
callbacks={
"loss": catalyst.dl.CriterionCallback(
input_key="seg", output_key="logits"
),
"periodic_valid": catalyst.dl.PeriodicLoaderCallback(
valid=val_interval
),
"dice_metric": catalyst.dl.MetricCallback(
prefix="dice_metric",
metric_fn=lambda y_pred, y: get_metric(y_pred, y),
input_key="seg",
output_key="logits",
),
},
load_best_on_end=True, # user-friendly API :)
)
###Output
_____no_output_____
###Markdown
Tensorboard logs
###Code
%load_ext tensorboard
%tensorboard --logdir=$log_dir
###Output
_____no_output_____
###Markdown
Best model performance visualisation
###Code
for i, valid_output in enumerate(runner.predict_loader(loader=val_loader)):
if i > 4:
break
plt.figure("check", (9, 3))
plt.subplot(1, 3, 1)
plt.title("image " + str(i))
plt.imshow(valid_output["img"].detach().cpu()[0, 0, :, :, 48], cmap="gray")
plt.subplot(1, 3, 2)
plt.title("label " + str(i))
plt.imshow(valid_output["seg"].detach().cpu()[0, 0, :, :, 48])
plt.subplot(1, 3, 3)
plt.title("output " + str(i))
logits = valid_output["logits"]
plt.imshow((logits[0] > 0.5).float().detach().cpu()[0, :, :, 48])
plt.show()
###Output
_____no_output_____
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
3D segmentation with [MONAI](https://github.com/Project-MONAI/MONAI) and [Catalyst](https://github.com/catalyst-team/catalyst)This tutorial demonstrates how [MONAI](https://github.com/Project-MONAI/MONAI) can be used with the [Catalyst](https://github.com/catalyst-team/catalyst) framework for a 3D segmentation task, and how to easily use the following features:* Prepare synthetic data.* Load Nifti image with metadata.* Transforms for dictionary format data.* Add channel dim to the data if no channel dimension.* Scale medical image intensity with expected range.* Crop out a batch of balanced images based on positive / negative label ratio.* 3D UNet model, Dice loss function, Mean Dice metric for 3D segmentation task.* Sliding window inference method.* Deterministic training for reproducibility.This tutorial is based on [unet_training_dict.py](https://github.com/Project-MONAI/tutorials/blob/main/3d_segmentation/torch/unet_training_dict.py) and [spleen_segmentation_3d.ipynb](https://github.com/Project-MONAI/tutorials/blob/main/3d_segmentation/spleen_segmentation_3d.ipynb).[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/main/3d_segmentation/unet_segmentation_3d_catalyst.ipynb) Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[nibabel, tensorboard]"
!python -c "import matplotlib" || pip install -q matplotlib
!python -c "import catalyst" || pip install -q catalyst==20.07
%matplotlib inline
###Output
_____no_output_____
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import logging
import os
import shutil
import sys
import tempfile
import catalyst.dl
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
from monai.config import print_config
from monai.data import Dataset, create_test_image_3d, list_data_collate, decollate_batch
from monai.inferers import sliding_window_inference
from monai.losses import DiceLoss
from monai.metrics import DiceMetric
from monai.networks.nets import UNet
from monai.transforms import (
Activations,
AsChannelFirstd,
AsDiscrete,
Compose,
LoadImaged,
RandCropByPosNegLabeld,
RandRotate90d,
ScaleIntensityd,
EnsureTyped,
EnsureType,
)
from monai.utils import first
import torch
print_config()
###Output
/opt/conda/lib/python3.8/site-packages/tqdm/std.py:725: FutureWarning: The Panel class is removed from pandas. Accessing it from the top-level namespace will also be removed in the next version
from pandas import Panel
###Markdown
Setup data directoryYou can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified a temporary directory will be used.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
/workspace/data/medical
###Markdown
Setup logging
###Code
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
###Output
_____no_output_____
###Markdown
[MONAI](https://github.com/Project-MONAI/MONAI) components Prepare synthetic data
###Code
for i in range(40):
im, seg = create_test_image_3d(
128, 128, 128, num_seg_classes=1, channel_dim=-1
)
n = nib.Nifti1Image(im, np.eye(4))
nib.save(n, os.path.join(root_dir, f"img{i}.nii.gz"))
n = nib.Nifti1Image(seg, np.eye(4))
nib.save(n, os.path.join(root_dir, f"seg{i}.nii.gz"))
images = sorted(glob.glob(os.path.join(root_dir, "img*.nii.gz")))
segs = sorted(glob.glob(os.path.join(root_dir, "seg*.nii.gz")))
###Output
_____no_output_____
###Markdown
Prepare transforms and datasets
###Code
train_files = [
{"img": img, "seg": seg} for img, seg in zip(images[:20], segs[:20])
]
val_files = [
{"img": img, "seg": seg} for img, seg in zip(images[-20:], segs[-20:])
]
# define transforms for image and segmentation
train_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
RandCropByPosNegLabeld(
keys=["img", "seg"],
label_key="seg",
spatial_size=[96, 96, 96],
pos=1,
neg=1,
num_samples=4,
),
RandRotate90d(keys=["img", "seg"], prob=0.5, spatial_axes=[0, 2]),
EnsureTyped(keys=["img", "seg"]),
]
)
val_transforms = Compose(
[
LoadImaged(keys=["img", "seg"]),
AsChannelFirstd(keys=["img", "seg"], channel_dim=-1),
ScaleIntensityd(keys=["img", "seg"]),
EnsureTyped(keys=["img", "seg"]),
]
)
# define dataset, data loader
check_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
check_loader = torch.utils.data.DataLoader(
check_ds, batch_size=2, num_workers=4, collate_fn=list_data_collate
)
check_data = first(check_loader)
print(check_data["img"].shape, check_data["seg"].shape)
# create a training data loader
train_ds = Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=2,
shuffle=True,
num_workers=4,
collate_fn=list_data_collate,
pin_memory=torch.cuda.is_available(),
)
# create a validation data loader
val_ds = Dataset(data=val_files, transform=val_transforms)
val_loader = torch.utils.data.DataLoader(
val_ds, batch_size=1, num_workers=4, collate_fn=list_data_collate
)
###Output
_____no_output_____
###Markdown
Prepare model, optimizer and metrics
###Code
# create UNet, DiceLoss and Adam optimizer
# device = torch.device("cuda:0") # you don't need device, because Catalyst uses autoscaling
model = UNet(
spatial_dims=3,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
)
loss_function = DiceLoss(sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-3)
dice_metric = DiceMetric(include_background=True, reduction="mean")
post_trans = Compose(
[EnsureType(), Activations(sigmoid=True), AsDiscrete(threshold=0.5)]
)
###Output
_____no_output_____
###Markdown
[Catalyst](https://github.com/catalyst-team/catalyst) experiment Setup Runner
###Code
class MonaiSupervisedRunner(catalyst.dl.SupervisedRunner):
def forward(self, batch):
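        # train loader: plain forward pass on the randomly cropped patches;
        # valid / infer loaders: sliding-window inference over the full volume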
if self.is_train_loader:
output = {self.output_key: self.model(batch[self.input_key])}
elif self.is_valid_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
elif self.is_infer_loader:
roi_size = (96, 96, 96)
sw_batch_size = 4
batch = self._batch2device(batch, self.device)
output = {
self.output_key: sliding_window_inference(
batch[self.input_key], roi_size, sw_batch_size, self.model
)
}
output = {**output, **batch}
return output
###Output
_____no_output_____
###Markdown
Run experiment
###Code
# define metric function to match MONAI API
def get_metric(y_pred, y):
y_pred = [post_trans(i) for i in decollate_batch(y_pred)]
dice_metric(y_pred=y_pred, y=y)
metric = dice_metric.aggregate().item()
dice_metric.reset()
return metric
max_epochs = 50
val_interval = 2
log_dir = os.path.join(root_dir, "logs")
runner = MonaiSupervisedRunner(
input_key="img", input_target_key="seg", output_key="logits"
) # you can also specify `device` here
runner.train(
loaders={"train": train_loader, "valid": val_loader},
model=model,
criterion=loss_function,
optimizer=optimizer,
num_epochs=max_epochs,
logdir=log_dir,
main_metric="dice_metric",
minimize_metric=False,
verbose=False,
timeit=True, # let's use minimal logs, but with time checkers
callbacks={
"loss": catalyst.dl.CriterionCallback(
input_key="seg", output_key="logits"
),
"periodic_valid": catalyst.dl.PeriodicLoaderCallback(
valid=val_interval
),
"dice_metric": catalyst.dl.MetricCallback(
prefix="dice_metric",
metric_fn=lambda y_pred, y: get_metric(y_pred, y),
input_key="seg",
output_key="logits",
),
},
load_best_on_end=True, # user-friendly API :)
)
###Output
_____no_output_____
###Markdown
Tensorboard logs
###Code
%load_ext tensorboard
%tensorboard --logdir=$log_dir
###Output
_____no_output_____
###Markdown
Best model performance visualisation
###Code
for i, valid_output in enumerate(runner.predict_loader(loader=val_loader)):
if i > 4:
break
plt.figure("check", (9, 3))
plt.subplot(1, 3, 1)
plt.title("image " + str(i))
plt.imshow(valid_output["img"].detach().cpu()[0, 0, :, :, 48], cmap="gray")
plt.subplot(1, 3, 2)
plt.title("label " + str(i))
plt.imshow(valid_output["seg"].detach().cpu()[0, 0, :, :, 48])
plt.subplot(1, 3, 3)
plt.title("output " + str(i))
logits = valid_output["logits"]
plt.imshow((logits[0] > 0.5).float().detach().cpu()[0, :, :, 48])
plt.show()
###Output
_____no_output_____
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____ |
Anomaly Detection/Single-Objective Generative Adversarial Active Learning/SO_GAAL_MinMaxScaler.ipynb | ###Markdown
Single-Objective Generative Adversarial Active Learning with MinMaxScaler This code template is for anomaly detection/outlier analysis using the SO_GAAL algorithm implemented in the pyod library, with feature scaling done using MinMaxScaler.
Required Packages
###Code
!pip install plotly
!pip install pyod
import time
import warnings
import pandas as pd
import numpy as np
from scipy import stats
import seaborn as sns
import plotly.express as px
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from pyod.models.so_gaal import SO_GAAL
from sklearn.preprocessing import LabelEncoder,MinMaxScaler
from sklearn.model_selection import train_test_split
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
InitializationFilepath of CSV file
###Code
file_path= ''
###Output
_____no_output_____
###Markdown
List of features which are required for model training
###Code
features = []
###Output
_____no_output_____
###Markdown
Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.We will use the pandas library to read the CSV file from its storage path and use the head function to display the first few rows.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature SelectionsFeature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.We will assign all the required input features to X.
###Code
X=df[features]
###Output
_____no_output_____
###Markdown
Data PreprocessingSince the majority of the machine learning models in the sklearn library don't handle string categorical data and null values, we have to explicitly remove or replace null values. The snippet below has functions that remove null values if any exist and convert string categorical data by encoding it into numeric dummy columns.
###Code
def NullClearner(df):
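    # numeric columns are filled with the column mean, all other columns with the mode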
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
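    # one-hot encode any string/categorical columns via pandas get_dummies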
return pd.get_dummies(df)
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
X.head()
###Output
_____no_output_____
###Markdown
Data RescalingMinMaxScalerMinMaxScaler subtracts the minimum value of each feature and then divides by the range, where the range is the difference between the original maximum and the original minimum. Here we fit a MinMaxScaler object on the full feature set and transform it via the fit_transform(X) method; the scaled data is then split into train and test subsets in the next step.
###Code
X_Scaled=MinMaxScaler().fit_transform(X)
X_Scaled=pd.DataFrame(data = X_Scaled,columns = X.columns)
X_Scaled.head()
###Output
_____no_output_____
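###Markdown
A minimal hand-check of the MinMaxScaler formula, x_scaled = (x - min) / (max - min), on a made-up toy column (this cell is an illustrative sketch and not part of the original template).
###Code
import numpy as np
from sklearn.preprocessing import MinMaxScaler
toy = np.array([[10.0], [15.0], [20.0]])               # toy column: min = 10, max = 20
scaled_by_sklearn = MinMaxScaler().fit_transform(toy)  # expected: [[0.0], [0.5], [1.0]]
scaled_by_hand = (toy - toy.min()) / (toy.max() - toy.min())
assert np.allclose(scaled_by_sklearn, scaled_by_hand)
###Output
_____no_output_____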
###Markdown
Data SplittingThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
###Code
x_train,x_test=train_test_split(X_Scaled,test_size=0.2,random_state=123)
###Output
_____no_output_____
###Markdown
Model
Single-Objective Generative Adversarial Active Learning.
SO-GAAL directly generates informative potential outliers to assist the classifier in describing a boundary that can separate outliers from normal data effectively. Moreover, to prevent the generator from falling into the mode collapsing problem, the network structure of SO-GAAL is expanded from a single generator (SO-GAAL) to multiple generators with different objectives (MO-GAAL) to generate a reasonable reference distribution for the whole dataset
[For more information](https://pyod.readthedocs.io/en/latest/pyod.models.html#module-pyod.models.so_gaal)
###Code
model = SO_GAAL(contamination=0.001,stop_epochs=11)
model.fit(x_train)
###Output
Epoch 1 of 33
Testing for epoch 1 index 1:
Testing for epoch 1 index 2:
Testing for epoch 1 index 3:
Testing for epoch 1 index 4:
Testing for epoch 1 index 5:
Testing for epoch 1 index 6:
Testing for epoch 1 index 7:
Testing for epoch 1 index 8:
Epoch 2 of 33
Testing for epoch 2 index 1:
Testing for epoch 2 index 2:
Testing for epoch 2 index 3:
Testing for epoch 2 index 4:
Testing for epoch 2 index 5:
Testing for epoch 2 index 6:
Testing for epoch 2 index 7:
Testing for epoch 2 index 8:
Epoch 3 of 33
Testing for epoch 3 index 1:
Testing for epoch 3 index 2:
Testing for epoch 3 index 3:
Testing for epoch 3 index 4:
Testing for epoch 3 index 5:
Testing for epoch 3 index 6:
Testing for epoch 3 index 7:
Testing for epoch 3 index 8:
Epoch 4 of 33
Testing for epoch 4 index 1:
Testing for epoch 4 index 2:
Testing for epoch 4 index 3:
Testing for epoch 4 index 4:
Testing for epoch 4 index 5:
Testing for epoch 4 index 6:
Testing for epoch 4 index 7:
Testing for epoch 4 index 8:
Epoch 5 of 33
Testing for epoch 5 index 1:
Testing for epoch 5 index 2:
Testing for epoch 5 index 3:
Testing for epoch 5 index 4:
Testing for epoch 5 index 5:
Testing for epoch 5 index 6:
Testing for epoch 5 index 7:
Testing for epoch 5 index 8:
Epoch 6 of 33
Testing for epoch 6 index 1:
Testing for epoch 6 index 2:
Testing for epoch 6 index 3:
Testing for epoch 6 index 4:
Testing for epoch 6 index 5:
Testing for epoch 6 index 6:
Testing for epoch 6 index 7:
Testing for epoch 6 index 8:
Epoch 7 of 33
Testing for epoch 7 index 1:
Testing for epoch 7 index 2:
Testing for epoch 7 index 3:
Testing for epoch 7 index 4:
Testing for epoch 7 index 5:
Testing for epoch 7 index 6:
Testing for epoch 7 index 7:
Testing for epoch 7 index 8:
Epoch 8 of 33
Testing for epoch 8 index 1:
Testing for epoch 8 index 2:
Testing for epoch 8 index 3:
Testing for epoch 8 index 4:
Testing for epoch 8 index 5:
Testing for epoch 8 index 6:
Testing for epoch 8 index 7:
Testing for epoch 8 index 8:
Epoch 9 of 33
Testing for epoch 9 index 1:
Testing for epoch 9 index 2:
Testing for epoch 9 index 3:
Testing for epoch 9 index 4:
Testing for epoch 9 index 5:
Testing for epoch 9 index 6:
Testing for epoch 9 index 7:
Testing for epoch 9 index 8:
Epoch 10 of 33
Testing for epoch 10 index 1:
Testing for epoch 10 index 2:
Testing for epoch 10 index 3:
Testing for epoch 10 index 4:
Testing for epoch 10 index 5:
Testing for epoch 10 index 6:
Testing for epoch 10 index 7:
Testing for epoch 10 index 8:
Epoch 11 of 33
Testing for epoch 11 index 1:
Testing for epoch 11 index 2:
Testing for epoch 11 index 3:
Testing for epoch 11 index 4:
Testing for epoch 11 index 5:
Testing for epoch 11 index 6:
Testing for epoch 11 index 7:
Testing for epoch 11 index 8:
Epoch 12 of 33
Testing for epoch 12 index 1:
Testing for epoch 12 index 2:
Testing for epoch 12 index 3:
Testing for epoch 12 index 4:
Testing for epoch 12 index 5:
Testing for epoch 12 index 6:
Testing for epoch 12 index 7:
Testing for epoch 12 index 8:
Epoch 13 of 33
Testing for epoch 13 index 1:
16/16 [==============================] - 0s 1ms/step - loss: 0.7535
Testing for epoch 13 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.7531
Testing for epoch 13 index 3:
16/16 [==============================] - 0s 953us/step - loss: 0.7543
Testing for epoch 13 index 4:
16/16 [==============================] - 0s 981us/step - loss: 0.7518
Testing for epoch 13 index 5:
16/16 [==============================] - 0s 989us/step - loss: 0.7567
Testing for epoch 13 index 6:
16/16 [==============================] - 0s 1ms/step - loss: 0.7513
Testing for epoch 13 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.7561
Testing for epoch 13 index 8:
16/16 [==============================] - 0s 1ms/step - loss: 0.7628
Epoch 14 of 33
Testing for epoch 14 index 1:
16/16 [==============================] - 0s 880us/step - loss: 0.7658
Testing for epoch 14 index 2:
16/16 [==============================] - 0s 2ms/step - loss: 0.7608
Testing for epoch 14 index 3:
16/16 [==============================] - 0s 973us/step - loss: 0.7645
Testing for epoch 14 index 4:
16/16 [==============================] - 0s 1ms/step - loss: 0.7625
Testing for epoch 14 index 5:
16/16 [==============================] - 0s 999us/step - loss: 0.7720
Testing for epoch 14 index 6:
16/16 [==============================] - 0s 921us/step - loss: 0.7733
Testing for epoch 14 index 7:
16/16 [==============================] - 0s 913us/step - loss: 0.7613
Testing for epoch 14 index 8:
16/16 [==============================] - 0s 895us/step - loss: 0.7742
Epoch 15 of 33
Testing for epoch 15 index 1:
16/16 [==============================] - 0s 956us/step - loss: 0.7717
Testing for epoch 15 index 2:
16/16 [==============================] - 0s 964us/step - loss: 0.7627
Testing for epoch 15 index 3:
16/16 [==============================] - 0s 1ms/step - loss: 0.7638
Testing for epoch 15 index 4:
16/16 [==============================] - 0s 972us/step - loss: 0.7666
Testing for epoch 15 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.7681
Testing for epoch 15 index 6:
16/16 [==============================] - 0s 870us/step - loss: 0.7771
Testing for epoch 15 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.7636
Testing for epoch 15 index 8:
16/16 [==============================] - 0s 997us/step - loss: 0.7709
Epoch 16 of 33
Testing for epoch 16 index 1:
16/16 [==============================] - 0s 1000us/step - loss: 0.7655
Testing for epoch 16 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.7755
Testing for epoch 16 index 3:
16/16 [==============================] - 0s 1ms/step - loss: 0.7668
Testing for epoch 16 index 4:
16/16 [==============================] - 0s 903us/step - loss: 0.7719
Testing for epoch 16 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.7725
Testing for epoch 16 index 6:
16/16 [==============================] - 0s 1ms/step - loss: 0.7731
Testing for epoch 16 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.7735
Testing for epoch 16 index 8:
16/16 [==============================] - 0s 995us/step - loss: 0.7850
Epoch 17 of 33
Testing for epoch 17 index 1:
16/16 [==============================] - 0s 955us/step - loss: 0.7877
Testing for epoch 17 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.7725
Testing for epoch 17 index 3:
16/16 [==============================] - 0s 1ms/step - loss: 0.7782
Testing for epoch 17 index 4:
16/16 [==============================] - 0s 960us/step - loss: 0.7813
Testing for epoch 17 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.7758
Testing for epoch 17 index 6:
16/16 [==============================] - 0s 991us/step - loss: 0.7846
Testing for epoch 17 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.7861
Testing for epoch 17 index 8:
16/16 [==============================] - 0s 1ms/step - loss: 0.7780
Epoch 18 of 33
Testing for epoch 18 index 1:
16/16 [==============================] - 0s 1ms/step - loss: 0.7898
Testing for epoch 18 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.7828
Testing for epoch 18 index 3:
16/16 [==============================] - 0s 1ms/step - loss: 0.7859
Testing for epoch 18 index 4:
16/16 [==============================] - 0s 966us/step - loss: 0.7818
Testing for epoch 18 index 5:
16/16 [==============================] - 0s 982us/step - loss: 0.7956
Testing for epoch 18 index 6:
16/16 [==============================] - 0s 1ms/step - loss: 0.7863
Testing for epoch 18 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.7909
Testing for epoch 18 index 8:
16/16 [==============================] - 0s 1ms/step - loss: 0.7925
Epoch 19 of 33
Testing for epoch 19 index 1:
16/16 [==============================] - 0s 1ms/step - loss: 0.7903
Testing for epoch 19 index 2:
16/16 [==============================] - 0s 884us/step - loss: 0.7951
Testing for epoch 19 index 3:
16/16 [==============================] - 0s 1ms/step - loss: 0.7990
Testing for epoch 19 index 4:
16/16 [==============================] - 0s 964us/step - loss: 0.8029
Testing for epoch 19 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.7921
Testing for epoch 19 index 6:
16/16 [==============================] - 0s 1ms/step - loss: 0.7900
Testing for epoch 19 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.7944
Testing for epoch 19 index 8:
16/16 [==============================] - 0s 1ms/step - loss: 0.7991
Epoch 20 of 33
Testing for epoch 20 index 1:
16/16 [==============================] - 0s 1ms/step - loss: 0.8004
Testing for epoch 20 index 2:
16/16 [==============================] - 0s 991us/step - loss: 0.8019
Testing for epoch 20 index 3:
16/16 [==============================] - 0s 997us/step - loss: 0.7984
Testing for epoch 20 index 4:
16/16 [==============================] - 0s 1ms/step - loss: 0.7905
Testing for epoch 20 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.8051
Testing for epoch 20 index 6:
16/16 [==============================] - 0s 1ms/step - loss: 0.8065
Testing for epoch 20 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.8138
Testing for epoch 20 index 8:
16/16 [==============================] - 0s 1ms/step - loss: 0.7932
Epoch 21 of 33
Testing for epoch 21 index 1:
16/16 [==============================] - 0s 973us/step - loss: 0.7924
Testing for epoch 21 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.7974
Testing for epoch 21 index 3:
16/16 [==============================] - 0s 975us/step - loss: 0.7951
Testing for epoch 21 index 4:
16/16 [==============================] - 0s 1ms/step - loss: 0.8110
Testing for epoch 21 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.8169
Testing for epoch 21 index 6:
16/16 [==============================] - 0s 967us/step - loss: 0.8142
Testing for epoch 21 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.8022
Testing for epoch 21 index 8:
16/16 [==============================] - 0s 1ms/step - loss: 0.8173
Epoch 22 of 33
Testing for epoch 22 index 1:
16/16 [==============================] - 0s 970us/step - loss: 0.8089
Testing for epoch 22 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.7979
Testing for epoch 22 index 3:
16/16 [==============================] - 0s 960us/step - loss: 0.8244
Testing for epoch 22 index 4:
16/16 [==============================] - 0s 1ms/step - loss: 0.8072
Testing for epoch 22 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.8162
Testing for epoch 22 index 6:
16/16 [==============================] - 0s 1ms/step - loss: 0.8202
Testing for epoch 22 index 7:
16/16 [==============================] - 0s 927us/step - loss: 0.8152
Testing for epoch 22 index 8:
16/16 [==============================] - 0s 939us/step - loss: 0.8234
Epoch 23 of 33
Testing for epoch 23 index 1:
16/16 [==============================] - 0s 1ms/step - loss: 0.8163
Testing for epoch 23 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.8114
Testing for epoch 23 index 3:
16/16 [==============================] - 0s 1ms/step - loss: 0.8111
Testing for epoch 23 index 4:
16/16 [==============================] - 0s 1ms/step - loss: 0.8428
Testing for epoch 23 index 5:
16/16 [==============================] - 0s 982us/step - loss: 0.8237
Testing for epoch 23 index 6:
16/16 [==============================] - 0s 984us/step - loss: 0.8297
Testing for epoch 23 index 7:
16/16 [==============================] - 0s 983us/step - loss: 0.8273
Testing for epoch 23 index 8:
16/16 [==============================] - 0s 1ms/step - loss: 0.8282
Epoch 24 of 33
Testing for epoch 24 index 1:
16/16 [==============================] - 0s 998us/step - loss: 0.8124
Testing for epoch 24 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.8196
Testing for epoch 24 index 3:
16/16 [==============================] - 0s 1ms/step - loss: 0.8249
Testing for epoch 24 index 4:
16/16 [==============================] - 0s 881us/step - loss: 0.8354
Testing for epoch 24 index 5:
16/16 [==============================] - 0s 981us/step - loss: 0.8278
Testing for epoch 24 index 6:
16/16 [==============================] - 0s 955us/step - loss: 0.8306
Testing for epoch 24 index 7:
16/16 [==============================] - 0s 961us/step - loss: 0.8257
Testing for epoch 24 index 8:
16/16 [==============================] - 0s 1ms/step - loss: 0.8284
Epoch 25 of 33
Testing for epoch 25 index 1:
16/16 [==============================] - 0s 1ms/step - loss: 0.8378
Testing for epoch 25 index 2:
16/16 [==============================] - 0s 925us/step - loss: 0.8379
Testing for epoch 25 index 3:
16/16 [==============================] - 0s 960us/step - loss: 0.8175
Testing for epoch 25 index 4:
16/16 [==============================] - 0s 974us/step - loss: 0.8262
Testing for epoch 25 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.8271
Testing for epoch 25 index 6:
16/16 [==============================] - 0s 994us/step - loss: 0.8417
Testing for epoch 25 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.8349
Testing for epoch 25 index 8:
16/16 [==============================] - 0s 1ms/step - loss: 0.8480
Epoch 26 of 33
Testing for epoch 26 index 1:
16/16 [==============================] - 0s 985us/step - loss: 0.8484
Testing for epoch 26 index 2:
16/16 [==============================] - 0s 976us/step - loss: 0.8390
Testing for epoch 26 index 3:
16/16 [==============================] - 0s 857us/step - loss: 0.8541
Testing for epoch 26 index 4:
16/16 [==============================] - 0s 1ms/step - loss: 0.8424
Testing for epoch 26 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.8618
Testing for epoch 26 index 6:
16/16 [==============================] - 0s 1ms/step - loss: 0.8450
Testing for epoch 26 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.8668
Testing for epoch 26 index 8:
16/16 [==============================] - 0s 1ms/step - loss: 0.8565
Epoch 27 of 33
Testing for epoch 27 index 1:
16/16 [==============================] - 0s 1ms/step - loss: 0.8520
Testing for epoch 27 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.8448
Testing for epoch 27 index 3:
16/16 [==============================] - 0s 1ms/step - loss: 0.8465
Testing for epoch 27 index 4:
16/16 [==============================] - 0s 1ms/step - loss: 0.8563
Testing for epoch 27 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.8562
Testing for epoch 27 index 6:
16/16 [==============================] - 0s 1ms/step - loss: 0.8583
Testing for epoch 27 index 7:
16/16 [==============================] - 0s 970us/step - loss: 0.8489
Testing for epoch 27 index 8:
16/16 [==============================] - 0s 1ms/step - loss: 0.8569
Epoch 28 of 33
Testing for epoch 28 index 1:
16/16 [==============================] - 0s 1ms/step - loss: 0.8341
Testing for epoch 28 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.8489
Testing for epoch 28 index 3:
16/16 [==============================] - 0s 1ms/step - loss: 0.8549
Testing for epoch 28 index 4:
16/16 [==============================] - 0s 1ms/step - loss: 0.8486
Testing for epoch 28 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.8599
Testing for epoch 28 index 6:
16/16 [==============================] - 0s 1ms/step - loss: 0.8520
Testing for epoch 28 index 7:
16/16 [==============================] - 0s 920us/step - loss: 0.8718
Testing for epoch 28 index 8:
16/16 [==============================] - 0s 2ms/step - loss: 0.8685
Epoch 29 of 33
Testing for epoch 29 index 1:
16/16 [==============================] - 0s 1ms/step - loss: 0.8570
Testing for epoch 29 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.8424
Testing for epoch 29 index 3:
16/16 [==============================] - 0s 909us/step - loss: 0.8759
Testing for epoch 29 index 4:
16/16 [==============================] - 0s 991us/step - loss: 0.8676
Testing for epoch 29 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.8701
Testing for epoch 29 index 6:
16/16 [==============================] - 0s 1ms/step - loss: 0.8713
Testing for epoch 29 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.8700
Testing for epoch 29 index 8:
16/16 [==============================] - 0s 927us/step - loss: 0.8821
Epoch 30 of 33
Testing for epoch 30 index 1:
16/16 [==============================] - 0s 1ms/step - loss: 0.8632
Testing for epoch 30 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.8815
Testing for epoch 30 index 3:
16/16 [==============================] - 0s 1ms/step - loss: 0.8839
Testing for epoch 30 index 4:
16/16 [==============================] - 0s 1ms/step - loss: 0.8739
Testing for epoch 30 index 5:
16/16 [==============================] - 0s 985us/step - loss: 0.8852
Testing for epoch 30 index 6:
16/16 [==============================] - 0s 997us/step - loss: 0.8824
Testing for epoch 30 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.8518
Testing for epoch 30 index 8:
16/16 [==============================] - 0s 1ms/step - loss: 0.8862
Epoch 31 of 33
Testing for epoch 31 index 1:
16/16 [==============================] - 0s 1ms/step - loss: 0.8787
Testing for epoch 31 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.8827
Testing for epoch 31 index 3:
16/16 [==============================] - 0s 1ms/step - loss: 0.8849
Testing for epoch 31 index 4:
16/16 [==============================] - 0s 1ms/step - loss: 0.8874
Testing for epoch 31 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.8971
Testing for epoch 31 index 6:
16/16 [==============================] - 0s 966us/step - loss: 0.8754
Testing for epoch 31 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.9044
Testing for epoch 31 index 8:
16/16 [==============================] - 0s 999us/step - loss: 0.8823
Epoch 32 of 33
Testing for epoch 32 index 1:
16/16 [==============================] - 0s 1ms/step - loss: 0.9018
Testing for epoch 32 index 2:
16/16 [==============================] - 0s 1ms/step - loss: 0.8827
Testing for epoch 32 index 3:
16/16 [==============================] - 0s 983us/step - loss: 0.8866
Testing for epoch 32 index 4:
16/16 [==============================] - 0s 1ms/step - loss: 0.9057
Testing for epoch 32 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.8873
Testing for epoch 32 index 6:
16/16 [==============================] - 0s 949us/step - loss: 0.8928
Testing for epoch 32 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.8736
Testing for epoch 32 index 8:
16/16 [==============================] - 0s 991us/step - loss: 0.9049
Epoch 33 of 33
Testing for epoch 33 index 1:
16/16 [==============================] - 0s 1ms/step - loss: 0.8778
Testing for epoch 33 index 2:
16/16 [==============================] - 0s 953us/step - loss: 0.8965
Testing for epoch 33 index 3:
16/16 [==============================] - 0s 1ms/step - loss: 0.9006
Testing for epoch 33 index 4:
16/16 [==============================] - 0s 1ms/step - loss: 0.8946
Testing for epoch 33 index 5:
16/16 [==============================] - 0s 1ms/step - loss: 0.9139
Testing for epoch 33 index 6:
16/16 [==============================] - 0s 924us/step - loss: 0.8945
Testing for epoch 33 index 7:
16/16 [==============================] - 0s 1ms/step - loss: 0.8970
Testing for epoch 33 index 8:
16/16 [==============================] - 0s 1ms/step - loss: 0.8992
###Markdown
Anomaly Prediction
###Code
result=x_test.copy(deep=True)
result['Anomaly']=model.predict(x_test)
result.head()
###Output
_____no_output_____
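###Markdown
Optional extra (a sketch, not part of the original template): pyod detectors also expose continuous outlier scores through decision_function, which are often more informative than the binary labels when ranking observations.
###Code
# raw outlier scores for the test split; higher scores mean more anomalous points
test_scores = model.decision_function(x_test)
pd.Series(test_scores, index=x_test.index).sort_values(ascending=False).head()
###Output
_____no_output_____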
###Markdown
Anomaly Visualization Bar Plot
###Code
result['Anomaly'].value_counts().plot(kind='bar',color=['green','red'])
###Output
_____no_output_____
###Markdown
Pie Chart
###Code
fig = px.pie(result['Anomaly'],names=result['Anomaly'], title='Anomaly rate',)
fig.show()
###Output
_____no_output_____
###Markdown
AnomaliesIn this part we will perform a dimensionality reduction technique to visualize the data. This can be done using techniques such as the PCA or t-SNE algorithms (a t-SNE alternative is sketched at the end of the code cell below).
###Code
pca = PCA(n_components=2)
pca_results = pca.fit_transform(result.drop('Anomaly',axis=1))
plt.rcParams["figure.figsize"] = (20,10)
plt.scatter(x=pca_results[:,0],y=pca_results[:,1],c=result.iloc[:,result.columns.get_loc('Anomaly')])
plt.show()
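# As an alternative 2D embedding (a sketch, not part of the original template), t-SNE from
# scikit-learn can be used in place of PCA; it is slower but often separates clusters more clearly.
from sklearn.manifold import TSNE
tsne_results = TSNE(n_components=2, random_state=123).fit_transform(result.drop('Anomaly', axis=1))
plt.scatter(x=tsne_results[:, 0], y=tsne_results[:, 1],
            c=result.iloc[:, result.columns.get_loc('Anomaly')])
plt.show()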
###Output
_____no_output_____ |
sneaker resell market--sql analysis (Sep 2018 updated).ipynb | ###Markdown
Sneaker Resell Market--SQL analysis The purpose of this work is to 1) show some insights into the sneaker resell market; 2) practice some basic and intermediate level SQL queries responding to some ad-hoc analysis needs Prework: at the very beginning, shout out to CatherineDevlin's guidance and development! I'd like to use ipython-sql magic cells to run SQL and Python code at the same time https://github.com/catherinedevlin/ipython-sql Some Conclusions: 1) The most popular resell price range is from 200 to 299 dollars, which accounts for 36.6% of total sales. Given that the most common release price is 190, I may conclude that release+(0,100) is the most acceptable ask price range; 2) The market boom happened from July to October 2017, accounting for 45.7% of the three-year sales; 3) The most popular retro types are Air Jordan 4s, 5s, 11s, and 13s, accounting for 40.94% of total sales; 4) The common sizes for men are from US9.5 to US11, accounting for 49.69% of transactions; 5) (ad-hoc) The driving factors of the 2017 fall boom can be divided into two parts: one is the 11s Space Jam, 72-10 and 12s The Master, with a good story, good quality and a reasonable price; the other is the 4s and 5s, with average quality but very attractive prices (below release price)
###Code
%load_ext sql
%sql postgresql://postgres:714233@localhost/test
%sql SELECT * FROM pg_user;
#use this query to find the current user of postgresql,
#and the connecting string should follow "postgresql://{user}:{password}@localhost/some_database"
%sql select * from sneakers limit 5;
#this is the completed table after data cleaning and processing
#in the following lines, I will redo this work
###Output
* postgresql://postgres:***@localhost/test
5 rows affected.
###Markdown
1.1 Create table and Import the csv
###Code
%sql drop table sneakers;
# drop the table because it has already been created
%sql create table sneakers (id serial primary key,sneaker_name text, \
sales_day date,shoe_size varchar(6),price integer,retro_type text);
%sql select * from sneakers;
%sql copy sneakers from 'D:\removal of study material\big data I\final project-sneaker index\data\sneaker complete list.csv' \
delimiter ',' csv header;
# header here means ignore the first row of the csv, because it contains the column names
###Output
* postgresql://postgres:***@localhost/test
133036 rows affected.
###Markdown
2.1 Data Cleaning--Listwise deletion of the null records
###Code
%sql select * from sneakers where sneaker_name is null or sales_day is null or shoe_size is null or price is null \
or retro_type is null;
%sql delete from sneakers where sneaker_name is null or sales_day is null or shoe_size is null or price is null \
or retro_type is null;
###Output
* postgresql://postgres:***@localhost/test
21 rows affected.
###Markdown
2.2 Data Cleaning--Eliminating the PS or GS shoes
###Code
%sql select distinct shoe_size from sneakers order by 1 ASC limit 10;
# we find some sizes ending with W or C; those are women's shoes and baby shoes
%sql select count (*) from sneakers where shoe_size like '%C' or shoe_size like '%W';
%sql delete from sneakers where shoe_size like '%C' or shoe_size like '%W';
%sql alter table sneakers alter column shoe_size type float USING shoe_size::double precision;
# after removing the W and C sizes, we can modify the column type to float
%sql delete from sneakers where shoe_size < 7;
# remove the GS sizes, focus on men's sizes
###Output
* postgresql://postgres:***@localhost/test
801 rows affected.
###Markdown
2.3 Data Cleaning--Check retro_type
###Code
%sql select distinct retro_type from sneakers;
%sql select *from sneakers where retro_type ='da' or retro_type ='Hy';
# based on the result, we need to fix 'da' to 2, and delete 'Hy' row
%sql delete from sneakers where retro_type ='Hy';
%sql update sneakers set retro_type='2' where retro_type='da';
%sql alter table sneakers alter column retro_type type integer USING retro_type::integer;
# after removing the invalid value, we can convert the column data type
###Output
* postgresql://postgres:***@localhost/test
Done.
###Markdown
2.4 Data Cleaning--Working on the price range and detecting outliers Based on the different price ranges, we get the insight that almost 90% of transactions come from the $100-499 price range
###Code
%sql select sum (case when price between 0 and 99 then 1 else 0 end) as steal, \
ROUND(100.0 * sum (case when price between 0 and 99 then 1 else 0 end)/COUNT(id),1) as stealper,\
sum (case when price between 100 and 199 then 1 else 0 end) as economic,\
ROUND(100.0 * sum (case when price between 100 and 199 then 1 else 0 end)/COUNT(id),1) as ecoper, \
sum (case when price between 200 and 299 then 1 else 0 end) as brick,\
ROUND(100.0 * sum (case when price between 200 and 299 then 1 else 0 end)/COUNT(id),1) as brickper, \
sum (case when price between 300 and 399 then 1 else 0 end) as profit,\
ROUND(100.0 * sum (case when price between 300 and 399 then 1 else 0 end)/COUNT(id),1) as profitper, \
sum (case when price between 400 and 499 then 1 else 0 end) as hype, \
ROUND(100.0 * sum (case when price between 400 and 499 then 1 else 0 end)/COUNT(id),1) as hypeper, \
sum (case when price between 500 and 599 then 1 else 0 end) as lux, \
ROUND(100.0 * sum (case when price between 500 and 599 then 1 else 0 end)/COUNT(id),1) as luxper, \
sum (case when price between 600 and 40000 then 1 else 0 end) as super, \
ROUND(100.0 * sum (case when price between 600 and 40000 then 1 else 0 end)/COUNT(id),1) as superper from sneakers;
%sql drop table snkr;
%sql create table snkr as select * from sneakers where price between 100 and 499;
%sql select * from snkr limit 10;
###Output
* postgresql://postgres:***@localhost/test
10 rows affected.
###Markdown
2.5 Data Processing--add column with calculated metrics
###Code
#### Adding a release price column to the table and calculating the profit
%sql alter table snkr add column release_price integer,add column profit float;
%sql update snkr set release_price= 190;
%sql update snkr set release_price= 160 where retro_type=1;
%sql update snkr set release_price= 210 where retro_type=11;
%sql update snkr set release_price= 175 where sneaker_name like'%Low%';
%sql update snkr set release_price= 160 where sneaker_name like'%2%Low%';
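# the profit formula below keeps 88% of the sale price (an assumed ~12% marketplace fee) and subtracts the release price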
%sql update snkr set profit= round(.88*price-release_price,0);
%sql select * from snkr limit 5;
###Output
* postgresql://postgres:***@localhost/test
5 rows affected.
###Markdown
At this stage, we have the concise dataset named snkr, which includes only men's shoes and excludes the price outliers 3.1 Data Analytics--Which month has the largest trading volume
###Code
%sql with cte as (select distinct to_char(sales_day,'Mon') as month,to_char(sales_day,'YYYY') as year, \
count(id) over mont as volume,\
count (id) over () as total, \
round(1.*100 *count(id) over mont/count(id) over(),2) as percent \
from snkr window mont as (partition by date_part('month',sales_day),date_part('year',sales_day)) order by 5 DESC) \
select cte.*,sum(percent) over (order by volume DESC) as cumu_perc from cte limit 15;
###Output
* postgresql://postgres:***@localhost/test
15 rows affected.
###Markdown
From the result, we know that the most active months are Aug, Sep, Oct, and July; in the following sections, we will explore the driving factors within those months 3.2 Data Analytics--The most popular retro type
###Code
%sql with cte as (select distinct retro_type, count(id) over ret, \
round(1.*100* count(id) over ret/count(id) over(),2) as percent, \
cast(avg(profit) over ret as decimal(5,0)) as avgprofit \
from snkr window ret as (partition by retro_type)) \
select cte.*, sum(percent) over (order by count DESC) as cumu_perc from cte;
###Output
* postgresql://postgres:***@localhost/test
14 rows affected.
###Markdown
Now we know the market prefers retro 5, 4, 11, and 13 over the others 3.3 Data Analytics--Which size is the most popular
###Code
%sql with cte as (select distinct shoe_size, count(id) over soe, \
round(1.*100* count(id) over soe/count(id) over(),2) as percent, \
cast(avg(profit) over soe as decimal(5,0)) as avgprofit \
from snkr window soe as (partition by shoe_size)) \
select cte.*,sum(percent) over (order by count DESC) as cumu_percent from cte;
###Output
* postgresql://postgres:***@localhost/test
20 rows affected.
###Markdown
This result shows a range between 9.5 and 11, which is consistent with the finding that the average shoe size of American men nowadays is US 10. 3.4 Data Analytics--(ad hoc) Which shoes most contribute to the high trading volume during hype months
###Code
%sql with cte as (select distinct retro_type,count(id) over (partition by retro_type) as count,\
round(1.*100*count(id) over (partition by retro_type)/count(id) over (),2) as percent from snkr \
where date_part('month',sales_day) in (7,10) and date_part('year',sales_day) =2017) \
select cte.*, sum(percent) over (order by count DESC) as cumu_percent from cte;
###Output
* postgresql://postgres:***@localhost/test
14 rows affected.
###Markdown
From the result, we may conclude that retro types 11, 12, 4, and 5 contributed 47.3% of sales within the July-to-October period; in the following section, we will look into those specific models
###Code
%sql select distinct sneaker_name, count(id) over (partition by sneaker_name),\
count(id) over() as total, round(1.*100*count(id) over (partition by sneaker_name)/count(id) over(),2) as percent, \
cast(avg(profit) over(partition by sneaker_name) as integer) as avgprofit, \
cast(avg(price) over(partition by sneaker_name) as integer) as avgprice \
from snkr where date_part('month',sales_day) in (7,10) and date_part('year',sales_day) =2017 \
and retro_type=11 order by 2 DESC limit 5;
%sql select distinct sneaker_name, count(id) over (partition by sneaker_name),\
count(id) over() as total, round(1.*100*count(id) over (partition by sneaker_name)/count(id) over(),2) as percent, \
cast(avg(profit) over(partition by sneaker_name) as integer) as avgprofit, \
cast(avg(price) over(partition by sneaker_name) as integer) as avgprice \
from snkr where date_part('month',sales_day) in (7,10) and date_part('year',sales_day) =2017 \
and retro_type=12 order by 2 DESC limit 5;
%sql select distinct sneaker_name, count(id) over (partition by sneaker_name),\
count(id) over() as total, round(1.*100*count(id) over (partition by sneaker_name)/count(id) over(),2) as percent, \
cast(avg(profit) over(partition by sneaker_name) as integer) as avgprofit, \
cast(avg(price) over(partition by sneaker_name) as integer) as avgprice \
from snkr where date_part('month',sales_day) in (7,10) and date_part('year',sales_day) =2017 \
and retro_type=4 order by 2 DESC limit 5;
%sql select distinct sneaker_name, count(id) over (partition by sneaker_name),\
count(id) over() as total, round(1.*100*count(id) over (partition by sneaker_name)/count(id) over(),2) as percent, \
cast(avg(profit) over(partition by sneaker_name) as integer) as avgprofit, \
cast(avg(price) over(partition by sneaker_name) as integer) as avgprice \
from snkr where date_part('month',sales_day) in (7,10) and date_part('year',sales_day) =2017 \
and retro_type=5 order by 2 DESC limit 5;
%sql select distinct sneaker_name,count(sneaker_name) from snkr where date_part('year',sales_day)=2017 \
and date_part('month',sales_day) in (7,10) and retro_type in (4,5,11,12) group by sneaker_name order by 2 DESC limit 10;
###Output
* postgresql://postgres:***@localhost/test
10 rows affected.
|
Europeancalulator.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
import warnings
import matplotlib.pyplot as plt
from google.colab import drive
drive.mount('/content/drive')
df = pd.read_csv("/content/drive/MyDrive/Test/BinomialOptions.csv")
df.head()
df.columns
# Initialise parameters from the first option row of the CSV
# (assumption: BinomialOptions.csv exposes columns with exactly these names)
row = df.iloc[0]
S0 = row['UnderlyingPrice']       # initial stock price
K = row['StrikePrice']            # strike price
T = (pd.to_datetime(row['ExpiryDate']) - pd.to_datetime(row['Date'])).days / 365  # time to maturity in years
r = row['RiskFreeInterestRate']   # annual risk-free rate
N = 5                             # number of timesteps
# u =                             # up-factor in binomial models
# d = 1/u                         # ensure recombining tree
sigma = row['ImpliedVolatility']  # annualised stock price volatility
opttype = row['PullCall']         # option type 'C' or 'P'
###Output
_____no_output_____
###Markdown
Binomial Tree Slow
###Code
def CRR_method(K,T,S0,r,N,sigma,opttype='C'):
    # precompute constants
    dt = T/N
    u = np.exp(sigma*np.sqrt(dt))
    d = 1/u
    q = (np.exp(r*dt) - d) / (u-d)
    disc = np.exp(-r*dt)
    # initial asset prices at maturity (time step N)
    S = np.zeros(N+1)
    S[0] = S0*d**N
    for j in range(1,N+1):
        S[j] = S[j-1]*u/d
    # initialise option values at maturity (note: only the call payoff is implemented here)
    C = np.zeros(N+1)
    for j in range(0,N+1):
        C[j] = max(0,S[j]-K)
    # step backwards through the tree
    for i in np.arange(N,0,-1):
        for j in range(0,i):
            C[j] = disc * ( q*C[j+1] + (1-q)*C[j] )
    return C[0]
CRR_method(K,T,S0,r,N,sigma,opttype='C')
###Output
_____no_output_____
###Markdown
Binomial Tree Fast
###Code
def binomial_tree_fast(K,T,S0,r,N,u,d,opttype='C'):
    # precompute constants
    dt = T/N
    q = (np.exp(r*dt) - d) / (u-d)
    disc = np.exp(-r*dt)
    # initial asset prices at maturity (time step N)
    C = S0 * d ** (np.arange(N,-1,-1)) * u ** (np.arange(0,N+1,1))
    # initialise option values at maturity (call payoff)
    C = np.maximum( C - K, np.zeros(N+1) )
    # step backwards through the tree
    for i in np.arange(N,0,-1):
        C = disc * ( q * C[1:i+1] + (1-q) * C[0:i] )
    return C[0]
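# A minimal usage sketch (assumption: u and d follow the CRR parameterisation used in
# CRR_method above, since they are not defined elsewhere in this notebook)
dt = T / N
u = np.exp(sigma * np.sqrt(dt))
d = 1 / u
binomial_tree_fast(K, T, S0, r, N, u, d, opttype='C')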
###Output
_____no_output_____ |
python-machine-learning/ch04/ch04.ipynb | ###Markdown
Dealing with missing data* Most computational tools either cannot handle missing values or behave unpredictably when they encounter them.* Below is an example of data containing missing values
###Code
import pandas as pd
from io import StringIO
print('---欠測値を含むデータ---')
csv_data = '''A,B,C,D
1.0,2.0,3.0,4.0
5.0,6.0,,8.0
10.0,11.0,12.0,'''
df = pd.read_csv(StringIO(csv_data))
print(df)
print('---欠測値のカウント---')
print(df.isnull().sum())
###Output
---欠測値を含むデータ---
A B C D
0 1.0 2.0 3.0 4.0
1 5.0 6.0 NaN 8.0
2 10.0 11.0 12.0 NaN
---欠測値のカウント---
A 0
B 0
C 1
D 1
dtype: int64
###Markdown
Removing samples or features with missing values* Rows or columns containing missing values can be dropped as shown below.* Dropping too many of them may throw away useful information.
###Code
print('---欠測値を含む行を削除---')
print(df.dropna())
print('\n---欠測値を含む列を削除---')
print(df.dropna(axis=1))
###Output
---欠測値を含む行を削除---
A B C D
0 1.0 2.0 3.0 4.0
---欠測値を含む列を削除---
A B
0 1.0 2.0
1 5.0 6.0
2 10.0 11.0
###Markdown
Imputing missing values* Replace each missing value with a statistic computed over the whole column (the mean below)
###Code
from sklearn.preprocessing import Imputer
# strategy can be 'mean' (average) as well as 'median' or 'most_frequent' (mode)
imr = Imputer(missing_values='NaN', strategy='mean', axis=0)
imr = imr.fit(df)
imputed_data = imr.transform(df.values)
print('列の平均値で補完')
print(imputed_data)
imr = Imputer(missing_values='NaN', strategy='mean', axis=1)
imr = imr.fit(df)
imputed_data = imr.transform(df.values)
print('\n行の平均値で補完')
print(imputed_data)
###Output
列の平均値で補完
[[ 1. 2. 3. 4. ]
[ 5. 6. 7.5 8. ]
[ 10. 11. 12. 6. ]]
行の平均値で補完
[[ 1. 2. 3. 4. ]
[ 5. 6. 6.33333333 8. ]
[ 10. 11. 12. 11. ]]
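###Markdown
(A minimal sketch, assuming a newer scikit-learn where the deprecated `Imputer` has been removed: `sklearn.impute.SimpleImputer` provides the same column-wise imputation, although it no longer offers an `axis` switch for the row-wise case.)
###Code
import numpy as np
from sklearn.impute import SimpleImputer
# column-wise mean imputation, equivalent to the Imputer(strategy='mean', axis=0) example above
imr = SimpleImputer(missing_values=np.nan, strategy='mean')
print(imr.fit_transform(df.values))
###Output
_____no_output_____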
###Markdown
Handling categorical data* Data that represents categories, such as clothing sizes (S, M, L).* Nominal and ordinal features need to be treated separately: clothing size is an ordinal feature (S < M < L), whereas color is a nominal feature.
###Code
df = pd.DataFrame([
['green', 'M', 10.1, 'class1'],
['red', 'L', 13.5, 'class2'],
['blue', 'XL', 15.3, 'class1'],
])
df.columns = ['色', 'サイズ', '価格', 'クラス']
df
###Output
_____no_output_____
###Markdown
Mapping ordinal features* Convert the category strings to integers so that estimators can work with them.
###Code
size_mapping = {'XL': 3, 'L':2, 'M':1}
df['サイズ'] = df['サイズ'].map(size_mapping)
df
###Output
_____no_output_____
###Markdown
Encoding class labelsIt is better to convert class labels explicitly rather than relying on scikit-learn to convert them internally.
###Code
import numpy as np
class_mapping = {label:idx for idx, label in enumerate(np.unique(df['クラス']))}
print(class_mapping)
df['クラス'] = df['クラス'].map(class_mapping)
df
# map back to the original labels
inv_class_mapping = {v:k for k, v in class_mapping.items()}
print(inv_class_mapping)
df['クラス'] = df['クラス'].map(inv_class_mapping)
df
from sklearn.preprocessing import LabelEncoder
# LabelEncoder converts the class labels to integers automatically
class_le = LabelEncoder()
y = class_le.fit_transform(df['クラス'].values)
print(y)
# convert back to the original string labels
class_le.inverse_transform(y)
###Output
[0 1 0]
###Markdown
One-hot encoding of nominal features* When a nominal feature is encoded with LabelEncoder, it is mapped to 0, 1, 2, ... as if it were an ordinal feature.
###Code
X = df[['色', 'サイズ', '価格']].values
color_le = LabelEncoder()
X[:,0] = color_le.fit_transform(X[:, 0])
X
###Output
_____no_output_____
###Markdown
Instead, we create one column per value of the nominal feature, i.e. dummy features.
###Code
from sklearn.preprocessing import OneHotEncoder
# the columns listed in categorical_features are treated as categorical strings and transformed; here only the first column
ohe = OneHotEncoder(categorical_features=[0])
ohe.fit_transform(X).toarray()
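# A minimal alternative sketch (assumption: the pandas import above is still in scope):
# pd.get_dummies expands only the string-valued nominal column ('色') into dummy columns
# and leaves the already-numeric columns ('サイズ', '価格') untouched.
pd.get_dummies(df[['色', 'サイズ', '価格']])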
###Output
_____no_output_____ |
examples/tbi_extractor_example.ipynb | ###Markdown
Example for using TBI Extractor 1. Installation
###Code
# From the examples directory of tbiExtractor, install tbi-extractor
%pip install ../.
# Imports
import datetime
import pandas as pd
from tbi_extractor import run_algorithm
###Output
_____no_output_____
###Markdown
2. tbiExtractor
###Code
# Gather input radiology report
report_file = 'report_one.txt'
# Show input
with open(report_file, 'r') as f:
print(f.read())
# Run tbiExtractor
df = run_algorithm.run(report_file)
# Show output
df
# Save output
get_today = datetime.date.today()
outfile = 'tbi_extractor_example_output_' + str(get_today) + '.csv'
df.to_csv(outfile, index=False)
###Output
_____no_output_____
###Markdown
3. Options: change the output format`save_target_phrases (bool)`: If True, save the lexical target phrases identified in the report for the resulting annotation. Default = False.`save_modifier_phrases (bool)`: If True, save the lexical modifier phrases identified in the report for the resulting annotation. Default = False.
###Code
report_file = 'report_two.txt'
with open(report_file, 'r') as f:
print(f.read())
df = run_algorithm.run(report_file,
save_target_phrases=True,
save_modifier_phrases=True)
df
###Output
_____no_output_____
###Markdown
4. Options: limit the lexical targets investigated> Can only set to include or exclude lexical target options to limit the search. Defaults to standard target list.`include_targets (list)`: A subset of the available lexical targets options to include. Default: None, resulting in standard target list output.`exclude_targets (list)`: A subset of the available lexical targets options to exclude. Default: None, resulting in standard target list output.
###Code
df = run_algorithm.run(report_file,
include_targets=['subdural_hemorrhage',
'epidural_hemorrhage'])
df
df = run_algorithm.run(report_file,
exclude_targets=['atrophy',
'aneurysm',
'fluid'])
df
###Output
_____no_output_____ |
jupyter notebooks/READ FILES.ipynb | ###Markdown
Reading files in different formatsNowadays, we can work with a large amount of data that comes from different sources and in different formats: Excel files in **xls** and **xlsx** format, as well as other well-known formats such as **csv** and **txt**. In this practice we will see how we can read those files, together with a series of basic commands to initially modify and manipulate those data frames. Goal- Learn how we can open different data formats using **Pandas**.
###Code
# import libraries
#=================================
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Open excel filesTo open excel files we will use the function `pd.read_excel()`.In this case, within the excel files we find two formats, the **xls** and the **xlsx** format. It is important to know what format we are dealing with, because currently we will have to carry out one process or another when opening the files. **Xls files**In this case, we are going to use a data frame that collects information about the different creatures in the Pokemon world, specifically, all the 1st Generation Pokemon.
###Code
# Open Xls file
#=================================
poke = pd.read_excel("PokemonGen.xls")
poke.head()
###Output
_____no_output_____
###Markdown
Select column as indexIn this case, we have imported, together with the data table, the index generated by the previous program (Excel or another). However, if we do not indicate the position or the name of the index column, Python will generate one automatically. To **select the index column** we will use the `index_col` argument.
###Code
# Select column by position
#=================================
# poke = pd.read_excel("PokemonGen.xls", index_col=0)
# Select column by name
#=================================
poke = pd.read_excel("PokemonGen.xls", index_col="Unnamed: 0")
poke.head()
###Output
_____no_output_____
###Markdown
Modify the indexMany times our data table will already have an index. In that case, if it is an index that interests us and that is necessary to understand the data (populations, dates ...), we will have to replace the index that Python generates for us. To **modify the index** we will use the `df.index` attribute.
###Code
# Select the index with the iloc function (column position)
#=================================
poke.index = poke.iloc[:, -1]
# Select the index indicating the name of the column
#=================================
poke.index = poke["#"]
# Remove the column in question
#=================================
poke = poke.drop("#", axis=1)
poke.head()
###Output
_____no_output_____
###Markdown
**Xlsx files**In this case, we are going to use a data frame that includes information about the performance of a group of students in different subjects, as well as a series of attributes of their environment. **For XLSX files**, it is currently necessary to add the `engine` argument. This step is not necessary when opening xls files (at least currently).
###Code
# Open an excel file (xlsx)
#=================================
stu = pd.read_excel("StudentsPerformance.xlsx", engine="openpyxl")
stu.head()
###Output
_____no_output_____
###Markdown
Select a specific sheetWhen we deal with excel files or other similar ones, they can have more than one datasheet. On many occasions, we will have to select a particular sheet to work on. The `read_excel` function returns **the first sheet by default**. To **select a sheet** we will use the `sheet_name` argument.
###Code
# Select sheet by position
#=================================
stu_sheet0 = pd.read_excel("StudentsPerformance.xlsx", sheet_name=0, engine="openpyxl")
# Select sheet by name
#=================================
stu_sheet1 = pd.read_excel("StudentsPerformance.xlsx", sheet_name="Sheet1", engine="openpyxl")
stu.head()
###Output
_____no_output_____
###Markdown
Modify column namesIn this case, we want to rename the columns. It is very **IMPORTANT** to avoid headers that contain **blank spaces**, since they can cause problems when we want to work with the columns or clean the data. To do this, we can rename the columns directly when opening the file using the `names` argument.
###Code
# Create a variable with corrected names
#=================================
new_names = ["Gender", "Race/Ethnicity", "Parental_Education", "Lunch", "Test_preparation_course", "Math_score",
"Reading_score", "Writing_score"]
# Assing new column names
#=================================
stu = pd.read_excel("StudentsPerformance.xlsx", engine="openpyxl", names=new_names)
stu.head()
# Create a plot using categorical data
#=================================
sns.set_theme(style="whitegrid")
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12,10), dpi=100)
sns.boxplot(x=stu["Race/Ethnicity"],
y= stu["Math_score"],
hue=stu["Gender"],
ax=ax,
palette="Set2",
order=["group A","group B","group C","group D","group E"])
ax.set_xlabel("Race/Ethnicity", fontsize=12, fontweight="bold")
ax.set_ylabel("Math Score", fontsize=12, fontweight="bold")
ax.tick_params(labelsize=12)
fig.text(x=0.50, y=0.91, s="Math Score",fontsize=14, fontweight="bold",ha="center")
fig.text(x=0.50, y=0.89, s="Difference between Race/Ethnicity",fontsize=12, alpha=0.8,ha="center")
fig.tight_layout()
fig.subplots_adjust(top=0.88)
plt.show()
# More information about plots and visualizations in my tutorial plots & visualizations (chek out my Github)
###Output
_____no_output_____
###Markdown
Other files (csv, txt ...)To open csv files we will use the `read_csv` function. With `read_csv` we can open different file types, both **csv** and **txt**, and we can pass arguments that let us specify the **type of separator**, among many other options. **Csv files**In this case, we are going to use a file that collects data from the Titanic, specifically each passenger and a series of attributes for each of them. It is one of a set of csv files used to build a predictive model of passenger survival on the Titanic.
###Code
# Open a csv file
#=================================
train = pd.read_csv("train.csv")
train.head()
###Output
_____no_output_____
###Markdown
Modify name of headersIn this case, the column names are in English but we would like to change them to their Spanish equivalents, since that is easier if we are not native English speakers. To do this, we will assign a list of new names to the `df.columns` attribute.
###Code
# Create our list with names in Spanish
#=================================
cols = ["Pasajeros", "Supervivientes", "Clase", "Nombre", "Sexo", "Edad", "HerEsp",
"PaHi", "Ticket", "Tasa", "Cabina", "Embarque"]
# Assign to the columns our variable cols
#=================================
train.columns = cols
train.head()
###Output
_____no_output_____
###Markdown
Open txt filesIn this case, we will use a file that collects data about a group of students and how taking part in extracurricular activities influences them. To open txt files we will also use the `read_csv` function. Here it is a comma-separated file, so it works exactly like opening a csv file. However, on many occasions we will work with other separators such as **tabs** or **semicolons**; in those cases we can still use `read_csv` and pass the `sep=` argument to indicate how our data is separated.
###Code
# Open a txt file
#=================================
after = pd.read_csv("AfterSchool.txt", index_col=0, sep=",")
after.head()
###Output
_____no_output_____
###Markdown
That's all for now! Session Information
###Code
from sinfo import sinfo
sinfo()
###Output
-----
matplotlib 3.3.2
pandas 1.1.5
seaborn 0.11.1
sinfo 0.3.1
-----
IPython 7.19.0
jupyter_client 6.1.7
jupyter_core 4.7.0
jupyterlab 2.2.6
notebook 6.1.6
-----
Python 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]
Windows-10-10.0.19041-SP0
8 logical CPU cores, Intel64 Family 6 Model 126 Stepping 5, GenuineIntel
-----
Session information updated at 2021-04-28 11:51
|
Object Detection - mobilenet (coco dataset)/Object Detection.ipynb | ###Markdown
Object Detection with SSD mobilenet
###Code
from IPython.display import Image
Image(filename='detect.jpg')
###Output
_____no_output_____
###Markdown
Importing the libraries
###Code
import cv2
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
We use SSD MobileNet V3 (large) trained on the COCO dataset
###Code
config_file ='ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt'
frozen_model='frozen_inference_graph.pb'
model = cv2.dnn_DetectionModel(frozen_model,config_file)
classlabels = []
file_name = "labels.txt"
with open(file_name,'rt') as fpt:
classlabels = fpt.read().rstrip('\n').split('\n')
###Output
_____no_output_____
###Markdown
List Of Labels
###Code
print(classlabels)
len(classlabels)
model.setInputSize(320,320)
model.setInputScale(1.0/127.5)
model.setInputMean((127.5,127.5,127.5))
model.setInputSwapRB(True)
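# Note: the scale 1.0/127.5 combined with the mean (127.5,127.5,127.5) maps the 8-bit pixel
# values into roughly [-1, 1], and swapRB=True converts OpenCV's default BGR channel order
# to the RGB order expected by the network.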
###Output
_____no_output_____
###Markdown
Let's test our model on an image
###Code
img = cv2.imread("img1.jpg")
plt.imshow(img)
ClassIndex, confidence, bbox = model.detect(img,confThreshold=0.5)
print(ClassIndex)
font_scale =3
font =cv2.FONT_HERSHEY_PLAIN
for ClassInd, conf, boxes in zip(ClassIndex.flatten(),confidence.flatten(),bbox):
cv2.rectangle(img,boxes,(255,0,0),2)
cv2.putText(img,classlabels[ClassInd-1],(boxes[0]+10,boxes[1]+40),font,fontScale=font_scale,color = (0,255,0),thickness =3)
plt.imshow(img)
###Output
_____no_output_____
###Markdown
Let's test our model on a YouTube video!
###Code
cap = cv2.VideoCapture('demo.mp4')
if not cap.isOpened():
cap = cv2.VideoCapture(0)
if not cap.isOpened():
raise IOError("Cant open video")
font_scale = 3
font = cv2.FONT_HERSHEY_PLAIN
while True:
ret,frame = cap.read()
ClassIndex, confidence, bbox = model.detect(frame,confThreshold=0.55)
print(ClassIndex)
if (len(ClassIndex) != 0):
for ClassInd, conf, boxes in zip(ClassIndex.flatten(),confidence.flatten(),bbox):
if(ClassInd<80):
cv2.rectangle(frame,boxes,(255,0,0),2)
cv2.putText(frame,classlabels[ClassInd-1],(boxes[0]+10,boxes[1]+40),font,fontScale=font_scale,color = (0,255,0),thickness =3)
cv2.imshow('Object Detection Tutorial',frame)
if cv2.waitKey(2) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
###Output
()
()
()
()
[[72]]
[[3]]
[[8]
[1]]
[[3]
[1]
[6]
[3]]
... (the cell keeps printing the detected ClassIndex array for every remaining video frame; several thousand near-identical lines of per-frame output are omitted here)
[8]
[6]
[3]
[3]
[1]
[1]
[1]
[1]
[1]
[3]]
[[6]
[8]
[3]
[3]
[6]
[1]
[1]
[1]
[1]
[6]
[1]
[3]
[3]
[1]]
[[6]
[8]
[6]
[3]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[3]
[1]]
[[6]
[8]
[6]
[3]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[3]
[3]
[3]]
[[6]
[8]
[6]
[3]
[3]
[1]
[1]
[1]
[1]
[3]
[3]
[1]
[6]
[1]
[1]]
[[6]
[8]
[6]
[3]
[3]
[1]
[3]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[8]
[6]
[3]
[3]
[3]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[1]]
[[6]
[6]
[3]
[3]
[3]
[8]
[1]
[3]
[1]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[3]
[8]
[3]
[1]
[1]
[1]
[3]
[1]
[1]
[1]]
[[6]
[3]
[6]
[3]
[8]
[1]
[3]
[3]
[1]
[1]
[1]
[1]
[1]]
[[6]
[3]
[6]
[8]
[1]
[3]
[3]
[3]
[1]
[1]
[1]
[6]
[1]
[1]]
[[6]
[3]
[6]
[1]
[8]
[3]
[1]
[3]
[1]
[3]
[1]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[8]
[3]
[1]
[3]
[1]
[3]
[1]
[1]
[6]
[1]
[6]
[1]]
[[6]
[6]
[3]
[1]
[8]
[3]
[3]
[1]
[1]
[1]
[1]
[3]
[1]
[1]]
[[6]
[6]
[3]
[1]
[3]
[8]
[3]
[1]
[3]
[6]
[1]
[1]
[1]
[6]
[1]
[1]
[1]]
[[6]
[3]
[6]
[8]
[1]
[3]
[3]
[3]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]]
[[6]
[3]
[8]
[6]
[3]
[1]
[3]
[6]
[1]
[1]
[6]
[1]
[3]
[1]
[3]]
[[6]
[3]
[3]
[8]
[3]
[6]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[1]]
[[6]
[3]
[8]
[6]
[3]
[3]
[1]
[1]
[1]
[6]
[1]
[1]]
[[6]
[3]
[6]
[3]
[8]
[3]
[1]
[1]
[6]
[1]
[1]]
[[6]
[8]
[3]
[6]
[3]
[3]
[1]
[1]
[6]
[1]]
[[6]
[3]
[8]
[3]
[3]
[6]
[1]
[6]
[1]
[1]
[1]
[6]]
[[6]
[3]
[8]
[3]
[3]
[6]
[1]
[6]
[1]
[1]
[3]
[1]
[6]]
[[6]
[8]
[3]
[3]
[3]
[6]
[6]
[1]
[1]
[1]
[3]
[1]
[1]]
[[6]
[8]
[3]
[3]
[3]
[6]
[1]
[6]
[1]
[1]
[3]
[1]
[1]
[1]]
[[6]
[8]
[3]
[3]
[3]
[6]
[1]
[6]
[1]
[1]
[3]
[1]
[1]]
[[8]
[6]
[3]
[3]
[6]
[3]
[1]
[1]
[6]
[1]
[1]
[1]]
[[3]
[6]
[8]
[3]
[3]
[1]
[6]
[6]
[1]
[1]
[1]
[1]]
[[3]
[6]
[3]
[3]
[1]
[8]
[6]
[1]
[1]
[3]
[1]
[1]
[1]
[6]
[1]]
[[3]
[6]
[3]
[1]
[8]
[3]
[6]
[1]
[1]
[3]
[1]
[1]
[1]
[6]]
[[3]
[6]
[3]
[8]
[1]
[3]
[6]
[1]
[1]
[1]
[1]
[1]]
[[6]
[3]
[3]
[8]
[6]
[1]
[3]
[1]
[1]
[1]
[3]
[1]
[6]
[6]]
[[3]
[3]
[6]
[8]
[1]
[1]
[6]
[3]
[1]
[1]
[1]
[6]]
[[3]
[6]
[1]
[8]
[3]
[6]
[3]
[1]
[1]
[6]
[1]
[1]]
[[3]
[1]
[8]
[6]
[3]
[6]
[1]
[3]
[1]
[1]
[1]
[1]
[6]]
[[3]
[3]
[8]
[6]
[1]
[3]
[1]
[6]
[1]
[6]
[1]
[1]]
[[3]
[6]
[3]
[1]
[1]
[8]
[3]
[1]
[6]
[6]
[1]
[1]
[1]]
[[3]
[3]
[6]
[8]
[1]
[1]
[1]
[3]
[6]
[6]
[1]
[1]
[4]
[1]
[1]]
[[3]
[8]
[3]
[6]
[1]
[1]
[1]
[6]
[3]
[4]
[1]
[1]]
[[3]
[1]
[8]
[3]
[1]
[1]
[6]
[6]
[1]
[1]
[3]]
[[3]
[6]
[1]
[1]
[3]
[8]
[1]
[6]
[1]
[3]
[1]]
[[3]
[6]
[1]
[1]
[3]
[8]
[1]
[6]
[1]
[3]
[1]
[6]]
[[3]
[6]
[1]
[8]
[3]
[1]
[1]
[6]
[3]
[1]
[1]
[6]
[1]
[3]]
[[3]
[6]
[1]
[3]
[8]
[1]
[1]
[6]
[1]
[1]
[1]]
[[3]
[6]
[1]
[3]
[8]
[1]
[1]
[6]
[6]
[1]
[6]
[1]
[1]
[1]]
[[3]
[6]
[1]
[3]
[8]
[1]
[1]
[6]
[1]
[1]
[1]]
[[3]
[6]
[1]
[3]
[1]
[8]
[1]
[6]
[6]
[1]
[1]
[3]]
[[3]
[6]
[1]
[3]
[1]
[8]
[6]
[1]
[1]
[1]
[1]]
[[3]
[6]
[1]
[3]
[1]
[8]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[3]]
[[3]
[6]
[1]
[3]
[8]
[1]
[1]
[1]
[6]
[1]
[1]
[6]]
[[3]
[6]
[3]
[1]
[8]
[1]
[1]
[6]
[1]
[1]
[3]
[1]
[3]
[6]
[1]]
[[3]
[6]
[3]
[1]
[8]
[6]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[6]
[3]]
[[3]
[6]
[3]
[1]
[8]
[1]
[6]
[3]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[3]
[6]
[3]
[1]
[8]
[6]
[3]
[1]
[1]
[1]
[6]
[1]
[1]
[1]]
[[3]
[6]
[1]
[3]
[8]
[6]
[6]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[6]
[1]]
[[3]
[6]
[3]
[1]
[8]
[6]
[1]
[6]
[1]
[1]
[1]
[3]
[1]
[1]]
[[3]
[6]
[1]
[8]
[6]
[3]
[6]
[1]
[1]
[1]
[3]
[6]
[1]
[1]]
[[3]
[6]
[6]
[1]
[8]
[3]
[1]
[1]
[1]
[6]
[1]]
[[3]
[6]
[6]
[1]
[8]
[3]
[1]
[1]
[1]
[1]
[6]]
[[3]
[6]
[3]
[6]
[6]
[1]
[1]
[8]
[1]
[1]
[1]]
[[3]
[6]
[6]
[6]
[3]
[1]
[1]
[1]
[6]
[1]
[1]]
[[6]
[3]
[6]
[1]
[6]
[1]
[1]
[1]
[6]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[6]
[1]]
[[1]
[3]
[1]
[6]
[1]
[3]
[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[3]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]
[6]
[3]]
[[6]
[1]
[1]
[1]
[8]
[1]
[1]
[1]]
[[8]
[6]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[3]
[8]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[3]
[1]
[8]
[1]]
[[6]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[3]
[1]
[6]
[8]
[4]
[6]]
[[6]
[3]
[6]
[1]
[6]
[1]
[1]
[6]
[1]
[1]
[1]
[4]
[8]
[1]
[1]
[1]
[1]]
[[6]
[3]
[6]
[6]
[1]
[1]
[1]
[1]
[1]
[6]
[8]
[1]
[6]
[1]
[1]
[6]
[1]
[1]]
[[6]
[6]
[1]
[3]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[3]
[6]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[8]
[1]]
[[6]
[1]
[6]
[3]
[6]
[1]
[1]
[1]
[1]
[8]
[1]
[1]]
[[6]
[1]
[3]
[1]
[6]
[6]
[1]
[1]
[1]
[8]
[8]
[1]]
[[6]
[1]
[6]
[3]
[1]
[1]
[6]
[8]
[8]
[6]
[1]
[1]]
[[6]
[1]
[3]
[1]
[6]
[6]
[6]
[1]
[1]
[1]
[1]
[1]
[6]]
[[6]
[1]
[3]
[1]
[1]
[1]
[1]
[6]
[1]
[1]
[1]]
[[6]
[1]
[6]
[3]
[1]
[1]
[1]
[8]
[1]
[3]
[6]
[1]
[1]
[1]]
[[6]
[1]
[8]
[3]
[1]
[1]
[6]
[1]
[1]
[1]
[1]]
[[6]
[1]
[3]
[1]
[8]
[1]
[1]
[1]
[1]
[1]
[8]]
[[6]
[1]
[3]
[1]
[1]
[1]
[1]
[6]
[1]
[1]
[8]
[1]
[8]]
[[6]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[8]
[8]
[8]
[6]
[4]
[1]
[1]
[1]
[1]
[6]]
[[6]
[1]
[8]
[8]
[6]
[1]
[3]
[1]
[1]
[8]
[1]
[1]
[6]
[1]]
[[6]
[1]
[8]
[8]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[6]
[4]
[1]
[1]
[8]]
[[6]
[1]
[1]
[1]
[1]
[1]
[8]
[1]
[1]
[1]
[8]
[6]
[1]
[3]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[8]
[3]
[1]
[1]
[1]
[3]
[8]
[6]]
[[6]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[8]
[1]
[1]
[6]
[1]
[3]
[8]
[1]]
[[6]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[8]
[1]
[8]
[1]
[3]]
[[6]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[1]]
[[6]
[3]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[1]
[1]]
[[6]
[3]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[3]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[3]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[3]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[6]
[1]]
[[6]
[6]
[3]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[3]
[6]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[6]
[1]]
[[6]
[3]
[6]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[6]
[1]
[6]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[6]]
[[6]
[6]
[6]
[1]
[3]
[1]
[1]
[1]
[1]
[6]
[1]
[1]
[1]
[1]]
[[6]
[6]
[6]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[1]
[1]]
[[6]
[6]
[3]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]]
[[6]
[6]
[1]
[1]
[1]
[3]
[6]
[1]
[1]
[1]
[1]
[6]
[1]
[1]
[1]
[6]]
[[6]
[6]
[1]
[3]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[1]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]]
[[6]
[1]
[3]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]]
[[6]
[1]
[6]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[6]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[3]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]]
[[6]
[3]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[6]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[3]
[1]
[6]
[1]
[1]
[1]
[1]
[1]]
[[6]
[3]
[6]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[3]
[6]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[6]
[3]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[3]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]]
[[6]
[1]
[3]
[6]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[3]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[3]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[1]]
[[6]
[3]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[6]
[1]
[6]]
[[6]
[3]
[1]
[1]
[1]
[1]
[6]
[1]
[1]
[1]
[6]
[1]
[1]]
[[6]
[1]
[6]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[6]]
[[6]
[3]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[6]]
[[6]
[3]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[6]]
[[6]
[3]
[1]
[1]
[1]
[1]
[6]
[1]
[1]
[6]
[1]
[1]
[1]
[6]
[1]]
[[6]
[3]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[1]
[6]]
[[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[6]
[1]]
[[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]]
[[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]]
[[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]]
[[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[1]]
[[6]
[3]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]]
[[6]
[1]
[3]
[1]
[1]
[1]
[6]
[1]
[1]
[1]
[6]
[1]
[1]
[1]]
[[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[1]]
[[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[1]
[1]
[1]]
[[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[1]]
[[6]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[1]]
[[6]
[3]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[1]
[1]
[1]
[6]]
[[6]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[6]
[6]]
[[6]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[3]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[3]
[6]
[1]
[6]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[6]]
[[6]
[1]
[1]
[1]
[3]
[6]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[6]
[1]
[6]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[3]
[6]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[6]
[3]
[6]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[3]
[3]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[3]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[8]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[6]]
[[6]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[3]
[6]
[1]]
[[6]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[3]
[1]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]
[3]]
[[6]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[3]
[1]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[1]
[3]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[3]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[1]
[6]
[3]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[6]
[3]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[3]
[6]
[1]]
[[6]
[1]
[1]
[1]
[1]
[3]
[3]]
[[6]
[1]
[1]
[1]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[1]
[3]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[3]
[1]
[6]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[3]
[6]]
[[6]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[6]]
[[6]
[1]
[1]
[1]
[3]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[3]
[1]
[1]]
[[6]
[1]
[1]
[1]
[3]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]
[3]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[3]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[6]]
[[6]
[1]
[1]
[1]
[1]
[1]
[3]
[6]
[1]]
[[6]
[1]
[1]
[1]
[1]
[1]
[6]]
[[6]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[3]
[6]]
[[6]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[3]]
[[6]
[1]
[1]
[3]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[3]
[1]
[3]
[1]]
[[6]
[1]
[1]
[1]
[3]
[1]
[1]]
[[6]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[4]
[1]
[1]
[1]]
[[6]
[1]
[1]
[4]
[1]
[1]
[3]
[1]
[1]
[1]
[6]
[8]]
[[6]
[1]
[1]
[1]
[4]
[6]
[3]
[1]
[1]
[1]]
[[6]
[1]
[1]
[1]
[4]
[1]
[1]
[3]
[1]]
[[6]
[1]
[1]
[1]
[1]
[4]
[3]
[1]
[1]]
[[6]
[1]
[1]
[1]
[4]
[1]
[6]
[3]
[8]
[1]
[3]]
[[6]
[1]
[1]
[4]
[1]
[1]
[3]
[8]]
[[6]
[1]
[1]
[3]
[4]
[1]]
[[6]
[1]
[1]
[3]
[4]
[8]
[1]
[1]]
[[6]
[1]
[1]
[1]
[1]
[3]
[4]
[8]
[1]]
[[6]
[1]
[1]
[3]
[1]
[4]
[1]]
[[6]
[1]
[1]
[3]
[1]
[4]
[1]
[1]]
[[6]
[1]
[3]
[1]
[6]
[1]
[8]]
[[6]
[1]
[3]
[1]
[6]
[1]
[3]
[1]
[1]
[1]]
[[6]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[3]]
[[6]
[1]
[3]
[1]
[1]
[1]
[6]
[1]]
[[6]
[1]
[3]
[1]
[8]
[1]
[1]
[1]
[6]
[8]
[3]]
[[6]
[1]
[3]
[1]
[6]
[8]
[1]
[6]
[1]]
[[6]
[3]
[1]
[1]
[6]
[8]
[1]
[1]
[1]]
[[6]
[6]
[1]
[3]
[8]
[1]
[8]
[1]
[6]
[1]]
[[6]
[1]
[3]
[1]
[6]
[8]
[8]
[1]
[1]]
[[6]
[1]
[3]
[1]
[8]
[8]
[1]
[1]
[6]
[1]]
[[6]
[1]
[1]
[8]
[8]
[1]
[8]
[3]]
[[6]
[1]
[1]
[8]
[8]
[8]
[1]]
[[6]
[1]
[1]
[3]
[8]
[1]]
[[6]
[1]
[3]
[4]
[1]
[8]
[1]]
[[6]
[1]
[3]
[1]
[8]]
[[6]
[1]
[1]
[3]
[1]
[8]
[6]
[1]
[1]]
[[6]
[1]
[1]
[8]
[1]
[3]
[1]]
[[6]
[1]
[1]
[8]
[8]
[6]
[1]
[3]
[1]]
[[6]
[8]
[1]
[1]
[8]
[1]
[6]
[6]
[1]
[1]]
[[6]
[1]
[1]
[8]
[8]
[1]
[1]
[6]]
[[6]
[1]
[1]
[8]
[8]
[1]
[1]
[1]]
[[6]
[8]
[1]
[1]
[1]
[1]
[8]
[1]
[1]
[6]]
[[6]
[1]
[8]
[1]
[8]
[1]
[1]
[6]
[1]
[1]]
[[6]
[1]
[8]
[8]
[1]
[1]
[1]
[1]
[3]
[6]
[1]]
[[6]
[1]
[8]
[1]
[1]
[3]
[1]
[6]
[8]
[1]
[1]]
[[6]
[1]
[8]
[1]
[3]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[8]
[1]
[1]
[8]
[1]
[1]
[3]
[6]
[1]
[1]]
[[6]
[1]
[8]
[1]
[1]
[1]
[8]
[1]
[3]
[1]
[6]]
[[6]
[1]
[8]
[1]
[1]
[8]
[6]
[1]
[3]
[1]
[1]
[1]]
[[6]
[1]
[3]
[8]
[1]
[8]
[1]
[1]
[1]]
[[6]
[1]
[3]
[8]
[8]
[1]
[1]
[1]
[1]]
[[6]
[1]
[3]
[1]]
[[6]
[1]
[1]
[1]
[3]
[4]
[1]]
[[6]]
[[6]]
()
()
()
()
()
[[7]
[1]]
[[1]
[3]
[1]
[1]]
[[1]
[1]
[1]
[3]
[7]]
[[7]
[1]
[3]
[1]
[1]
[7]
[3]
[6]]
[[7]
[1]
[3]
[1]
[7]
[3]
[3]]
[[7]
[3]
[8]
[6]
[1]
[7]
[3]
[1]
[3]
[1]
[3]]
[[6]
[6]
[3]
[3]
[8]
[1]
[1]
[7]
[1]
[3]]
[[6]
[3]
[6]
[3]
[8]
[3]
[7]
[3]
[1]
[1]
[1]]
[[6]
[6]
[3]
[8]
[1]
[7]
[1]
[3]
[1]
[3]]
[[6]
[6]
[3]
[1]
[7]
[3]
[1]
[8]
[1]
[1]
[3]
[1]]
[[6]
[1]
[6]
[3]
[1]
[3]
[3]
[8]
[1]
[1]]
[[6]
[7]
[1]
[3]
[1]
[3]
[1]
[3]
[1]
[6]
[1]]
[[7]
[6]
[3]
[1]
[1]
[3]
[3]
[1]
[6]
[1]
[8]]
[[6]
[1]
[7]
[3]
[8]
[3]
[1]
[3]
[1]
[1]]
[[6]
[7]
[3]
[8]
[3]
[1]
[1]
[3]
[6]
[1]
[1]]
[[6]
[7]
[3]
[3]
[1]
[1]
[8]
[6]
[1]
[3]]
[[6]
[6]
[3]
[3]
[1]
[1]
[8]
[1]
[3]]
[[6]
[3]
[6]
[3]
[1]
[1]
[1]]
[[6]
[6]
[3]
[3]
[6]
[1]
[1]]
[[6]
[6]
[3]
[3]
[1]
[6]
[8]]
[[6]
[6]
[3]
[3]
[3]
[1]
[8]]
[[6]
[3]
[6]
[6]
[3]
[3]
[1]]
[[6]
[3]
[6]
[3]
[6]
[1]
[3]
[1]]
[[6]
[6]
[3]
[3]
[3]
[3]
[8]
[1]
[1]
[1]]
[[6]
[6]
[3]
[3]
[3]
[8]
[1]
[1]
[1]
[1]]
[[6]
[3]
[3]
[6]
[3]
[1]
[8]
[6]
[1]]
[[6]
[3]
[3]
[6]
[1]
[1]
[1]
[8]
[1]
[3]]
[[6]
[3]
[3]
[6]
[1]
[1]
[1]
[1]]
[[6]
[6]
[3]
[6]
[1]
[1]
[1]
[3]
[3]
[8]
[1]
[1]
[3]]
[[6]
[3]
[6]
[1]
[6]
[1]
[8]
[6]
[1]
[3]]
[[3]
[6]
[6]
[1]
[6]
[3]
[3]
[8]
[1]]
[[3]
[6]
[1]
[6]
[6]
[8]
[3]
[6]
[1]]
[[6]
[3]
[6]
[1]
[3]
[3]
[8]
[3]
[1]
[6]
[3]]
[[3]
[6]
[6]
[1]
[3]
[6]
[1]
[1]
[8]
[3]]
[[6]
[6]
[1]
[3]
[3]
[3]
[3]
[1]
[3]]
[[6]
[6]
[3]
[3]
[1]
[1]
[1]
[3]]
[[3]
[6]
[6]
[3]
[1]
[3]
[1]
[1]
[3]
[1]]
[[6]
[6]
[3]
[3]
[1]
[3]
[1]
[1]
[3]]
[[6]
[6]
[3]
[3]
[1]
[8]
[1]
[3]
[1]
[1]
[3]]
[[6]
[3]
[3]
[1]
[8]
[1]
[6]
[1]
[1]]
[[6]
[3]
[6]
[3]
[1]
[1]
[3]
[1]]
[[3]
[6]
[3]
[1]
[6]
[1]
[1]
[1]]
[[3]
[3]
[6]
[6]
[1]
[1]
[3]
[1]
[1]
[8]]
[[3]
[3]
[6]
[6]
[1]
[1]
[1]
[1]
[8]]
[[3]
[6]
[3]
[8]
[3]
[6]
[1]
[1]
[7]
[1]]
[[3]
[6]
[1]
[8]
[6]
[7]
[1]
[3]]
[[3]
[1]
[8]
[1]
[6]
[3]
[3]]
[[3]
[1]
[8]
[6]
[1]
[1]
[6]
[1]
[3]]
[[3]
[1]
[1]
[6]
[6]
[8]
[3]
[3]]
[[3]
[6]
[1]
[6]
[1]
[1]
[8]
[3]]
[[1]
[6]
[1]
[3]
[6]
[3]
[1]
[8]
[1]]
[[3]
[1]
[6]
[1]
[1]
[1]
[1]
[1]
[6]
[3]]
[[3]
[1]
[3]
[6]
[6]
[1]
[8]
[1]
[1]
[1]]
[[1]
[6]
[3]
[1]
[3]
[8]
[1]
[1]]
[[1]
[6]
[8]
[1]
[3]
[3]
[1]
[6]
[1]
[3]
[3]]
[[1]
[6]
[1]
[6]
[8]
[3]
[1]
[3]
[3]
[1]
[3]]
[[1]
[6]
[6]
[3]
[1]
[1]
[3]
[3]
[8]
[3]
[1]
[1]]
[[6]
[1]
[6]
[1]
[8]
[1]
[3]
[3]
[3]
[3]
[3]]
[[1]
[6]
[1]
[6]
[3]
[3]
[3]
[8]
[1]
[3]]
[[3]
[1]
[1]
[6]
[3]
[3]
[6]
[1]
[3]
[8]
[3]
[3]]
[[6]
[3]
[1]
[1]
[3]
[3]
[1]
[6]
[3]
[8]]
[[6]
[3]
[6]
[1]
[1]
[3]
[3]
[1]
[8]]
[[6]
[3]
[6]
[1]
[3]
[1]
[3]
[1]
[3]
[3]
[3]
[8]
[1]
[3]]
[[6]
[6]
[3]
[1]
[1]
[3]
[3]
[3]
[1]
[1]
[3]]
[[6]
[6]
[3]
[1]
[1]
[1]
[3]
[3]
[3]
[8]
[1]
[3]]
[[6]
[6]
[3]
[3]
[1]
[3]
[1]
[3]
[3]
[1]
[1]
[8]
[1]
[3]
[3]]
[[6]
[6]
[3]
[3]
[3]
[1]
[3]
[3]
[1]
[3]
[3]]
[[6]
[6]
[3]
[3]
[3]
[1]
[3]
[3]
[3]]
[[6]
[6]
[3]
[3]
[3]
[3]
[1]
[3]]
[[1]
[6]
[6]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[6]
[3]
[1]
[3]
[3]
[3]
[3]
[3]
[8]]
[[6]
[6]
[3]
[3]
[1]
[3]
[3]
[3]
[3]]
[[6]
[6]
[1]
[3]
[3]
[3]
[3]
[3]
[8]
[3]]
[[6]
[6]
[1]
[3]
[3]
[3]
[3]
[3]
[3]
[8]
[1]
[3]
[3]]
[[6]
[6]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[1]]
[[6]
[6]
[3]
[1]
[3]
[3]
[3]
[1]
[3]]
[[3]
[6]
[1]
[6]
[3]
[3]
[3]
[3]
[3]
[8]
[1]]
[[6]
[1]
[6]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[1]
[6]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[1]
[6]
[3]]
[[6]
[6]
[3]
[3]
[3]
[3]
[1]
[1]
[3]
[8]]
[[6]
[3]
[6]
[1]
[3]
[1]
[3]
[3]
[3]]
[[3]
[6]
[6]
[3]
[3]
[1]
[1]]
[[6]
[3]
[6]
[1]
[3]
[3]
[3]
[3]]
[[6]
[3]
[6]
[3]
[3]
[1]
[3]
[1]]
[[3]
[6]
[6]
[3]
[1]
[3]
[3]
[8]]
[[3]
[6]
[3]
[6]
[1]
[3]
[3]
[3]]
[[6]
[3]
[3]
[6]
[1]
[3]
[3]]
[[3]
[6]
[3]
[6]
[3]
[3]
[1]]
[[3]
[3]
[6]
[6]
[3]
[3]
[3]
[3]
[3]]
[[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[8]]
[[6]
[3]
[6]
[3]
[8]
[3]
[3]
[3]
[3]
[3]]
[[6]
[6]
[3]
[3]
[3]
[3]
[8]
[3]
[3]]
[[6]
[6]
[1]
[3]
[3]
[3]
[3]
[3]
[3]
[3]]
[[3]
[3]
[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]]
[[3]
[3]
[6]
[3]
[3]
[6]
[3]
[3]
[3]]
[[3]
[3]
[3]
[3]
[3]
[3]
[6]
[3]
[3]
[6]
[3]]
[[3]
[3]
[3]
[3]
[3]
[3]
[6]
[3]
[6]
[3]
[3]]
[[3]
[6]
[3]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[8]]
[[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[8]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]
[3]
[8]
[3]
[3]
[8]
[3]
[6]
[3]]
[[6]
[3]
[1]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[8]]
[[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]]
[[3]
[3]
[3]
[3]
[6]
[6]
[3]
[3]
[3]
[3]]
[[3]
[3]
[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[8]]
[[3]
[3]
[3]
[6]
[3]
[6]
[3]
[3]
[8]
[8]
[3]
[3]
[3]]
[[3]
[3]
[6]
[3]
[3]
[3]
[8]
[3]
[3]
[8]
[3]
[6]]
[[6]
[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[8]
[3]
[8]]
[[3]
[6]
[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[8]
[8]
[3]
[3]]
[[3]
[6]
[3]
[3]
[3]
[3]
[6]
[3]
[6]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[6]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[6]
[3]
[3]
[3]]
[[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[6]
[1]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]]
[[3]
[6]
[3]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[6]]
[[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[6]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]
[3]
[6]
[3]
[3]]
[[3]
[6]
[3]
[3]
[6]
[3]
[3]
[1]
[3]
[3]
[3]
[6]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]
[3]
[6]
[3]
[6]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]
[6]
[3]
[3]
[6]
[3]
[1]
[3]]
[[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[6]
[1]
[3]]
[[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[8]
[3]
[8]]
[[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[3]
[6]]
[[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[1]]
[[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[1]]
[[6]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[1]]
[[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]]
[[3]
[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[6]
[3]
[3]]
[[3]
[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]]
[[3]
[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[1]]
[[3]
[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[1]
[3]]
[[3]
[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]]
[[3]
[6]
[3]
[3]
[3]
[3]
[6]
[3]
[3]
[1]
[3]
[3]
[3]
[3]
[1]]
[[3]
[3]
[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[1]
[3]
[1]]
[[3]
[3]
[3]
[6]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[6]]
[[3]
[3]
[6]
[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[1]]
[[6]
[3]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[1]]
[[6]
[3]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[1]
[3]]
[[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[3]
[3]
[3]
[3]]
[[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[1]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[6]
[3]
[3]
[1]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[3]
[3]]
[[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[1]]
[[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[1]]
[[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[8]
[1]
[3]
[3]
[3]
[1]
[3]]
[[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[1]
[3]
[3]
[3]
[8]
[3]
[3]]
[[3]
[6]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[3]
[3]
[3]
[3]
[3]
[3]
[8]]
[[3]
[6]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[1]
[8]
[3]
[3]]
[[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[8]]
[[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[8]
[3]]
[[3]
[6]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[8]
[3]
[3]
[3]
[3]
[3]]
[[3]
[3]
[6]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[8]
[3]
[3]]
[[3]
[3]
[6]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[8]
[3]
[6]
[1]
[3]]
[[3]
[3]
[6]
[6]
[3]
[3]
[3]
[3]
[3]
[8]
[3]
[6]
[3]
[1]
[3]]
[[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[8]
[1]]
[[3]
[3]
[6]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[3]
[3]
[3]
[3]
[1]
[8]]
[[3]
[3]
[6]
[6]
[3]
[1]
[3]
[3]
[3]
[3]
[3]
[8]
[3]
[3]
[1]]
[[3]
[3]
[6]
[6]
[3]
[3]
[3]
[8]
[3]
[3]
[1]
[3]
[8]]
[[3]
[3]
[6]
[6]
[3]
[3]
[3]
[8]
[3]
[3]
[3]
[3]
[3]
[1]]
[[3]
[3]
[6]
[6]
[3]
[3]
[3]
[1]
[3]
[3]
[3]
[8]
[3]]
[[3]
[6]
[3]
[6]
[3]
[3]
[3]
[1]
[3]
[3]
[3]
[8]
[3]
[3]]
[[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[3]
[3]
[3]
[8]
[3]
[3]]
[[3]
[6]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[8]
[3]]
[[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[3]
[3]]
[[6]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[3]
[3]
[8]]
[[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[1]
[3]
[3]]
[[6]
[3]
[6]
[3]
[3]
[3]
[1]
[3]
[1]
[6]
[8]
[3]
[3]
[3]
[3]]
[[6]
[3]
[6]
[3]
[6]
[3]
[3]
[1]
[3]
[8]
[3]
[3]
[3]
[1]]
[[6]
[3]
[3]
[3]
[6]
[6]
[1]
[3]
[1]
[8]
[3]
[3]
[8]
[3]
[3]]
[[6]
[3]
[6]
[6]
[3]
[3]
[3]
[3]
[1]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[8]]
[[6]
[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[8]
[3]]
[[6]
[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[8]]
[[6]
[3]
[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]]
[[6]
[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[8]]
[[6]
[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[3]]
[[6]
[6]
[3]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[3]]
[[6]
[3]
[6]
[3]
[3]
[3]
[6]
[3]
[3]
[1]
[1]
[3]
[3]]
[[6]
[3]
[6]
[6]
[3]
[3]
[3]
[3]
[3]
[1]
[1]
[3]]
[[6]
[6]
[3]
[3]
[3]
[3]
[3]
[6]
[1]
[3]
[3]]
[[6]
[3]
[6]
[3]
[6]
[3]
[3]]
[[6]
[3]
[3]
[6]
[6]
[3]
[3]]
[[6]
[3]
[3]
[3]
[6]
[3]
[3]
[8]]
[[6]
[3]
[3]
[3]
[6]
[6]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[6]
[3]]
[[6]
[3]
[3]
[3]
[6]
[3]
[3]
[6]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]
[6]
[3]
[8]]
[[6]
[3]
[3]
[3]
[3]
[3]
[6]
[6]
[3]
[3]
[8]
[8]]
[[6]
[3]
[3]
[6]
[3]
[6]
[8]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[6]
[3]
[6]
[3]
[3]
[3]
[3]
[3]
[8]]
[[6]
[3]
[3]
[6]
[3]
[3]
[6]
[3]
[8]]
[[6]
[3]
[3]
[6]
[3]
[3]]
[[6]
[3]
[3]
[6]
[3]
[3]
[8]]
[[6]
[3]
[3]
[3]
[6]
[3]
[8]
[3]
[6]
[8]]
[[6]
[3]
[3]
[3]
[3]
[3]
[6]
[6]]
[[6]
[3]
[3]
[6]
[6]
[3]
[3]
[6]
[8]
[8]]
[[6]
[3]
[3]
[6]
[6]
[6]
[3]
[8]
[3]]
[[6]
[3]
[6]
[3]
[8]
[3]
[3]
[1]
[3]
[1]]
[[6]
[3]
[3]
[3]
[6]
[3]
[6]
[3]
[8]]
[[6]
[3]
[3]
[3]
[3]
[6]
[3]
[6]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]
[8]
[6]
[3]
[1]
[3]]
[[6]
[3]
[3]
[3]
[3]
[1]
[6]
[6]]
[[6]
[3]
[3]
[3]
[1]
[6]
[3]
[6]
[1]
[3]
[6]]
[[6]
[3]
[3]
[3]
[1]
[1]
[3]
[6]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]
[3]
[8]]
[[6]
[3]
[3]
[3]
[3]
[6]
[3]
[8]
[3]
[6]]
[[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]
[3]
[8]]
[[6]
[3]
[3]
[3]
[3]
[3]
[3]
[8]
[3]]
[[6]
[3]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[8]]
[[6]
[3]
[3]
[3]
[3]
[3]
[3]
[6]]
[[6]
[3]
[3]
[3]
[6]
[3]
[3]
[8]]
[[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[6]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[3]]
[[6]
[3]
[3]
[3]
[3]
[6]
[3]]
[[6]
[3]
[3]
[6]
[3]
[3]]
[[6]
[3]
[3]
[6]
[3]
[3]]
[[6]
[3]
[3]
[6]
[3]
[3]]
[[6]
[3]
[3]
[6]
[3]]
[[6]
[3]
[6]
[3]
[3]
[3]]
[[6]
[6]
[3]
[3]
[3]
[3]]
[[6]
[6]
[3]
[3]]
[[6]
[6]
[3]
[3]
[3]]
[[6]
[6]
[3]
[3]
[7]]
[[6]
[6]
[3]
[3]
[7]]
[[6]
[7]
[6]
[3]]
[[6]
[6]
[7]]
[[6]]
[[6]]
()
()
()
()
[[1]]
[[1]]
()
[[3]
[3]]
[[3]
[3]]
[[3]
[3]
[1]]
[[3]
[3]
[6]
[8]]
[[3]
[3]
[1]
[3]
[1]
[6]]
[[3]
[3]
[6]
[1]]
[[3]
[1]
[3]
[6]]
[[3]
[6]
[3]]
[[3]
[1]
[3]
[6]
[3]
[3]]
[[3]
[6]
[1]
[3]
[6]
[3]]
[[3]
[6]
[6]
[3]]
[[3]
[6]
[1]
[3]
[6]
[3]]
[[3]
[3]
[6]
[1]]
[[3]
[3]
[3]
[6]]
[[3]
[6]
[3]
[6]]
[[3]
[3]
[6]
[6]
[8]]
[[3]
[3]
[6]]
[[3]
[6]
[3]]
[[6]
[3]
[3]
[3]]
[[3]
[6]
[3]
[3]
[6]]
[[6]
[3]
[3]]
[[3]
[3]
[6]
[3]]
[[1]
[3]
[3]
[6]]
[[6]
[3]
[1]
[3]]
[[6]
[3]
[3]
[1]]
[[3]
[6]
[3]
[1]]
[[1]
[3]
[3]
[6]
[6]]
[[3]
[6]
[1]
[3]
[1]]
[[3]
[1]
[3]
[6]]
[[6]
[1]
[3]
[3]
[1]]
[[1]
[6]
[3]
[1]
[3]
[8]]
[[1]
[3]
[3]
[1]
[8]]
[[1]
[3]
[3]
[8]]
[[1]
[3]
[3]]
[[1]
[3]
[3]
[8]]
[[3]
[1]
[3]]
[[3]
[3]
[1]
[2]]
[[3]
[1]
[3]]
[[3]
[1]
[3]]
[[3]
[1]
[3]
[1]]
[[3]
[1]
[3]
[1]
[2]]
[[3]
[3]
[1]
[1]
[2]
[3]]
[[3]
[3]
[1]
[2]]
[[3]
[1]
[3]
[1]]
[[3]
[1]
[3]
[1]
[1]]
[[3]
[3]
[1]
[1]
[1]]
[[3]
[3]
[2]
[1]
[1]]
[[3]
[1]
[3]]
[[ 3]
[ 1]
[ 3]
[64]
[ 2]
[ 1]]
[[3]
[1]
[3]
[1]
[1]]
[[1]
[3]
[3]
[1]]
[[3]
[1]
[3]
[2]
[1]]
[[3]
[1]
[2]
[3]]
[[ 3]
[ 2]
[64]
[ 3]]
[[3]
[1]
[3]
[2]
[1]]
[[3]
[3]
[1]
[3]]
[[3]
[3]
[1]
[3]
[1]
[1]]
[[ 3]
[ 2]
[64]
[ 1]
[ 3]]
[[3]
[3]
[2]
[3]
[1]
[1]]
[[ 3]
[ 3]
[64]
[ 1]
[ 2]]
[[3]
[1]]
[[3]
[3]
[1]]
[[3]
[1]
[3]
[1]
[6]]
[[3]
[1]
[3]
[2]
[1]]
[[3]
[1]
[6]
[1]
[2]]
[[3]
[2]
[6]
[1]
[1]]
[[3]
[2]
[1]
[3]
[1]
[1]]
[[1]
[3]
[1]
[6]
[2]
[1]
[1]]
[[3]
[1]
[1]
[3]
[1]
[3]]
[[3]
[1]
[1]
[1]
[6]
[3]
[2]]
[[1]
[3]
[6]
[1]
[1]
[1]
[3]]
[[3]
[6]
[1]
[2]
[3]
[1]
[1]
[1]]
[[3]
[6]
[1]
[1]
[3]
[1]
[1]]
[[3]
[6]
[1]
[1]
[1]
[1]
[3]
[1]]
[[3]
[6]
[3]
[1]
[1]
[3]
[1]
[1]
[1]]
[[3]
[2]
[6]
[1]
[1]
[1]]
[[3]
[2]
[1]
[6]
[1]
[1]
[2]
[1]
[1]]
[[3]
[1]
[2]
[6]
[1]
[2]
[1]
[3]]
[[3]
[6]
[1]
[1]
[2]
[1]
[1]
[3]]
[[3]
[6]
[2]
[1]
[1]]
[[3]
[1]
[2]
[6]]
[[3]
[6]
[2]
[1]
[3]]
[[3]
[2]
[6]
[1]
[2]
[1]]
[[2]
[3]
[6]
[1]
[2]]
[[3]
[2]
[3]]
[[2]
[3]
[2]
[1]
[1]
[2]
[3]
[2]
[1]]
[[3]
[2]
[2]
[1]
[1]
[1]
[1]
[2]]
[[2]
[3]
[2]
[2]
[1]
[1]
[1]
[1]]
[[2]
[3]
[4]
[1]
[2]
[1]
[6]]
[[3]
[1]
[1]
[1]
[2]]
[[3]
[1]
[1]
[6]
[1]
[1]]
[[3]
[1]
[6]
[1]
[1]
[2]
[1]]
[[1]
[3]
[1]
[1]
[6]
[1]]
[[3]
[1]
[6]
[1]]
[[1]
[3]
[2]
[1]
[1]
[3]
[6]]
[[3]
[1]
[1]
[2]
[1]
[6]]
[[3]
[1]
[1]
[6]
[3]
[1]]
[[3]
[1]
[1]
[1]
[6]
[3]
[1]
[3]
[1]]
[[1]
[3]
[1]
[1]
[6]
[3]
[3]]
[[3]
[1]
[1]
[1]
[6]
[3]
[3]]
[[3]
[1]
[1]
[1]
[3]
[6]
[3]
[1]
[1]]
[[3]
[1]
[3]
[1]
[6]
[3]
[1]
[1]
[1]
[1]]
[[3]
[1]
[3]
[1]
[6]
[1]
[3]
[1]]
[[3]
[1]
[3]
[6]
[1]
[3]
[1]
[1]
[3]]
[[3]
[1]
[1]
[3]
[6]
[3]
[1]
[3]
[1]]
[[3]
[1]
[1]
[6]
[1]
[3]
[1]]
[[3]
[1]
[1]
[6]
[1]
[3]
[2]]
[[3]
[1]
[6]
[1]
[3]
[2]
[1]
[2]]
[[3]
[1]
[6]
[1]
[2]
[3]
[1]]
[[1]
[3]
[6]
[3]
[1]
[1]
[4]]
[[3]
[1]
[6]
[1]]
[[3]
[1]
[3]
[6]
[1]
[1]]
[[1]
[3]
[1]
[6]
[3]
[1]
[1]
[2]]
[[3]
[1]
[6]
[1]
[1]
[1]]
[[3]
[1]
[6]
[1]
[1]
[3]]
[[3]
[1]
[6]
[1]
[3]]
[[3]
[1]
[3]
[1]
[2]
[6]
[2]
[3]]
[[3]
[1]
[3]
[6]
[2]
[1]]
[[3]
[3]
[1]
[2]
[1]
[6]]
[[3]
[3]
[1]
[2]
[6]
[1]]
[[3]
[3]
[1]
[6]
[2]
[3]]
[[3]
[3]
[1]]
[[3]
[1]
[3]]
[[3]
[3]
[6]
[1]]
[[3]
[6]
[3]
[1]
[1]
[6]]
[[3]
[6]
[3]
[1]
[1]]
[[3]
[1]
[3]
[6]
[3]
[4]]
[[3]
[4]
[6]
[3]
[1]
[3]]
[[3]
[4]
[3]
[1]
[6]]
[[3]
[3]
[1]
[1]
[4]]
[[3]
[3]
[1]]
[[3]
[1]
[3]
[1]
[6]
[2]]
[[3]
[6]
[3]
[6]
[1]]
[[3]
[3]
[1]]
[[3]
[4]
[1]
[2]
[1]]
[[3]
[1]
[2]
[4]
[3]
[1]]
[[3]
[1]
[1]
[2]]
[[3]
[1]
[4]
[1]]
[[3]
[4]
[1]
[1]]
[[3]
[1]
[1]]
[[3]
[1]
[3]
[4]
[1]]
[[3]
[1]
[1]
[4]]
[[3]
[1]
[1]]
[[3]
[1]
[1]]
[[3]
[1]
[1]]
[[3]
[2]
[3]
[1]
[1]]
[[3]
[1]
[2]
[1]]
[[3]
[1]]
[[3]
[1]
[1]]
[[3]
[1]]
[[3]
[1]
[1]
[2]
[2]
[3]]
[[3]
[1]
[1]
[2]
[1]
[3]]
[[3]
[1]
[3]
[2]
[1]
[1]]
[[3]
[1]
[1]
[2]
[1]
[3]]
[[3]
[1]
[1]
[4]
[3]
[2]]
[[3]
[1]
[1]
[6]
[3]]
[[3]
[1]
[3]
[1]]
[[3]
[2]
[1]
[6]
[2]
[2]
[2]
[3]]
[[3]
[2]
[6]
[2]
[1]
[2]
[3]]
[[3]
[2]
[6]
[2]
[3]
[1]
[2]
[1]]
[[3]
[2]
[2]
[1]
[1]
[1]
[6]
[3]]
[[3]
[2]
[2]
[1]
[2]
[2]
[3]]
[[2]
[3]
[2]
[1]
[1]
[3]
[6]
[2]]
[[3]
[2]
[2]
[3]
[2]
[1]
[6]]
[[2]
[3]
[2]
[3]
[2]
[1]
[3]]
[[2]
[3]
[2]
[2]
[3]
[1]
[3]
[1]]
[[2]
[3]
[2]
[1]
[3]
[2]
[6]]
[[3]
[2]
[1]
[2]
[6]
[3]]
[[3]
[2]
[1]
[6]]
[[3]
[6]
[2]
[1]
[2]]
[[3]
[1]
[6]
[2]
[2]
[2]
[3]]
[[3]
[2]
[6]
[1]
[2]
[2]
[2]]
[[3]
[2]
[1]
[6]
[2]
[1]
[2]]
[[1]
[2]
[3]
[2]
[6]
[2]
[2]
[3]]
[[3]
[1]
[6]
[2]
[2]
[2]
[2]]
[[3]
[1]
[2]
[6]
[2]
[2]
[2]
[2]
[3]]
[[3]
[1]
[2]
[2]
[6]
[2]
[2]
[3]]
[[3]
[1]
[2]
[6]
[2]
[3]
[2]
[2]
[3]]
[[3]
[2]
[1]
[2]
[2]
[6]
[2]
[2]
[1]]
[[3]
[1]
[2]
[1]
[6]
[2]
[2]
[2]]
[[1]
[3]
[6]
[2]
[2]
[1]
[2]]
[[1]
[3]
[6]
[2]
[2]
[2]
[3]
[4]]
[[2]
[2]
[2]
[1]
[3]
[2]
[6]
[2]
[1]
[3]]
[[2]
[2]
[3]
[2]
[1]
[6]
[1]
[3]]
[[3]
[2]
[1]
[2]
[3]
[6]
[2]
[3]]
[[3]
[2]
[2]
[1]
[2]
[6]
[3]
[2]
[4]]
[[3]
[2]
[2]
[1]
[2]
[6]
[3]
[4]
[4]
[1]]
[[3]
[2]
[2]
[1]
[2]
[6]
[4]
[3]
[1]
[3]
[2]]
[[3]
[1]
[6]
[3]
[2]
[2]
[3]
[2]
[3]]
[[3]
[2]
[2]
[1]
[2]
[6]
[3]
[4]
[3]
[1]]
[[3]
[1]
[6]
[1]]
[[6]
[1]
[3]
[2]
[2]]
[[3]
[1]
[6]
[2]
[2]
[2]
[3]]
[[1]
[3]
[6]
[1]
[2]]
[[1]
[2]
[3]
[6]
[6]]
[[1]
[2]
[6]
[6]]
[[1]
[2]
[6]
[6]
[9]]
[[1]
[9]]
()
()
()
[[4]
[1]]
[[4]
[1]
[4]]
[[4]
[4]
[1]
[2]
[1]
[1]]
[[4]
[1]
[4]
[1]
[6]]
[[1]
[4]
[3]
[1]]
[[1]
[4]
[1]
[3]
[2]
[6]
[1]
[2]]
[[1]
[1]
[3]
[4]
[2]
[1]]
[[1]
[4]
[2]
[3]
[1]
[1]
[1]
[3]]
[[3]
[1]
[2]
[1]
[2]
[3]
[3]
[1]
[4]
[1]]
[[1]
[2]
[1]
[3]
[3]
[2]
[1]
[1]
[3]
[1]]
[[1]
[3]
[1]
[3]
[2]
[2]
[1]
[3]
[4]
[1]]
[[1]
[3]
[1]
[3]
[3]
[1]
[2]
[1]]
[[1]
[4]
[3]
[1]
[3]
[3]
[1]
[1]
[1]
[1]]
[[1]
[3]
[3]
[2]
[3]
[1]
[1]
[4]]
[[1]
[3]
[3]
[2]
[4]
[3]
[1]
[3]
[1]]
[[1]
[3]
[3]
[1]
[4]
[3]
[3]
[2]
[1]]
[[4]
[3]
[1]
[3]
[1]
[1]
[1]
[1]
[4]
[1]
[3]]
[[4]
[3]
[4]
[1]
[1]
[1]
[3]
[3]
[4]
[1]
[1]]
[[1]
[4]
[3]
[1]
[4]
[1]
[3]
[1]]
[[4]
[1]
[4]
[1]
[6]
[3]
[1]
[1]]
[[1]
[4]
[6]
[4]
[4]
[3]
[1]
[1]
[1]]
[[4]
[1]
[6]
[4]
[4]
[3]
[1]
[1]
[1]
[1]]
[[1]
[4]
[1]
[1]
[1]
[4]
[1]]
[[1]
[4]
[1]
[3]
[4]
[3]
[6]
[1]
[1]]
[[6]
[4]
[1]
[4]
[1]
[1]
[1]
[1]
[3]
[1]]
[[6]
[1]
[3]
[1]
[1]
[3]
[4]
[1]
[4]
[1]]
[[1]
[1]
[1]
[3]
[3]
[1]
[1]
[1]
[4]]
[[3]
[1]
[1]
[1]
[1]
[4]]
[[3]
[1]
[3]
[1]
[3]
[1]
[1]
[1]
[1]]
[[3]
[3]
[1]
[1]
[1]
[1]
[3]
[3]
[1]]
[[1]
[3]
[1]
[1]
[1]
[6]
[1]]
[[1]
[1]
[1]
[1]
[3]
[6]
[4]]
[[3]
[1]
[4]
[1]
[3]
[1]
[1]
[1]
[3]
[1]
[4]]
[[1]
[1]
[1]
[1]
[1]
[3]
[3]
[3]]
[[1]
[1]
[1]
[1]
[3]
[1]
[1]
[4]]
[[4]
[1]
[1]
[1]
[6]
[3]
[1]
[2]
[1]
[3]
[3]]
[[1]
[1]
[4]
[1]
[3]
[4]
[1]
[2]
[4]]
[[4]
[3]
[4]
[3]
[1]
[1]
[1]
[1]
[6]
[1]]
[[4]
[4]
[1]
[3]
[1]
[1]
[3]
[1]
[1]
[1]
[1]
[3]]
[[4]
[4]
[3]
[1]
[3]
[1]
[1]
[1]
[1]]
[[4]
[4]
[1]
[3]
[1]
[4]]
[[4]
[4]
[1]
[4]
[6]
[3]
[1]]
[[1]
[3]
[1]
[4]
[4]
[1]
[1]]
[[1]
[1]
[3]
[4]
[1]
[1]
[4]]
[[4]
[1]
[3]
[1]
[1]
[4]]
[[1]
[1]
[4]
[3]
[1]
[1]]
[[3]
[3]
[1]
[1]
[1]
[1]
[4]
[3]
[1]]
[[3]
[3]
[1]
[1]
[1]
[4]
[1]]
[[4]
[3]
[1]
[4]
[1]
[3]]
[[4]
[3]
[1]
[3]
[1]
[1]
[3]]
[[3]
[4]
[3]
[1]
[4]]
[[3]
[4]
[3]
[1]]
[[3]
[4]
[1]
[4]
[1]]
[[1]
[3]
[1]]
[[1]
[3]
[3]
[1]
[1]]
[[3]
[1]
[3]
[1]
[4]]
[[3]
[3]
[1]
[1]]
[[3]
[1]
[1]]
[[1]
[3]]
[[1]
[2]
[2]]
[[1]]
[[1]]
[[2]
[1]
[3]]
[[3]
[3]
[1]]
[[1]
[1]
[3]
[1]
[1]]
[[1]
[1]
[1]]
[[1]
[1]
[1]]
[[2]
[1]
[1]
[3]
[1]
[2]]
[[1]
[1]
[3]
[4]
[1]]
[[1]
[1]
[1]
[3]]
[[1]
[1]
[3]
[4]
[3]
[1]]
[[1]
[1]
[3]
[3]]
[[4]
[3]
[1]
[3]
[1]]
[[1]
[4]
[3]
[1]
[3]]
[[1]
[3]]
[[3]
[3]
[1]
[3]]
[[3]
[1]
[3]
[3]
[2]]
[[3]
[1]
[3]
[8]
[3]
[6]]
[[1]
[3]
[6]
[3]
[3]]
[[1]
[6]
[3]
[3]
[1]
[4]]
[[1]
[3]
[3]
[6]
[3]]
[[3]
[3]
[1]
[3]
[2]]
[[3]
[8]
[3]
[3]
[1]
[4]
[1]
[6]
[1]]
[[3]
[4]
[6]
[8]
[3]
[1]
[1]
[1]
[3]]
[[3]
[3]
[3]
[1]
[1]
[1]
[4]]
[[3]
[3]
[1]
[1]
[1]
[1]
[4]]
[[3]
[3]
[1]
[1]
[4]
[1]]
[[3]
[1]
[4]
[1]
[3]
[3]]
[[4]
[1]
[3]
[1]]
[[3]
[3]
[3]
[1]
[1]
[6]
[1]]
[[1]
[3]
[3]
[1]
[1]
[1]
[6]]
[[3]
[4]
[3]
[6]
[1]
[1]]
[[3]
[3]
[4]
[1]
[1]
[6]
[1]
[3]]
[[3]
[3]
[1]
[1]
[4]
[3]]
[[4]
[1]
[3]
[3]
[3]
[1]
[1]]
[[4]
[1]
[3]
[3]
[3]
[4]
[1]
[1]
[1]]
[[1]
[4]
[3]
[3]
[3]
[1]
[1]]
[[1]
[3]
[3]
[1]
[4]
[1]
[4]
[1]
[3]]
[[3]
[3]
[1]
[1]
[4]
[1]
[1]
[4]]
[[3]
[1]
[3]
[1]
[1]
[1]
[3]]
[[1]
[3]
[3]
[1]
[1]
[1]]
[[1]
[3]
[1]
[3]
[1]
[3]]
[[1]
[4]
[3]
[3]
[3]
[1]
[1]]
[[1]
[3]
[3]
[1]
[3]
[4]
[1]]
[[3]
[1]
[3]
[3]
[1]]
[[3]
[3]
[1]
[1]
[1]
[1]]
[[3]
[1]
[3]
[1]]
[[3]
[3]]
[[3]
[3]
[1]
[8]
[1]]
[[3]
[1]
[3]
[1]]
[[3]
[1]
[3]
[1]
[1]
[1]
[8]]
[[3]
[3]
[1]
[8]]
[[1]
[3]
[3]
[1]
[8]]
[[1]
[4]
[3]
[3]
[1]]
[[1]
[3]]
[[8]
[3]
[4]]
[[4]
[3]
[8]
[3]
[1]]
[[4]
[3]
[1]
[1]]
[[3]
[1]
[4]
[1]
[3]]
[[1]
[3]
[3]]
[[1]
[3]
[3]
[4]]
[[1]
[3]
[3]
[1]]
[[1]
[3]
[3]]
[[1]
[3]
[3]
[1]
[3]]
[[1]
[3]
[3]
[1]]
[[1]
[3]
[1]
[6]
[3]]
[[1]
[3]
[3]]
[[1]
[3]
[3]
[1]]
[[1]
[3]
[3]
[1]]
[[1]
[3]
[3]
[3]
[1]]
[[1]
[3]
[3]
[1]]
[[1]
[3]
[1]]
[[1]
[3]
[1]
[3]
[1]]
[[1]
[3]
[1]
[1]
[3]]
[[1]
[3]
[3]
[1]
[3]
[1]
[3]
[3]]
[[1]
[3]
[1]
[1]
[3]]
[[1]
[3]
[3]
[1]
[3]]
[[1]
[3]
[3]
[1]
[3]]
[[1]
[3]
[1]]
[[1]
[3]
[1]
[1]
[3]]
[[1]
[3]
[3]
[1]]
[[1]
[1]
[3]
[3]]
[[1]
[1]
[3]
[3]]
[[1]
[1]
[3]
[3]]
[[1]
[3]
[3]
[1]
[3]
[2]]
[[1]
[1]
[3]
[3]]
[[1]
[3]
[1]
[3]
[1]
[1]]
[[1]
[3]
[1]
[1]
[1]
[3]
[1]
[3]
[3]]
[[3]
[1]
[1]
[1]
[3]
[3]
[1]]
[[1]
[3]
[6]
[1]
[3]
[3]
[1]
[3]
[1]]
[[1]
[1]
[1]
[3]
[1]
[3]
[3]]
[[1]
[3]
[1]
[1]
[1]
[1]
[1]
[3]
[3]]
[[1]
[1]
[1]
[3]
[1]
[1]
[3]
[3]]
[[1]
[1]
[1]
[1]
[3]
[1]
[3]
[3]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]
[6]]
[[1]
[1]
[1]
[1]
[1]
[1]
[3]
[6]]
[[1]
[1]
[1]
[1]
[3]
[1]
[1]]
[[1]
[1]
[1]
[1]
[3]
[1]
[1]]
[[1]
[1]
[1]
[1]
[3]
[1]
[3]]
[[1]
[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]]
[[1]
[1]
[1]]
[[1]
[1]]
[[1]]
[[1]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[3]
[1]
[3]
[3]]
[[1]
[3]
[1]
[1]
[3]
[3]]
[[1]
[3]
[3]
[1]
[1]
[3]]
[[1]
[3]
[3]
[1]
[1]]
[[1]
[3]
[3]
[1]]
[[1]
[3]
[1]
[1]
[3]
[1]]
[[1]
[1]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]]
[[1]
[1]
[3]
[1]]
[[1]
[1]
[1]
[3]
[1]]
[[1]
[1]
[3]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[3]]
[[1]
[1]
[1]]
[[1]
[1]
[3]
[1]
[1]
[1]]
[[1]
[1]
[1]]
[[1]
[1]
[1]
[3]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[3]
[1]
[1]
[1]]
[[1]
[3]
[1]
[1]
[1]]
[[1]
[3]
[1]
[1]
[1]
[1]
[1]
[1]
[3]
[3]]
[[1]
[3]
[1]
[3]
[3]
[1]
[3]
[1]]
[[1]
[3]
[3]
[3]
[3]
[1]]
[[1]
[3]
[3]
[1]
[1]
[3]]
[[1]
[1]
[3]
[3]
[3]
[1]
[1]]
[[1]
[3]
[1]
[1]
[3]
[3]]
[[1]
[3]
[3]
[3]
[1]
[1]
[1]
[3]
[1]
[1]]
[[1]
[3]
[1]
[1]
[1]
[1]
[3]
[1]
[3]
[3]]
[[1]
[1]
[1]
[3]
[1]
[3]
[1]
[3]
[3]
[1]
[1]]
[[1]
[1]
[1]
[3]
[3]
[3]
[1]]
[[1]
[3]
[1]
[3]
[1]
[1]
[3]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[3]
[1]
[1]]
[[1]
[1]
[1]
[3]
[1]
[1]
[1]]
[[1]
[1]
[1]
[3]
[1]]
[[1]
[1]
[1]
[3]
[3]]
[[1]
[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[3]
[6]]
[[1]
[1]
[1]
[3]]
[[1]
[1]
[3]
[1]]
[[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[3]
[3]
[1]]
[[1]
[1]
[1]
[3]
[1]]
[[1]
[1]
[1]]
[[1]
[1]
[1]
[3]
[1]]
[[1]
[1]
[1]
[3]
[1]]
[[1]
[1]
[1]
[3]
[1]]
[[1]
[1]
[1]
[3]
[1]]
[[1]
[1]
[3]
[1]
[1]]
[[1]
[1]
[3]
[1]
[1]]
[[1]
[1]
[3]
[1]
[1]]
[[1]
[1]
[1]
[3]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[3]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[3]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[1]
[1]
[3]
[9]]
[[1]
[1]
[1]
[1]
[3]
[3]]
[[1]
[1]
[1]
[1]
[3]
[3]]
[[1]
[1]
[1]
[1]
[3]
[1]
[3]
[1]]
[[1]
[1]
[1]
[3]
[1]
[1]
[3]
[3]]
[[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[3]
[3]
[1]
[3]]
[[1]
[1]
[1]
[3]
[3]
[1]
[1]
[3]]
[[1]
[1]
[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[3]
[1]
[1]]
[[1]
[1]
[3]
[1]
[1]
[1]
[1]]
[[1]
[1]
[3]
[1]
[1]
[1]]
[[1]
[1]
[1]
[3]
[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[1]
[3]
[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[1]
[1]
[3]]
[[1]
[1]
[1]
[3]
[1]]
[[1]
[1]
[3]
[1]
[3]
[1]
[1]]
[[1]
[3]
[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[3]
[1]
[1]
[1]
[1]]
[[1]
[1]
[3]
[1]
[3]
[1]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[3]]
[[1]
[3]
[1]
[3]
[1]]
[[1]
[1]
[3]
[1]
[1]
[3]]
[[1]
[1]
[3]]
[[1]
[1]
[1]
[3]
[1]]
[[1]
[1]
[3]
[1]
[1]]
[[1]
[1]
[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[3]
[1]
[1]]
[[1]
[1]
[3]
[1]]
[[1]
[1]
[1]
[3]
[1]
[3]
[1]]
[[1]
[1]
[1]
[3]
[1]
[3]
[1]
[1]]
[[1]
[1]
[1]
[3]
[1]
[1]
[3]
[8]]
[[1]
[1]
[1]
[1]
[3]
[1]]
[[1]
[3]
[1]
[3]
[1]]
[[1]
[3]
[8]
[3]
[1]
[1]
[1]]
[[1]
[3]
[1]
[1]
[1]
[3]
[1]
[1]]
[[1]
[1]
[1]
[3]
[1]]
[[1]
[1]
[1]]
[[1]
[3]
[1]
[1]
[4]]
[[1]
[3]
[1]]
[[1]
[1]
[3]]
[[1]
[3]
[4]
[1]
[6]
[1]]
[[1]
[4]
[6]
[3]]
[[1]
[4]
[3]
[6]
[1]]
[[1]
[1]
[4]
[3]
[6]]
[[4]
[1]
[1]
[1]]
[[4]
[1]
[1]
[1]
[1]
[3]]
[[3]
[4]
[4]
[1]
[1]
[4]
[6]
[6]
[1]
[1]]
[[4]
[3]
[1]
[1]
[4]
[6]
[4]
[6]
[1]]
[[4]
[3]
[4]
[1]
[6]
[1]
[1]
[1]]
[[4]
[4]
[1]
[1]
[1]
[3]
[1]]
[[1]
[4]
[1]
[3]
[6]
[4]]
[[1]
[6]
[3]
[6]]
[[1]
[6]
[3]
[1]
[6]
[1]]
[[1]
[3]
[6]
[4]
[1]
[4]]
[[1]
[6]
[1]
[3]
[4]
[1]
[1]]
[[1]
[1]
[1]]
[[1]
[1]
[4]
[1]]
[[1]
[1]
[3]
[1]]
[[1]
[4]
[1]
[3]
[1]
[1]]
[[1]
[1]
[1]
[3]
[1]
[1]]
[[1]
[1]
[1]
[4]
[4]]
[[1]
[1]
[1]
[1]]
[[1]
[1]
[1]
[1]
[4]]
[[1]
[1]
[1]
[1]
[4]]
[[4]
[1]
[1]
[1]
[1]
[3]
[1]]
[[4]
[1]
[1]
[1]
[1]
[1]
[4]]
[[1]
[4]
[1]
[2]]
[[1]
[1]
[1]
[1]
[4]]
[[1]
[3]
[4]
[1]
[1]
[1]
[3]
[1]]
[[4]
[1]
[1]
[3]
[3]
[1]
[1]
[1]
[4]]
[[1]
[3]
[1]
[1]
[4]
[1]]
[[1]
[1]
[4]
[1]
[4]
[3]]
[[1]
[1]
[4]
[1]
[1]]
[[1]
[4]
[3]
[1]
[1]]
[[4]
[1]
[1]
[1]
[1]
[1]]
[[4]
[1]
[1]
[1]
[1]]
[[4]
[1]
[3]
[1]
[1]
[1]
[1]
[1]]
[[3]
[4]
[1]
[1]
[1]
[1]
[6]
[3]]
[[3]
[4]
[1]
[1]
[3]
[6]
[1]
[3]
[3]]
[[3]
[4]
[1]
[1]
[3]
[1]]
[[3]
[1]
[1]
[1]]
[[3]
[6]
[1]
[3]]
[[3]
[1]
[3]
[1]
[3]]
[[3]
[1]]
[[3]
[3]]
[[3]]
[[3]
[1]
[3]
[4]
[3]]
[[3]
[3]
[4]
[4]
[1]]
[[3]
[3]
[1]
[1]
[4]
[1]]
[[3]
[3]
[1]
[4]
[1]
[4]]
[[3]
[3]
[4]
[1]]
[[1]
[4]
[3]
[3]
[1]]
[[4]
[3]
[1]
[3]
[1]
[3]]
[[1]
[4]
[3]
[3]
[1]
[3]
[3]]
[[4]
[1]
[3]
[3]
[1]
[3]]
[[1]
[4]
[3]
[3]
[1]
[3]]
[[3]
[3]
[1]
[4]
[1]
[3]
[3]]
[[3]
[3]
[1]
[4]
[1]
[3]]
[[3]
[1]
[3]
[1]]
[[1]
[3]
[4]
[1]
[3]]
[[3]
[1]
[4]
[1]
[1]]
[[3]
[1]
[3]
[1]
[3]]
[[3]
[1]
[4]
[3]
[1]
[3]]
[[1]
[3]
[4]
[1]]
[[4]
[3]
[1]
[1]
[1]]
[[1]
[4]
[1]
[1]
[3]
[1]
[1]
[3]]
[[1]
[3]
[1]
[3]
[4]
[1]
[6]]
[[1]
[3]
[1]
[4]
[1]
[4]
[1]
[3]]
[[3]
[4]
[1]
[3]
[1]]
[[3]
[4]
[3]
[1]]
[[4]
[3]
[3]
[1]
[3]]
[[4]
[3]
[1]
[3]
[3]]
[[4]
[3]
[1]
[1]]
[[4]
[1]]
[[1]
[6]
[3]]
[[1]
[3]
[1]
[1]]
[[3]
[1]
[1]
[1]
[6]]
[[1]
[1]
[4]
[1]
[1]
[3]]
[[1]
[3]
[1]
[1]
[1]
[4]]
[[1]
[1]
[1]
[3]
[1]
[1]
[4]]
[[1]
[4]
[3]
[1]
[1]
[1]
[1]
[4]]
[[1]
[4]
[3]
[1]
[4]
[1]
[1]
[1]
[6]]
[[1]
[3]
[1]
[1]
[1]
[1]
[6]
[4]]
[[1]
[3]
[6]
[1]
[1]
[6]
[4]
[8]]
[[1]
[6]
[1]
[1]
[3]
[8]]
[[1]
[6]
[8]]
[[6]
[1]]
[[6]
[1]
[8]]
[[6]
[1]]
()
()
()
()
()
[[1]]
[[1]
[1]]
[[6]
[6]
[8]
[1]]
[[6]
[6]]
[[6]
[6]
[8]
[1]]
[[6]
[6]
[1]
[1]
[3]
[6]
[8]
[1]]
[[6]
[6]
[1]
[1]
[1]]
[[6]
[6]
[1]
[6]
[8]
[3]]
[[6]
[6]
[1]
[6]
[3]]
[[6]
[6]
[6]
[1]
[3]
[1]
[1]]
[[6]
[3]
[1]
[6]
[6]
[1]]
[[6]
[3]
[6]
[6]
[1]
[1]]
[[6]
[3]
[6]
[6]
[6]
[1]
[1]]
[[6]
[6]
[6]
[1]
[1]
[3]
[6]]
[[6]
[3]
[1]
[6]
[6]
[6]
[1]]
[[6]
[1]
[6]
[6]
[3]
[6]]
[[6]
[1]
[6]
[3]
[6]
[6]]
[[6]
[6]
[1]
[6]
[3]]
[[6]
[6]
[1]
[6]]
[[6]
[6]
[1]
[1]
[6]
[6]]
[[6]
[1]
[6]
[6]
[6]]
[[1]
[6]
[6]
[6]
[6]
[1]
[1]]
[[1]
[6]
[6]
[6]
[6]
[1]
[1]]
[[6]
[1]
[6]
[6]
[1]]
[[6]
[1]
[6]
[6]
[1]
[1]]
[[6]
[1]
[6]
[1]
[1]
[6]
[1]]
[[6]
[6]
[6]
[6]
[3]
[1]
[1]
[1]]
[[3]
[6]
[1]
[6]
[6]]
[[6]
[6]
[3]
[1]
[6]
[1]]
[[3]
[6]
[1]
[6]
[6]
[6]
[1]
[1]
[1]]
[[3]
[6]
[1]
[6]
[6]
[1]
[1]]
[[3]
[6]
[6]
[6]
[6]
[1]
[3]]
[[3]
[6]
[6]
[6]
[6]
[1]]
[[3]
[6]
[6]
[6]
[6]
[1]]
[[3]
[6]
[6]
[1]
[6]
[6]
[1]]
[[3]
[6]
[6]
[1]
[6]]
[[3]
[6]
[6]
[6]
[1]]
[[6]
[3]
[6]
[1]
[1]]
[[6]
[3]
[1]
[6]
[1]
[6]
[6]
[1]]
[[6]
[6]
[3]
[1]
[6]
[1]]
[[6]
[6]
[3]
[6]
[1]]
[[6]
[6]
[3]
[3]
[1]
[1]
[6]]
[[6]
[3]
[6]
[6]
[3]
[6]
[1]
[1]
[1]]
[[3]
[6]
[6]
[6]
[1]
[3]]
[[6]
[6]
[6]
[3]]
[[6]
[6]
[3]
[6]
[1]
[1]]
[[6]
[6]
[3]
[6]
[1]
[1]]
[[6]
[6]
[6]
[3]
[1]
[1]]
[[6]
[6]
[6]
[3]
[6]
[1]]
[[6]
[6]
[6]
[3]
[6]
[1]]
[[6]
[6]
[3]
[6]
[1]
[1]]
[[6]
[6]
[3]
[6]
[1]
[1]]
[[6]
[6]
[3]
[6]
[1]
[1]]
[[6]
[6]
[3]
[6]
[1]
[3]
[1]]
[[6]
[6]
[6]
[3]
[1]
[1]
[3]]
[[6]
[6]
[6]
[3]
[1]
[1]]
[[6]
[6]
[6]
[3]
[1]
[1]]
[[6]
[6]
[3]
[6]]
[[6]
[6]
[6]
[3]
[1]
[1]]
[[6]
[6]
[6]
[3]
[1]
[1]
[1]]
[[ 6]
[ 6]
[ 6]
[ 3]
[ 1]
[64]]
[[ 6]
[ 6]
[ 6]
[64]
[ 1]
[ 1]
[ 3]
[ 8]]
[[6]
[6]
[6]
[3]
[1]]
[[6]
[6]
[3]
[6]]
[[6]
[6]
[3]
[1]
[8]
[1]]
[[6]
[6]
[6]
[3]
[1]
[1]
[8]]
[[6]
[6]
[3]
[1]
[1]
[6]]
[[6]
[3]
[6]
[6]
[8]]
[[6]
[3]
[6]
[6]
[8]]
[[3]
[6]
[6]
[8]]
[[3]
[6]
[8]
[6]]
[[3]
[6]
[6]
[8]
[1]]
[[3]
[6]
[6]
[8]]
[[3]
[6]
[6]
[8]]
[[3]
[6]
[6]
[8]]
[[3]
[6]
[6]
[8]]
[[3]
[6]
[6]
[8]
[1]]
[[3]
[6]
[6]
[8]
[1]
[3]
[1]
[1]]
[[3]
[1]
[6]
[6]
[3]
[1]
[8]]
[[6]
[3]
[1]
[3]
[6]
[8]]
[[6]
[3]
[1]
[3]
[6]
[1]]
[[6]
[1]
[3]
[6]
[3]
[8]]
[[6]
[6]
[1]
[3]
[3]
[1]
[8]]
[[6]
[1]
[6]
[3]
[3]
[8]]
[[6]
[1]
[3]
[6]
[1]
[3]
[1]]
[[6]
[1]
[3]
[6]
[3]
[2]
[1]
[1]
[1]]
[[1]
[6]
[3]
[6]
[3]
[1]
[8]]
[[1]
[6]
[3]
[1]
[6]
[3]]
[[1]
[6]
[6]
[1]
[3]
[3]]
[[1]
[6]
[1]
[3]
[6]
[3]
[1]]
[[6]
[1]
[4]
[1]
[6]
[1]
[3]]
[[6]
[1]
[1]
[6]
[1]
[3]
[6]]
[[6]
[1]
[6]
[1]
[1]]
[[6]
[1]
[1]
[6]
[1]
[3]
[6]]
[[6]
[1]
[1]
[6]
[1]
[6]
[3]
[4]
[8]]
[[6]
[1]
[4]
[1]
[6]
[1]
[3]
[6]
[4]]
[[ 1]
[ 1]
[ 1]
[ 6]
[ 6]
[ 4]
[ 3]
[64]
[ 1]
[ 6]]
[[1]
[1]
[6]
[1]
[4]
[6]
[4]
[6]
[3]
[1]]
[[1]
[1]
[1]
[6]
[6]
[3]
[4]]
[[1]
[4]
[6]
[1]
[1]
[3]
[4]
[6]
[1]]
[[1]
[1]
[6]
[6]
[1]
[4]
[3]
[8]
[4]
[1]]
[[1]
[6]
[6]
[1]
[3]
[4]
[4]
[1]
[8]]
[[1]
[6]
[3]
[6]
[4]
[4]
[6]]
[[1]
[6]
[3]
[6]
[6]
[1]
[1]
[4]
[4]]
[[6]
[3]
[1]
[1]
[6]
[6]
[1]
[8]]
[[6]
[1]
[3]
[6]
[6]]
[[1]
[6]
[3]
[6]
[6]
[8]
[1]]
[[6]
[1]
[3]
[6]
[8]
[6]
[8]]
[[1]
[6]
[3]
[6]
[8]
[6]
[8]]
[[1]
[6]
[3]
[6]
[8]
[8]
[6]
[4]
[1]]
[[1]
[6]
[3]
[6]
[4]
[8]
[8]]
[[1]
[3]
[6]
[6]
[1]
[8]
[6]]
[[1]
[6]
[6]
[3]
[6]
[8]
[1]
[8]]
[[1]
[6]
[6]
[3]
[8]
[1]]
[[1]
[6]
[3]
[6]
[1]
[4]
[1]
[6]
[8]
[1]]
[[1]
[3]
[6]
[4]
[6]
[1]
[8]
[1]
[1]]
[[1]
[6]
[3]
[4]
[1]
[6]
[8]
[6]]
[[1]
[6]
[4]
[3]
[1]
[4]
[6]
[8]]
[[1]
[3]
[6]
[1]
[1]
[4]
[1]]
[[1]
[3]
[6]
[1]
[6]
[1]
[8]
[3]]
[[3]
[1]
[6]
[1]
[8]
[6]
[1]
[3]
[1]]
[[3]
[6]
[1]
[6]
[1]
[8]
[1]
[4]
[3]]
[[6]
[3]
[4]
[6]
[1]
[1]
[1]
[1]
[8]
[1]
[6]]
[[6]
[6]
[3]
[1]
[1]
[8]
[4]
[1]
[1]
[1]
[6]]
[[6]
[6]
[4]
[1]
[1]
[3]
[1]
[8]]
[[6]
[6]
[3]
[1]
[1]
[4]
[1]]
[[6]
[6]
[1]
[3]
[4]
[1]
[1]
[8]
[1]]
[[1]
[6]
[6]
[4]
[1]
[3]
[8]
[6]
[1]
[3]
[1]]
[[6]
[3]
[1]
[6]
[1]
[4]
[1]
[1]
[8]
[3]
[3]]
[[6]
[3]
[6]
[1]
[1]
[3]
[3]
[1]
[1]
[8]
[4]]
[[6]
[6]
[3]
[1]
[1]
[1]
[1]
[3]
[3]
[4]
[1]
[8]]
[[6]
[6]
[1]
[3]
[1]
[3]
[8]
[1]
[1]
[1]
[3]
[4]]
[[6]
[6]
[3]
[1]
[1]
[8]
[1]
[3]
[1]]
[[6]
[3]
[1]
[6]
[1]
[8]
[4]
[1]
[3]]
[[6]
[1]
[6]
[1]
[3]
[1]
[8]
[1]
[3]
[1]]
[[6]
[6]
[1]
[1]
[3]
[8]
[1]
[1]]
[[6]
[1]
[6]
[1]
[3]
[1]
[8]
[1]]
[[1]
[6]
[6]
[1]
[3]
[8]
[1]
[4]]
[[1]
[6]
[6]
[1]
[3]
[8]
[1]
[1]
[1]
[1]]
[[1]
[6]
[6]
[3]
[8]
[1]
[1]
[1]]
[[1]
[6]
[6]
[3]
[1]
[1]]
[[1]
[6]
[6]
[1]
[3]
[1]
[1]
[8]]
[[1]
[6]
[6]
[1]
[1]
[1]
[8]
[1]]
[[1]
[6]
[6]
[1]
[1]
[1]
[1]
[1]
[8]]
[[1]
[6]
[6]
[8]
[1]
[1]
[1]
[1]
[1]
[1]]
[[6]
[1]
[6]
[1]
[1]
[3]
[8]
[1]
[1]
[1]
[1]
[3]]
[[6]
[1]
[6]
[1]
[1]
[1]
[8]
[3]
[1]
[1]
[1]]
[[6]
[1]
[6]
[1]
[1]
[3]
[8]
[1]]
[[6]
[1]
[6]
[1]
[1]
[3]
[1]
[1]
[8]
[1]
[3]]
[[6]
[6]
[1]
[1]
[1]
[1]
[8]
[1]
[1]]
[[6]
[6]
[1]
[1]
[1]
[1]
[1]
[8]
[3]
[1]]
[[6]
[1]
[6]
[1]
[1]
[8]
[1]]
[[6]
[1]
[6]
[1]
[1]
[8]
[1]]
[[6]
[1]
[6]
[1]
[1]
[8]
[3]]
[[6]
[1]
[6]
[1]
[1]
[3]
[3]]
[[6]
[1]
[6]
[1]
[1]
[3]]
[[6]
[1]
[6]
[1]
[1]
[1]]
[[6]
[1]
[6]
[1]
[8]]
[[6]
[6]
[1]
[1]
[1]
[8]]
[[6]
[1]
[6]
[1]
[1]
[3]
[8]
[1]]
[[6]
[1]
[6]
[1]
[1]
[3]]
[[6]
[6]
[1]
[1]
[8]
[1]]
[[6]
[6]
[1]
[1]
[3]
[1]]
[[6]
[6]
[1]
[3]
[1]
[1]]
[[6]
[6]
[1]
[3]
[1]
[1]]
[[6]
[1]
[6]
[1]
[3]
[1]]
[[6]
[6]
[1]
[1]
[1]]
[[6]
[6]
[1]
[1]
[3]
[1]]
[[6]
[6]
[1]
[1]]
[[6]
[1]
[1]
[6]
[1]]
[[6]
[1]
[1]
[6]
[3]]
[[6]
[1]
[6]
[8]
[1]
[3]]
[[6]
[6]
[1]
[3]
[8]
[1]]
[[6]
[6]
[8]
[3]
[1]
[1]]
[[6]
[6]
[8]
[3]
[1]
[1]
[1]
[1]]
[[6]
[8]
[1]
[3]
[6]
[1]]
[[6]
[8]
[3]
[6]
[1]
[1]]
[[6]
[1]
[8]
[6]
[1]
[3]
[1]
[1]]
[[6]
[1]
[3]
[8]
[6]
[1]
[1]
[1]
[1]]
[[6]
[1]
[8]
[6]
[1]
[3]
[1]
[1]
[1]
[3]]
[[6]
[1]
[8]
[1]
[6]
[1]
[3]
[1]
[1]
[3]]
[[6]
[1]
[8]
[1]
[3]
[1]
[6]
[3]
[1]]
[[6]
[1]
[8]
[6]
[1]
[3]
[1]
[3]]
[[6]
[1]
[6]
[1]
[8]
[3]
[1]
[3]]
[[6]
[1]
[6]
[1]
[8]
[6]
[4]]
[[6]
[1]
[6]
[1]
[8]
[6]]
[[6]
[1]
[1]
[6]
[8]
[4]
[6]]
[[6]
[1]
[6]
[1]
[8]]
[[6]
[1]
[6]
[1]
[8]
[1]]
[[6]
[1]
[1]
[6]
[8]
[1]]
[[6]
[1]
[1]
[6]
[8]]
[[6]
[1]
[1]
[6]
[8]]
[[6]
[1]
[1]
[6]
[4]
[8]
[1]
[3]]
[[6]
[1]
[1]
[6]
[8]
[3]
[4]
[1]]
[[6]
[6]
[1]
[1]
[4]
[6]
[1]
[8]]
[[6]
[1]
[1]
[6]
[4]
[8]
[1]
[2]]
[[6]
[1]
[1]
[6]
[1]
[8]
[4]
[2]]
[[1]
[1]
[6]
[6]
[4]
[8]
[2]
[1]]
[[ 6]
[ 1]
[ 1]
[ 6]
[ 1]
[ 2]
[64]]
[[ 6]
[ 1]
[ 6]
[ 1]
[ 2]
[64]
[ 1]
[ 1]]
[[ 6]
[ 1]
[ 6]
[ 1]
[ 2]
[64]]
[[ 6]
[ 1]
[ 1]
[ 6]
[ 2]
[64]]
[[ 6]
[ 1]
[ 6]
[ 1]
[ 2]
[64]]
[[6]
[1]
[6]
[1]
[2]]
[[6]
[1]
[1]
[6]
[2]]
[[6]
[1]
[1]
[6]
[2]]
[[6]
[1]
[6]
[1]
[2]
[6]
[1]]
[[6]
[1]
[6]
[1]
[2]]
[[6]
[1]
[1]
[6]
[2]
[6]]
[[6]
[1]
[6]
[1]
[2]]
[[6]
[1]
[6]
[1]
[6]]
[[6]
[1]
[6]
[1]
[2]
[6]]
[[6]
[1]
[6]
[2]
[1]
[6]]
[[6]
[6]
[2]
[1]
[8]]
[[6]
[8]
[2]
[6]
[6]]
[[6]
[1]
[8]
[6]
[2]
[6]]
[[6]
[8]
[6]
[1]
[6]
[2]]
[[6]
[8]
[1]
[6]
[6]
[1]]
[[ 6]
[ 1]
[ 8]
[64]
[ 6]
[ 6]]
[[ 6]
[ 1]
[ 8]
[ 6]
[64]
[ 6]]
[[ 6]
[ 1]
[ 8]
[ 6]
[64]
[ 6]]
[[6]
[1]
[6]
[8]
[6]
[1]
[1]]
[[6]
[6]
[8]
[6]
[1]]
[[ 6]
[ 1]
[ 6]
[ 6]
[ 8]
[ 1]
[ 1]
[64]]
[[6]
[1]
[6]
[6]
[8]
[1]
[2]
[1]
[3]]
[[ 6]
[ 1]
[ 6]
[ 6]
[ 8]
[ 1]
[ 1]
[64]
[ 3]]
[[6]
[6]
[6]
[1]
[8]
[1]
[1]]
[[6]
[6]
[8]
[1]
[6]
[1]
[1]]
[[6]
[6]
[8]
[1]
[1]
[6]]
[[6]
[6]
[8]
[1]
[1]
[6]]
[[6]
[6]
[8]
[1]
[1]
[1]]
[[6]
[6]
[1]
[8]
[6]
[2]
[1]]
[[6]
[6]
[6]
[1]
[8]
[1]
[3]
[2]
[1]]
[[6]
[6]
[6]
[8]
[1]
[1]
[3]
[6]
[1]]
[[6]
[6]
[6]
[8]
[1]
[3]
[6]]
[[6]
[6]
[6]
[8]
[1]
[1]
[6]]
[[6]
[6]
[6]
[1]
[8]
[1]
[3]
[6]]
[[6]
[6]
[6]
[1]
[6]
[3]
[8]
[1]
[1]]
[[6]
[6]
[6]
[1]
[8]
[6]
[1]]
[[6]
[6]
[1]
[6]
[8]
[1]
[6]
[1]]
[[6]
[6]
[1]
[6]
[8]
[1]
[6]
[1]]
[[6]
[6]
[6]
[1]
[8]
[1]
[6]
[3]
[1]]
[[6]
[6]
[6]
[1]
[8]
[6]
[1]
[1]
[3]
[1]]
[[6]
[6]
[6]
[1]
[8]
[6]
[3]
[1]
[1]
[1]]
[[6]
[6]
[1]
[8]
[6]
[3]
[1]]
[[6]
[6]
[8]
[1]
[6]
[3]
[1]
[1]]
[[6]
[6]
[8]
[1]
[6]
[1]
[1]
[1]]
[[6]
[6]
[8]
[1]
[1]
[6]
[3]
[1]]
[[6]
[6]
[8]
[1]
[6]
[1]
[1]
[3]]
###Markdown
WebCam demo
Run the same detector on live webcam frames, drawing a bounding box and class label for each detection until 'q' is pressed.
###Code
cap = cv2.VideoCapture(1)           # try an external camera first
if not cap.isOpened():
    cap = cv2.VideoCapture(0)       # fall back to the built-in webcam
if not cap.isOpened():
    raise IOError("Can't open webcam")

font_scale = 3
font = cv2.FONT_HERSHEY_PLAIN

while True:
    ret, frame = cap.read()
    if not ret:                     # stop if the camera returns no frame
        break
    ClassIndex, confidence, bbox = model.detect(frame, confThreshold=0.55)
    print(ClassIndex)
    if len(ClassIndex) != 0:
        for ClassInd, conf, boxes in zip(ClassIndex.flatten(), confidence.flatten(), bbox):
            if ClassInd < 80:       # keep only indices covered by the class label list
                cv2.rectangle(frame, boxes, (255, 0, 0), 2)
                cv2.putText(frame, classlabels[ClassInd - 1], (boxes[0] + 10, boxes[1] + 40),
                            font, fontScale=font_scale, color=(0, 255, 0), thickness=3)
    cv2.imshow('Object Detection Tutorial', frame)
    if cv2.waitKey(2) & 0xFF == ord('q'):   # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
###Output
[[1]]
[[1]]
[[1]]
[[1]]
... [output truncated: the stream prints [[1]] for most frames, occasionally joined by other class indices such as 28, 41, 42, 73, 75, 77, 84 and 87, until the loop is stopped with 'q']
|
examples/ndanielsen/Yellowbrick in the Flower Garden.ipynb | ###Markdown
Using Yellow Brick to Explore and Model the Famous Iris Dataset Exploration Notebook by:Nathan DanielsenPrema Damodaran Review of the iris dataset
###Code
# read the iris data into a DataFrame
import pandas as pd
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
col_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
iris = pd.read_csv(url, header=None, names=col_names)
iris.head()
###Output
_____no_output_____
###Markdown
Terminology
- **150 observations** (n=150): each observation is one iris flower
- **4 features** (p=4): sepal length, sepal width, petal length, and petal width
- **Response**: iris species
- **Classification problem** since response is categorical

Lightly Preprocess the Dataset
###Code
# map each iris species to a number
iris['species_num'] = iris.species.map({'Iris-setosa':0, 'Iris-versicolor':1, 'Iris-virginica':2})
###Output
_____no_output_____
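###Markdown
As a quick sanity check on those numbers (a minimal added sketch that only assumes the `iris` frame built above), the shape and class counts can be printed directly:
###Code
# Verify the figures quoted in the Terminology section:
# 150 rows, the four measurement columns plus species/species_num, and three balanced classes
print(iris.shape)
print(iris.species.value_counts())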
###Markdown
Import the Good Stuff
###Code
import yellowbrick as yb
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (12, 8)
from yellowbrick.features.rankd import Rank2D
from yellowbrick.features.radviz import RadViz
from yellowbrick.features.pcoords import ParallelCoordinates
###Output
_____no_output_____
###Markdown
Feature Exploration with RadViz
###Code
# Specify the features of interest and the classes of the target
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
visualizer = RadViz(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show()               # Draw/show the data
###Output
_____no_output_____
###Markdown
Setosas tend to have the largest sepal width. This could be a great predictor.
Then, let's remove setosa from the training set and see if we can find any differentiation between versicolor and virginica.
Remove Setosa from the training set
###Code
# Specify the features of interest and the classes of the target
iris_subset = iris[iris.species_num!=0]
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa','Iris-versicolor', 'Iris-virginica'] # but have to leave in more than two classes
# Extract the numpy arrays from the data frame
X = iris_subset[features].as_matrix()
y = iris_subset.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
visualizer = RadViz(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show()               # Draw/show the data
###Output
_____no_output_____
###Markdown
Try the Covariance Visualizer
###Code
# Specify the features of interest and the classes of the target
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show()               # Draw/show the data
###Output
_____no_output_____
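###Markdown
As an aside before the note below: raw covariance values are unbounded and depend on each feature's units, so a common alternative is to rank the same feature pairs by Pearson correlation, which always sits on a fixed -1 to 1 scale. A minimal sketch, reusing the `X`, `y` and `features` defined above:
###Code
# Same Rank2D grid, but ranked by Pearson correlation instead of raw covariance
visualizer = Rank2D(features=features, algorithm='pearson')
visualizer.fit(X, y)
visualizer.transform(X)
visualizer.show()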
###Markdown
This covariance chart is not interpretable as the cells don't have labels. Also, there shouldn't be half numbers in the labels.
More Feature Exploration: Look at Parallel Coordinates for all Species
###Code
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.values
assert y.shape[0] == X.shape[0]
visualizer = ParallelCoordinates(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show()               # Draw/show the data
###Output
_____no_output_____
###Markdown
This clearly demonstrates the separation between the classes - especially along petal_length and petal_width. One concern is that this demonstrated separation might be obscured by the differing scales of the features, adding noise to the interpretation.
Feature Exploration: ParallelCoordinates with Scaling
###Code
from sklearn import preprocessing
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
X_scaled = preprocessing.scale(X)
y = iris.species_num.values
assert y.shape[0] == X.shape[0]
visualizer = ParallelCoordinates(classes=classes, features=features)
visualizer.fit(X_scaled, y) # Fit the data to the visualizer
visualizer.transform(X_scaled) # Transform the scaled data
visualizer.show()
###Output
_____no_output_____
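###Markdown
A small added sketch (not part of the original notebook): min-max scaling is another way to put the features on a comparable range before drawing parallel coordinates. It assumes the iris frame, features, classes, and the ParallelCoordinates import from the cells above.
###Code
from sklearn.preprocessing import MinMaxScaler
# Rescale each feature to the [0, 1] range instead of z-scoring it
X_minmax = MinMaxScaler().fit_transform(iris[features].values)
y = iris.species_num.values
visualizer = ParallelCoordinates(classes=classes, features=features)
visualizer.fit(X_minmax, y) # Fit the min-max scaled data to the visualizer
visualizer.transform(X_minmax) # Transform the same scaled data
visualizer.show() # Draw/show the data
###Output
_____no_output_____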
###Markdown
The scaled dataset makes it easier to see the separation between classes for each of the features. *TODO - Add a scaling option to ParallelCoordinates and potentially other visualizers. Now that we have some features, let's evaluate classifiers. From the feature selection phase, we determined that petal_length and petal_width seem to have the best separation.
###Code
# Classifier Evaluation Imports
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport, ClassBalance
features = ['petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
assert y_train.shape[0] == X_train.shape[0]
assert y_test.shape[0] == X_test.shape[0]
# Instantiate the classification model and visualizer
bayes = GaussianNB()
visualizer = ClassificationReport(bayes, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.show() # Draw/show the data
visualizer
###Output
_____no_output_____
###Markdown
Note: There seems to be some sort of bug in the draw/fit methods. Let's try a multinomial naive Bayes, since the Gaussian one didn't work
###Code
# Classifier Evaluation Imports
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport, ClassBalance
features = ['petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
assert y_train.shape[0] == X_train.shape[0]
assert y_test.shape[0] == X_test.shape[0]
# Instantiate the classification model and visualizer
bayes = MultinomialNB()
visualizer = ClassificationReport(bayes)# classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.show() # Draw/show the data
###Output
/usr/local/var/pyenv/versions/3.5.2/envs/yellowbrick/lib/python3.5/site-packages/sklearn/metrics/classification.py:1113: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
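###Markdown
A hedged diagnostic sketch (not part of the original notebook): a confusion matrix makes it easier to see which class received no predictions and triggered the UndefinedMetricWarning above. It assumes X_train, X_test, y_train, y_test, and classes from the previous cell.
###Code
from yellowbrick.classifier import ConfusionMatrix
cm = ConfusionMatrix(MultinomialNB(), classes=classes)
cm.fit(X_train, y_train) # Fit the training data to the visualizer
cm.score(X_test, y_test) # Evaluate the model on the test data
cm.show() # Draw/show the confusion matrix
###Output
_____no_output_____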
###Markdown
Model Selection: Random Forest Classification
###Code
features = ['petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
assert y_train.shape[0] == X_train.shape[0]
assert y_test.shape[0] == X_test.shape[0]
test = pd.DataFrame(y_test, columns=['species'])
test.species.value_counts() # The test train split provides unbalanced classes
from sklearn.ensemble import RandomForestClassifier
from yellowbrick.classifier import ClassificationReport
# Instantiate the classification model and visualizer
forest = RandomForestClassifier()
visualizer = ClassBalance(forest, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.show() # Draw/show the data
###Output
_____no_output_____
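###Markdown
A hedged follow-up sketch (not part of the original notebook): score the same random forest with a ClassificationReport so the model-selection step also reports per-class precision, recall, and F1. It assumes the train/test split, classes, and imports from the cell above.
###Code
forest_report = ClassificationReport(RandomForestClassifier(), classes=classes)
forest_report.fit(X_train, y_train) # Fit the training data to the visualizer
forest_report.score(X_test, y_test) # Evaluate the model on the test data
forest_report.show() # Draw/show the report
###Output
_____no_output_____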
###Markdown
Using Yellow Brick to Explore and Model the Famous Iris Dataset Exploration Notebook by: Nathan Danielsen and Prema Damodaran Review of the iris dataset
###Code
# read the iris data into a DataFrame
import pandas as pd
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
col_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
iris = pd.read_csv(url, header=None, names=col_names)
iris.head()
###Output
_____no_output_____
###Markdown
Terminology- **150 observations** (n=150): each observation is one iris flower- **4 features** (p=4): sepal length, sepal width, petal length, and petal width- **Response**: iris species- **Classification problem** since response is categorical Lightly Preprocess the Dataset
###Code
# map each iris species to a number
iris['species_num'] = iris.species.map({'Iris-setosa':0, 'Iris-versicolor':1, 'Iris-virginica':2})
###Output
_____no_output_____
###Markdown
Import the Good Stuff
###Code
import yellowbrick as yb
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (12, 8)
from yellowbrick.features.rankd import Rank2D
from yellowbrick.features.radviz import RadViz
from yellowbrick.features.pcoords import ParallelCoordinates
###Output
_____no_output_____
###Markdown
Feature Exploration with RadViz
###Code
# Specify the features of interest and the classes of the target
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
visualizer = RadViz(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
###Output
_____no_output_____
###Markdown
Setosas tend to have the largest sepal width. This could be a great predictor. Next, let's remove setosa from the training set and see if we can find any differentiation between versicolor and virginica. Remove Setosa from the training set
###Code
# Specify the features of interest and the classes of the target
iris_subset = iris[iris.species_num!=0]
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa','Iris-versicolor', 'Iris-virginica'] # but have to leave in more than two classes
# Extract the numpy arrays from the data frame
X = iris_subset[features].as_matrix()
y = iris_subset.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
visualizer = RadViz(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
###Output
_____no_output_____
###Markdown
Try the Covariance Visualizer
###Code
# Specify the features of interest and the classes of the target
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
###Output
_____no_output_____
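###Markdown
A hedged variation (not part of the original notebook): rank the same feature pairs with Pearson correlation, which is bounded to [-1, 1] and is usually easier to read than raw covariance. It assumes X, y, and features from the cell above.
###Code
visualizer = Rank2D(features=features, algorithm='pearson')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
###Output
_____no_output_____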
###Markdown
This covariance chart is not interpretable because the axes don't have labels. Also, there shouldn't be half numbers in the labels. More Feature Exploration: Look at Parallel Coordinates for all Species
###Code
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.values
assert y.shape[0] == X.shape[0]
visualizer = ParallelCoordinates(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
###Output
_____no_output_____
###Markdown
This clearly demonstrates the separation between features - especially petal_length and petal_width. One concern is that this demonstration might be obscured by the scaling of the features, which can add noise to the interpretation. Feature Exploration: ParallelCoordinates with Scaling
###Code
from sklearn import preprocessing
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
X_scaled = preprocessing.scale(X)
y = iris.species_num.values
assert y.shape[0] == X.shape[0]
visualizer = ParallelCoordinates(classes=classes, features=features)
visualizer.fit(X_scaled, y) # Fit the data to the visualizer
visualizer.transform(X_scaled) # Transform the scaled data
visualizer.poof()
###Output
_____no_output_____
###Markdown
The scaled dataset makes it easier to see the separation between classes for each of the features. *TODO - Add a scaling option to ParallelCoordinates and potentially other visualizers. Now that we have some features, let's evaluate classifiers. From the feature selection phase, we determined that petal_length and petal_width seem to have the best separation.
###Code
# Classifier Evaluation Imports
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport, ClassBalance
features = ['petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
assert y_train.shape[0] == X_train.shape[0]
assert y_test.shape[0] == X_test.shape[0]
# Instantiate the classification model and visualizer
bayes = GaussianNB()
visualizer = ClassificationReport(bayes, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
visualizer
###Output
_____no_output_____
###Markdown
Note: There seems to be some sort of bug in the draw/fit methods. Let's try a multinomial naive Bayes, since the Gaussian one didn't work
###Code
# Classifier Evaluation Imports
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport, ClassBalance
features = ['petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
assert y_train.shape[0] == X_train.shape[0]
assert y_test.shape[0] == X_test.shape[0]
# Instantiate the classification model and visualizer
bayes = MultinomialNB()
visualizer = ClassificationReport(bayes)# classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
###Output
/usr/local/var/pyenv/versions/3.5.2/envs/yellowbrick/lib/python3.5/site-packages/sklearn/metrics/classification.py:1113: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
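###Markdown
A hedged cross-check (not part of the original notebook): printing sklearn's text classification report for the same fit shows which class has no predicted samples. It assumes the train/test split and the MultinomialNB import from the cell above.
###Code
from sklearn.metrics import classification_report
model = MultinomialNB().fit(X_train, y_train) # Refit the same model
print(classification_report(y_test, model.predict(X_test))) # Per-class precision/recall/F1
###Output
_____no_output_____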
###Markdown
Model Selection: Random Forest Classification
###Code
features = ['petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
assert y_train.shape[0] == X_train.shape[0]
assert y_test.shape[0] == X_test.shape[0]
test = pd.DataFrame(y_test, columns=['species'])
test.species.value_counts() # The test train split provides unbalanced classes
from sklearn.ensemble import RandomForestClassifier
from yellowbrick.classifier import ClassificationReport
# Instantiate the classification model and visualizer
forest = RandomForestClassifier()
visualizer = ClassBalance(forest, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
###Output
_____no_output_____ |
data/python2_iris.ipynb | ###Markdown
This file can be read when the code is run in a Python 2 Anaconda virtual environment.
###Code
import iris
# The import worked.
cube = iris.load("/Users/k-ikegami/Desktop/GRIB/weather/data/Z__C_RJTD_20170120090000_MSM_GPV_Rjp_L-pall_FH00-15_grib2.bin")
# The file saved successfully.
iris.save(cube, '/Users/k-ikegami/Desktop/GRIB/weather/data/test.grib2')
# The saved file can be loaded again.
cube = iris.load("/Users/k-ikegami/Desktop/GRIB/weather/data/test.grib2")
print cube
###Output
0: geopotential_height / (m) (latitude: 253; longitude: 241)
|
3_Lists.ipynb | ###Markdown
Lists
###Code
shopping_List = ['Milk', 'Cheese', 'Butter']
print('Milk' in shopping_List)
print(shopping_List[-2])
###Output
True
Cheese
###Markdown
[1] Update/Insert a List
###Code
myList = [1,2,3,4,5,6,7]
print(myList)
# Insert method
myList.insert(0, 0)
print(myList)
#extend method
newList = [8,8,8]
myList.extend(newList)
print(myList)
###Output
[1, 2, 3, 4, 5, 6, 7]
[0, 1, 2, 3, 4, 5, 6, 7]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 8, 8]
###Markdown
[2] Slice/Delete from a list
###Code
myList = ['a','b','c','d','e','f']
myList[0:2] = ['x','y']
print(myList[:])
# pop method
myList.pop()
print(myList)
myList.pop(1)
print(myList)
# delete
del myList[1:3]
print(myList)
# remove method
myList.remove('e')
print(myList)
###Output
['x', 'y', 'c', 'd', 'e', 'f']
['x', 'y', 'c', 'd', 'e']
['x', 'c', 'd', 'e']
['x', 'e']
['x']
###Markdown
[3] Searching for an element in a list
###Code
myList = [10,20,30,40,50,60,70,80,90]
find = 20
if find in myList:
print(f"there is {find} in the list")
else:
print(f"there is no {find} in the list")
###Output
there is 20 in the list
###Markdown
[4] List Operations/Functions
###Code
# '+'
a = [1, 2, 3]
b = [4, 5, 6]
c = a + b
print(c)
# '*'
a = a * 3
print(a)
# 'max'
print(max(c))
# 'sum'
print(sum(c)/len(c))
sumList = []
while True:
inp = input("Put the number(finishing:type 'done'): ")
if inp == 'done':
print(f"average value: {sum(sumList)/len(sumList)}")
break
sumList.append(float(inp))
###Output
Put the number(finishing:type 'done'): 123
Put the number(finishing:type 'done'): 123
Put the number(finishing:type 'done'): done
average value: 123.0
###Markdown
[5] Strings and Lists
###Code
# list function
a = 'spam spam spam'
b = list(a)
print(b)
# split method
c = a.split(' ')
print(c)
# join method
d = ' '.join(c)
print(d)
###Output
['s', 'p', 'a', 'm', ' ', 's', 'p', 'a', 'm', ' ', 's', 'p', 'a', 'm']
['spam', 'spam', 'spam']
spam spam spam
###Markdown
[6] Common List pitfalls and ways to avoid them
###Code
myList = [2,4,3,1,5,7]
orig = myList[:]
myList.sort()
print(orig)
print(myList)
###Output
[2, 4, 3, 1, 5, 7]
[1, 2, 3, 4, 5, 7]
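###Markdown
Another classic pitfall, shown as a small added example (not from the original lesson): plain assignment does not copy a list, it just creates a second name for the same object.
###Code
# Assignment aliases the list: both names point at the same object
a = [3, 1, 2]
b = a
b.sort()
print(a) # [1, 2, 3] -- a changed too, because b is a
# A slice copy (or list(a)) gives an independent list
c = a[:]
c.append(99)
print(a) # [1, 2, 3] -- unchanged
print(c) # [1, 2, 3, 99]
###Output
_____no_output_____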
###Markdown
Tuple Create Tuple
###Code
newTuple = 'a','b','c','d','e'
newTuple1 = tuple('abcde')
print(newTuple)
print(newTuple1)
###Output
('a', 'b', 'c', 'd', 'e')
('a', 'b', 'c', 'd', 'e')
###Markdown
Search for an element in Tuple
###Code
newTuple = ('a','b','c','d','e')
print('f' in newTuple)
def searchTuple(pTuple, element):
for i in pTuple:
if i == element:
return pTuple.index(i)
return 'The element does not exist'
print(searchTuple(newTuple, 'c'))
###Output
False
2
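###Markdown
A small added note (not from the original lesson): tuples already provide index() and count(), so the lookup above can also be done directly.
###Code
newTuple = ('a', 'b', 'c', 'd', 'e')
print(newTuple.index('c')) # 2 -- position of the element
print(newTuple.count('a')) # 1 -- number of occurrences
###Output
_____no_output_____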
###Markdown
Tuple Operations / Functions
###Code
myTuple = (1,4,3,2,5)
myTuple1 = (1,2,6,9,8,7)
# Concatenate
myTuple2 = myTuple + myTuple1
print(myTuple2)
###Output
(1, 4, 3, 2, 5, 1, 2, 6, 9, 8, 7)
|
PyCitySchools/.ipynb_checkpoints/PyCitySchools-checkpoint.ipynb | ###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"
# Read School and Student Data File and store into Pandas Data Frames
school_data = pd.read_csv(school_data_to_load)
student_data = pd.read_csv(student_data_to_load)
# Combine the data into a single dataset
school_data_complete = pd.merge(student_data, school_data, how="left", on=["school_name", "school_name"])
###Output
_____no_output_____
###Markdown
District Summary* Calculate the total number of schools* Calculate the total number of students* Calculate the total budget* Calculate the average math score * Calculate the average reading score* Calculate the overall passing rate (overall average score), i.e. (avg. math score + avg. reading score)/2* Calculate the percentage of students with a passing math score (70 or greater)* Calculate the percentage of students with a passing reading score (70 or greater)* Create a dataframe to hold the above results* Optional: give the displayed data cleaner formatting School Summary * Create an overview table that summarizes key metrics about each school, including: * School Name * School Type * Total Students * Total School Budget * Per Student Budget * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two) * Create a dataframe to hold the above results Top Performing Schools (By Passing Rate) * Sort and display the top five schools in overall passing rate Bottom Performing Schools (By Passing Rate) * Sort and display the five worst-performing schools Math Scores by Grade * Create a table that lists the average Reading Score for students of each grade level (9th, 10th, 11th, 12th) at each school. * Create a pandas series for each grade. Hint: use a conditional statement. * Group each series by school * Combine the series into a dataframe * Optional: give the displayed data cleaner formatting Reading Score by Grade * Perform the same operations as above for reading scores Scores by School Spending * Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following: * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two)
###Code
# Sample bins. Feel free to create your own bins.
spending_bins = [0, 585, 615, 645, 675]
group_names = ["<$585", "$585-615", "$615-645", "$645-675"]
###Output
_____no_output_____
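###Markdown
A minimal, self-contained sketch (not part of the original exercise) of how pd.cut would map per-student budgets into these spending bins. The budget values below are made-up illustration numbers, not data from the schools dataset.
###Code
import pandas as pd
# Hypothetical per-student budgets purely for illustration
example_budgets = pd.Series([560.0, 600.0, 628.0, 652.0], name="Per Student Budget")
print(pd.cut(example_budgets, spending_bins, labels=group_names))
###Output
_____no_output_____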
###Markdown
Scores by School Size * Perform the same operations as above, based on school size.
###Code
# Sample bins. Feel free to create your own bins.
size_bins = [0, 1000, 2000, 5000]
group_names = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"
# Read School and Student Data File and store into Pandas Data Frames
school_data = pd.read_csv(school_data_to_load)
student_data = pd.read_csv(student_data_to_load)
# Combine the data into a single dataset
school_data_complete = pd.merge(student_data, school_data, how="left", on=["school_name", "school_name"])
school_data_complete.head()
###Output
_____no_output_____
###Markdown
District Summary* Calculate the total number of schools* Calculate the total number of students* Calculate the total budget* Calculate the average math score * Calculate the average reading score* Calculate the overall passing rate (overall average score), i.e. (avg. math score + avg. reading score)/2* Calculate the percentage of students with a passing math score (70 or greater)* Calculate the percentage of students with a passing reading score (70 or greater)* Create a dataframe to hold the above results* Optional: give the displayed data cleaner formatting
###Code
#Group the schools
schoolDataGrouped = school_data_complete.groupby('school_name')
#Find the total number of schools
totalSchools = len(schoolDataGrouped)
#Find the total number of students by finding the lenght of the DataFrame
totalStudents = len(school_data_complete['student_name'])
#To find the total budget we can group by school name, apply first() on budget so we only take the first value for each school, and then sum the values.
totalBudget = schoolDataGrouped['budget'].first().sum()
#Overall - Average scores.
#Reading
avgReading = school_data_complete['reading_score'].mean()
#Math
avgMath = school_data_complete['math_score'].mean()
# % of students passing Math and Reading.
#math
count_studentsPassinMath = school_data_complete[school_data_complete['math_score'] >= 70]['math_score'].count()
pct_studentsPassinMath = (count_studentsPassinMath / totalStudents) * 100
#Reading
count_studentsPassinReading = school_data_complete[school_data_complete['reading_score'] >= 70]['reading_score'].count()
pct_studentsPassinReading = (count_studentsPassinReading / totalStudents) * 100
#Overall passing grade
overallGrade = ((avgReading + avgMath) / 2)
summary_dict = {'Total Schools' : totalSchools,
'Total Students' : totalStudents,
'Total Budget' : totalBudget,
'Average Math Score' : avgMath,
'Average Reading Score' : avgReading,
'% Passing Math' : pct_studentsPassinMath,
'% Passing Reading' : pct_studentsPassinReading,
'% Overall Passing Rate' : overallGrade
}
#Transform dictionary into a DataFrame and set the index to ' ' to get rid of the default index.
summary_df = pd.DataFrame(summary_dict, index = [''])
#Formatt.
summary_df['Total Students'] = summary_df['Total Students'].map("{:,.0f}".format)
summary_df['Total Budget'] = summary_df['Total Budget'].map("${:,.2f}".format)
summary_df['Average Math Score'] = summary_df['Average Math Score'].map("{:,.2f}%".format)
summary_df['Average Reading Score'] = summary_df['Average Reading Score'].map("{:,.2f}%".format)
summary_df['% Passing Math'] = summary_df['% Passing Math'].map("{:,.2f}%".format)
summary_df['% Passing Reading'] = summary_df['% Passing Reading'].map("{:,.2f}%".format)
summary_df['% Overall Passing Rate'] = summary_df['% Overall Passing Rate'].map("{:,.2f}%".format)
#Summary DataFrame
summary_df
###Output
_____no_output_____
###Markdown
School Summary * Create an overview table that summarizes key metrics about each school, including: * School Name * School Type * Total Students * Total School Budget * Per Student Budget * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two) * Create a dataframe to hold the above results Top Performing Schools (By Passing Rate) * Sort and display the top five schools in overall passing rate
###Code
topPerformingGroup = school_data_complete.groupby('school_name')
#School Type
topSchoolType = topPerformingGroup['type'].first()
#Find the total number of students by finding the length of the DataFrame
topTotalStudents = topPerformingGroup['student_name'].count()
#To find the total budget we can group by school name, apply first() on budget so we only take the first value for each school, and then sum the values.
topTotalBudget = topPerformingGroup['budget'].first()
#per student budget
topPerStudentBudget = topTotalBudget / topTotalStudents
#Overall - Average scores.
#Math
topAvgMath = topPerformingGroup['math_score'].mean()
#Reading
topAvgReading = topPerformingGroup['reading_score'].mean()
# % of students passing Math and Reading.
#math
#Filter the students with math scores of 70 or higher, group them by school, and count student names to get the number of students passing math.
topCount_studentsPassinMath = school_data_complete[school_data_complete['math_score'] >= 70].groupby('school_name')['student_name'].count()
topPct_studentsPassinMath = (topCount_studentsPassinMath / topTotalStudents) * 100
#Reading
topCount_studentsPassinReading = school_data_complete[school_data_complete['reading_score'] >= 70].groupby('school_name')['student_name'].count()
topPct_studentsPassinReading = (topCount_studentsPassinReading / topTotalStudents) * 100
#Overall passing grade --> from average scores of % of students passing Math and Reading.
topOverallGrade = ((topPct_studentsPassinReading + topPct_studentsPassinMath) / 2)
topSummary_dict = {'School Type' : topSchoolType,
'Total Students' : topTotalStudents,
'Total Budget' : topTotalBudget,
'Per Student Budget' : topPerStudentBudget,
'Average Math Score' : topAvgMath,
'Average Reading Score' : topAvgReading,
'% Passing Math' : topPct_studentsPassinMath,
'% Passing Reading' : topPct_studentsPassinReading,
'% Overall Passing Rate' : topOverallGrade
}
topSummary_df = pd.DataFrame(topSummary_dict).sort_values('% Overall Passing Rate',ascending = False)
#Format
topSummary_df['Total Students'] = topSummary_df['Total Students'].map("{:,.0f}".format)
topSummary_df['Total Budget'] = topSummary_df['Total Budget'].map("${:,.2f}".format)
topSummary_df['Per Student Budget'] = topSummary_df['Per Student Budget'].map("${:,.2f}".format)
topSummary_df['Average Math Score'] = topSummary_df['Average Math Score'].map("{:,.2f}%".format)
topSummary_df['Average Reading Score'] = topSummary_df['Average Reading Score'].map("{:,.2f}%".format)
topSummary_df['% Passing Math'] = topSummary_df['% Passing Math'].map("{:,.2f}%".format)
topSummary_df['% Passing Reading'] = topSummary_df['% Passing Reading'].map("{:,.2f}%".format)
topSummary_df['% Overall Passing Rate'] = topSummary_df['% Overall Passing Rate'].map("{:,.2f}%".format)
topSummary_df.head()
###Output
_____no_output_____
###Markdown
Bottom Performing Schools (By Passing Rate) * Sort and display the five worst-performing schools
###Code
worstSummary_df = topSummary_df.sort_values('% Overall Passing Rate')
worstSummary_df.head()
###Output
_____no_output_____
###Markdown
Math Scores by Grade * Create a table that lists the average Reading Score for students of each grade level (9th, 10th, 11th, 12th) at each school. * Create a pandas series for each grade. Hint: use a conditional statement. * Group each series by school * Combine the series into a dataframe * Optional: give the displayed data cleaner formatting
###Code
#Math
#Take the original data frame, filter the rows for each grade and select all the columns, then group by the school name and take the mean of the math score.
grade9 = school_data_complete.loc[school_data_complete['grade'] == '9th', :].groupby('school_name')['math_score'].mean()
grade10 = school_data_complete.loc[school_data_complete['grade'] == '10th', :].groupby('school_name')['math_score'].mean()
grade11 = school_data_complete.loc[school_data_complete['grade'] == '11th', :].groupby('school_name')['math_score'].mean()
grade12 = school_data_complete.loc[school_data_complete['grade'] == '12th', :].groupby('school_name')['math_score'].mean()
grade_dict = {'9th' : grade9,
'10th' : grade10,
'11th' : grade11,
'12th' : grade12
}
grade_df = pd.DataFrame(grade_dict)
#Format.
grade_df['9th'] = grade_df['9th'].map("{:,.2f}%".format)
grade_df['10th'] = grade_df['10th'].map("{:,.2f}%".format)
grade_df['11th'] = grade_df['11th'].map("{:,.2f}%".format)
grade_df['12th'] = grade_df['12th'].map("{:,.2f}%".format)
grade_df
###Output
_____no_output_____
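###Markdown
A hedged alternative sketch (not part of the original solution): the same grade-by-school table can be produced in one call with pivot_table. It assumes the school_data_complete frame defined earlier.
###Code
math_by_grade_pivot = school_data_complete.pivot_table(index='school_name', columns='grade', values='math_score', aggfunc='mean')
# Reorder the grade columns, which otherwise sort alphabetically
math_by_grade_pivot = math_by_grade_pivot[['9th', '10th', '11th', '12th']]
math_by_grade_pivot.head(15)
###Output
_____no_output_____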
###Markdown
Reading Score by Grade * Perform the same operations as above for reading scores
###Code
#Reading
#Take the original data frame, filter the rows for each grade and select all the columns, then group by the school name and take the mean of the reading score.
Rgrade9 = school_data_complete.loc[school_data_complete['grade'] == '9th', :].groupby('school_name')['reading_score'].mean()
Rgrade10 = school_data_complete.loc[school_data_complete['grade'] == '10th', :].groupby('school_name')['reading_score'].mean()
Rgrade11 = school_data_complete.loc[school_data_complete['grade'] == '11th', :].groupby('school_name')['reading_score'].mean()
Rgrade12 = school_data_complete.loc[school_data_complete['grade'] == '12th', :].groupby('school_name')['reading_score'].mean()
rGrade_dict = {'9th' : Rgrade9,
'10th' : Rgrade10,
'11th' : Rgrade11,
'12th' : Rgrade12
}
rGrade_df = pd.DataFrame(rGrade_dict)
#Format.
rGrade_df['9th'] = rGrade_df['9th'].map("{:,.2f}%".format)
rGrade_df['10th'] = rGrade_df['10th'].map("{:,.2f}%".format)
rGrade_df['11th'] = rGrade_df['11th'].map("{:,.2f}%".format)
rGrade_df['12th'] = rGrade_df['12th'].map("{:,.2f}%".format)
rGrade_df
###Output
_____no_output_____
###Markdown
Scores by School Spending * Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following: * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two)
###Code
# Sample bins. Feel free to create your own bins.
spending_bins = [0, 584, 614, 644, 675] #<-------change bins
group_names = ["<$585", "$585-615", "$615-645", "$645-675"]
scoresSchool = pd.DataFrame(topSummary_dict) #Create a new DataFrame wihtout formating
scoresSchool['Spending Ranges (Per Student)'] = pd.cut(scoresSchool['Per Student Budget'], spending_bins, labels = group_names)
groupedScores = scoresSchool.groupby('Spending Ranges (Per Student)')
scoresAvgMath = groupedScores['Average Math Score'].mean()
scoresAvgReading = groupedScores['Average Reading Score'].mean()
scorePctMath = groupedScores['% Passing Math'].mean()
scorePctReading = groupedScores['% Passing Reading'].mean()
scorePctOverllPassing = groupedScores['% Overall Passing Rate'].mean()
scores_dict = {'Average Math Score' : scoresAvgMath,
'Average Reading Score' : scoresAvgReading,
'% Passing Math' : scorePctMath,
'% Passing Reading' : scorePctReading,
'% Overall Passing Rate' : scorePctOverllPassing}
scores_df = pd.DataFrame(scores_dict)
#Format
scores_df['Average Math Score'] = scores_df['Average Math Score'].map("{:,.2f}".format)
scores_df['Average Reading Score'] = scores_df['Average Reading Score'].map("{:,.2f}".format)
scores_df['% Passing Math'] = scores_df['% Passing Math'].map("{:,.2f}%".format)
scores_df['% Passing Reading'] = scores_df['% Passing Reading'].map("{:,.2f}%".format)
scores_df['% Overall Passing Rate'] = scores_df['% Overall Passing Rate'].map("{:,.2f}%".format)
scores_df
###Output
_____no_output_____
###Markdown
Scores by School Size * Perform the same operations as above, based on school size.
###Code
# Sample bins. Feel free to create your own bins.
size_bins = [0, 999, 1999, 5000]
group_names = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
schoolSizeScore = pd.DataFrame(topSummary_dict)
schoolSizeScore['School Size'] = pd.cut(schoolSizeScore['Total Students'], size_bins, labels = group_names)
groupedSchoolSize = schoolSizeScore.groupby('School Size')
sizeMathScore = groupedSchoolSize['Average Math Score'].mean()
sizeReadingScore = groupedSchoolSize['Average Reading Score'].mean()
sizePctMath = groupedSchoolSize['% Passing Math'].mean()
sizePctReading = groupedSchoolSize['% Passing Reading'].mean()
sizePctOverllPassing = groupedSchoolSize['% Overall Passing Rate'].mean()
size_dict = {'Average Math Score' : sizeMathScore,
'Average Reading Score' : sizeReadingScore,
'% Passing Math' : sizePctMath,
'% Passing Reading' : sizePctReading,
'% Overall Passing Rate' : sizePctOverllPassing}
size_df = pd.DataFrame(size_dict)
#Format
size_df['Average Math Score'] = size_df['Average Math Score'].map("{:,.2f}".format)
size_df['Average Reading Score'] = size_df['Average Reading Score'].map("{:,.2f}".format)
size_df['% Passing Math'] = size_df['% Passing Math'].map("{:,.2f}%".format)
size_df['% Passing Reading'] = size_df['% Passing Reading'].map("{:,.2f}%".format)
size_df['% Overall Passing Rate'] = size_df['% Overall Passing Rate'].map("{:,.2f}%".format)
size_df
###Output
_____no_output_____
###Markdown
Scores by School Type * Perform the same operations as above, based on school type.
###Code
groupedSchoolType = schoolSizeScore.groupby('School Type')
typeMathScore = groupedSchoolType['Average Math Score'].mean()
typeReadingScore = groupedSchoolType['Average Reading Score'].mean()
typePctMath = groupedSchoolType['% Passing Math'].mean()
typePctReading = groupedSchoolType['% Passing Reading'].mean()
typePctOverllPassing = groupedSchoolType['% Overall Passing Rate'].mean()
type_dict = {'Average Math Score' : typeMathScore,
'Average Reading Score' : typeReadingScore,
'% Passing Math' : typePctMath,
'% Passing Reading' : typePctReading,
'% Overall Passing Rate' : typePctOverllPassing}
type_df = pd.DataFrame(type_dict)
#Format
type_df['Average Math Score'] = type_df['Average Math Score'].map("{:,.2f}".format)
type_df['Average Reading Score'] = type_df['Average Reading Score'].map("{:,.2f}".format)
type_df['% Passing Math'] = type_df['% Passing Math'].map("{:,.2f}%".format)
type_df['% Passing Reading'] = type_df['% Passing Reading'].map("{:,.2f}%".format)
type_df['% Overall Passing Rate'] = type_df['% Overall Passing Rate'].map("{:,.2f}%".format)
type_df
###Output
_____no_output_____
###Markdown
District Summary* Calculate the total number of schools* Calculate the total number of students* Calculate the total budget* Calculate the average math score * Calculate the average reading score* Calculate the overall passing rate (overall average score), i.e. (avg. math score + avg. reading score)/2* Calculate the percentage of students with a passing math score (70 or greater)* Calculate the percentage of students with a passing reading score (70 or greater)* Create a dataframe to hold the above results* Optional: give the displayed data cleaner formatting
###Code
district_true = school_data_complete['type'] == 'District'
district_data = school_data_complete[district_true]
district_data.head()
# Make new dataframe and populate it with corresponding values
district_summary= pd.DataFrame([0])
# Calculate the total number of schools
# Calculate the total number of students
district_summary["Number of Schools"] = len(district_data['School ID'].value_counts())
district_summary["Number of Students"] = district_data['Student ID'].count()
# Calculate the total budget
budget_vals = district_data['budget'].unique()
district_summary["Total Budget"] = budget_vals.sum()
# Calculate the average math score
math_score = district_data["math_score"]
district_summary["Average Math Score"] = math_score.mean()
# Calculate the average reading score
reading_score = district_data["reading_score"]
district_summary["Average Reading Score"] = reading_score.mean()
# Calculate the overall passing rate (overall average score), i.e. (avg. math score + avg. reading score)/2
district_summary["Overall Average Score"] = (reading_score + math_score)/2
# Calculate the percentage of students with a passing math score (70 or greater)
math_score = district_data["math_score"]
district_summary["% Passing Math"] = (math_score >= 70).mean() * 100
# Calculate the percentage of students with a passing reading score (70 or greater)
passing_reading_score = district_data["reading_score"]
district_summary["% Passing Reading"] = (passing_reading_score >= 70).mean() * 100
district_summary = district_summary.drop([0], axis=1)
district_summary
###Output
_____no_output_____
###Markdown
School Summary* Create an overview table that summarizes key metrics about each school, including: * School Name * School Type * Total Students * Total School Budget * Per Student Budget * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two) * Create a dataframe to hold the above results
###Code
# Create an overview table that summarizes key metrics about each school, including:
schools_summary= school_data_complete.drop(columns=['Student ID','student_name', 'gender', 'grade', 'School ID'])
schools_summary = schools_summary.groupby(['school_name', 'type']).mean()
schools_summary = schools_summary.reset_index(drop=False)
schools_summary = schools_summary.set_index('school_name')
# Total Students
# Total School Budget
# Per Student Budget
# Average Reading Score
schools_summary = schools_summary.rename(columns={"type": "School Type", "reading_score" : "Average Reading Score", "math_score"
: "Average Math Score", "size": "Total Students", "budget": "Total School Budget"})
budget = schools_summary['Total School Budget'].values
students = schools_summary['Total Students'].values
schools_summary['Per Student Budget'] = budget/students
# % Passing Math
schools_summary2 = school_data_complete
passing_math = school_data_complete.loc[schools_summary2['math_score']>69,:]
passing_math = passing_math.groupby('school_name').math_score.count().reset_index()
passing_math = passing_math.rename(columns={"math_score":"% Passing Math"})
# Merge the two dataframes
schools_summary = passing_math.merge(schools_summary, on="school_name")
schools_summary['% Passing Math'] = (schools_summary['% Passing Math'] / schools_summary['Total Students']) * 100
# % Passing Reading
schools_summary2 = school_data_complete
passing_reading = school_data_complete.loc[schools_summary2['reading_score']>69,:]
passing_reading = passing_reading.groupby('school_name').reading_score.count().reset_index()
passing_reading = passing_reading.rename(columns={"reading_score":"% Passing Reading"})
schools_summary = passing_reading.merge(schools_summary, on="school_name")
schools_summary['% Passing Reading'] = (schools_summary['% Passing Reading'] / schools_summary['Total Students']) * 100
# Overall Passing Rate (Average of the above two)
schools_summary['% Overall Passing'] = (schools_summary['% Passing Math'] + schools_summary['% Passing Reading']) / 2
schools_summary = schools_summary.set_index('school_name')
schools_summary = schools_summary.rename_axis("")
schools_summary
###Output
_____no_output_____
###Markdown
Top Performing Schools (By Passing Rate)* Sort and display the top five schools in overall passing rate
###Code
top_schools = schools_summary.sort_values(by='% Overall Passing', ascending=False).head()
top_schools = top_schools.rename_axis("")
top_schools
###Output
_____no_output_____
###Markdown
Bottom Performing Schools (By Passing Rate) * Sort and display the five worst-performing schools
###Code
bottom_schools = schools_summary.sort_values(by='% Overall Passing', ascending=True).head()
bottom_schools = bottom_schools.rename_axis("")
bottom_schools
###Output
_____no_output_____
###Markdown
Math Scores By Grade * Create a table that lists the average Math Score for students of each grade level (9th, 10th, 11th, 12th) at each school. * Create a pandas series for each grade. Hint: use a conditional statement. * Group each series by school * Combine the series into a dataframe * Optional: give the displayed data cleaner formatting
###Code
# Create a table that displays each school's math grade by grade level
math_scores_by_grade = school_data_complete.drop(columns=['Student ID','student_name', 'gender', 'School ID', 'size', 'budget', 'reading_score'])
# Find averages
math_scores_by_grade = math_scores_by_grade.groupby(['school_name', 'grade']).mean()
# Reset index to make it more clear
math_scores_by_grade = math_scores_by_grade.reset_index(drop=False)
math_scores_by_grade = math_scores_by_grade.set_index('school_name')
# Pivot table to display grade index as columns
math_scores_by_grade = math_scores_by_grade.pivot(columns='grade', values='math_score')
math_scores_by_grade = math_scores_by_grade.rename_axis("", axis=0)
math_scores_by_grade = math_scores_by_grade.rename_axis("", axis=1)
math_scores_by_grade
###Output
_____no_output_____
###Markdown
Reading Score by Grade * Perform the same operations as above for reading scores
###Code
# Create a table that displays each school's reading grade by grade level
reading_scores_by_grade = school_data_complete.drop(columns=['Student ID','student_name', 'gender', 'School ID', 'size', 'budget', 'math_score'])
# Find averages
reading_scores_by_grade = reading_scores_by_grade.groupby(['school_name', 'grade']).mean()
# Reset index to make it more clear
reading_scores_by_grade = reading_scores_by_grade.reset_index(drop=False)
reading_scores_by_grade = reading_scores_by_grade.set_index('school_name')
# Pivot table to display grade index as columns
reading_scores_by_grade = reading_scores_by_grade.pivot(columns='grade', values='reading_score')
reading_scores_by_grade = reading_scores_by_grade.rename_axis("", axis=0)
reading_scores_by_grade = reading_scores_by_grade.rename_axis("", axis=1)
reading_scores_by_grade
###Output
_____no_output_____
###Markdown
Scores by School Spending* Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following: * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two)
###Code
school_spending = schools_summary[['Average Math Score', 'Average Reading Score', '% Passing Reading', '% Passing Math', '% Overall Passing', 'Per Student Budget']]
# Sample bins. Feel free to create your own bins.
spending_bins = [0, 585, 615, 645, 675]
group_names = ["<$585", "$585-615", "$615-645", "$645-675"]
school_spending["Spending Ranges (Per Student)"] = pd.cut(school_spending["Per Student Budget"], spending_bins, labels=group_names)
school_spending = school_spending.drop(columns=['Per Student Budget'])
school_spending = school_spending.groupby(school_spending["Spending Ranges (Per Student)"], as_index=True)
# school_spending = school_spending.set_index('Spending Ranges (Per Student)').mean()
school_spending.mean()
###Output
C:\Users\megam\Anaconda3\envs\PythonData\lib\site-packages\ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""Entry point for launching an IPython kernel.
###Markdown
Scores by School Size* Perform the same operations as above, based on school size.
###Code
# Sample bins. Feel free to create your own bins.
size_bins = [0, 1000, 2000, 5000]
group_names = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
school_size = schools_summary[['Average Math Score', 'Average Reading Score', '% Passing Reading', '% Passing Math', '% Overall Passing', 'Total Students']]
school_size["Size"] = pd.cut(school_size["Total Students"], size_bins, labels=group_names)
school_size = school_size.drop(columns=['Total Students'])
school_size = school_size.groupby(school_size["Size"], as_index=True)
# school_size = school_size.set_index('Total Students').mean()
school_size.mean()
###Output
C:\Users\megam\Anaconda3\envs\PythonData\lib\site-packages\ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""Entry point for launching an IPython kernel.
###Markdown
Scores by School Type* Perform the same operations as above, based on school type.
###Code
schools_summary = schools_summary.rename_axis("school_name", axis=0)  # restore the index name so reset_index() produces a school_name column
schools_summary = schools_summary.reset_index()
school_type = schools_summary[['Average Math Score', 'Average Reading Score', '% Passing Reading', '% Passing Math', '% Overall Passing', 'school_name']]
school_type = school_type.merge(school_data_complete, on='school_name')
school_type["School Type"] = pd.cut(school_type["type"], type_bins, labels=group_names)
school_type = school_type.drop(columns=['type'])
school_type = school_type.groupby(school_type["School Type"], as_index=True)
# school_type = school_type.set_index('School Type').mean()
school_type.mean()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"
# Read School and Student Data File and store into Pandas DataFrames
school_data = pd.read_csv(school_data_to_load)
student_data = pd.read_csv(student_data_to_load)
# Combine the data into a single dataset.
school_data_complete = pd.merge(student_data, school_data, how="left", on=["school_name", "school_name"])
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"
# Read School and Student Data File and store into Pandas DataFrames
school_data = pd.read_csv(school_data_to_load)
student_data = pd.read_csv(student_data_to_load)
# Combine the data into a single dataset.
school_data_complete = pd.merge(student_data, school_data, how="left", on=["school_name", "school_name"])
school_data_complete.head()
###Output
_____no_output_____
###Markdown
District Summary* Calculate the total number of schools* Calculate the total number of students* Calculate the total budget* Calculate the average math score * Calculate the average reading score* Calculate the percentage of students with a passing math score (70 or greater)* Calculate the percentage of students with a passing reading score (70 or greater)* Calculate the percentage of students who passed math **and** reading (% Overall Passing)* Create a dataframe to hold the above results* Optional: give the displayed data cleaner formatting
###Code
#Calculate the total number of schools
num_of_schools = school_data['school_name'].count()
print(num_of_schools )
#Calculate the total number of students
num_of_students = student_data['Student ID'].count()
print(num_of_students)
#Calculate the total budget
total_budget = school_data['budget'].sum()
print(total_budget)
#Calculate the average math score
avg_math_score = school_data_complete['math_score'].mean()
print(avg_math_score)
#Calculate the average reading score
avg_reading_score = school_data_complete['reading_score'].mean()
print(avg_reading_score)
#Calculate the percentage of students with a passing math score (70 or greater)
pass_math = school_data_complete[(school_data_complete['math_score'] >= 70)].count() ['student_name']
print(pass_math)
math_percent = (pass_math / float(num_of_students))*100
print(math_percent)
#Calculate the percentage of students with a passing reading score (70 or greater)
pass_reading = school_data_complete[(school_data_complete['reading_score'] >= 70)].count() ['student_name']
print(pass_reading)
reading_percent = (pass_reading / float(num_of_students))*100
print(reading_percent)
#Calculate the percentage of students who passed math **and** reading (% Overall Passing)
pass_math_reading = school_data_complete[(school_data_complete['math_score'] >= 70) & (school_data_complete['reading_score'] >= 70) ].count() ['student_name']
print(pass_math_reading)
math_reading_percent = (pass_math_reading / float(num_of_students))*100
print(math_reading_percent)
#Create a dataframe to hold the above results
#Optional: give the displayed data cleaner formatting
district_summary = pd.DataFrame ({'total_schools': [num_of_schools],'total_students': [num_of_students],
'total_budget': [total_budget], 'avg_math_score': [avg_math_score],
'avg_reading_score': [avg_reading_score],'percentage_pass_math': [math_percent],
'percentage_pass_reading': [reading_percent], 'overall pass percent': [math_reading_percent]
})
district_summary['total_students'] = district_summary['total_students'].map("{:,}".format)
district_summary['total_budget'] = district_summary['total_budget'].map("${:,.2f}".format)
district_summary
###Output
_____no_output_____
###Markdown
School Summary * Create an overview table that summarizes key metrics about each school, including: * School Name * School Type * Total Students * Total School Budget * Per Student Budget * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * % Overall Passing (The percentage of students that passed math **and** reading.) * Create a dataframe to hold the above results
###Code
#School Summary - School name
school_summary = school_data_complete.groupby("school_name")
print(school_summary["school_name"].unique())
#school Type
school_type = school_data.set_index(["school_name"])['type']
print(school_type)
#Total number of students per school
total_students = school_data_complete.groupby(["school_name"]).count()['Student ID']
print(total_students)
#Total School Budget
total_school_budget = school_data_complete.groupby(["school_name"]).mean()['budget']
print(total_school_budget)
#Per Student Budget
per_student_budget = total_school_budget/total_students
print(per_student_budget)
#Average Math score and Passing Percecntage
avg_math_score_per_student = school_summary['math_score'].mean()
print(avg_math_score_per_student)
passing_math = school_data_complete[(school_data_complete['math_score'] >= 70)]
print(passing_math)
percent_passing_math = (passing_math.groupby(["school_name"]).count()['Student ID'] / total_students)*100
print(percent_passing_math)
#Average Reading score and Passing Percentage
avg_reading_score_per_student = school_summary['reading_score'].mean()
print(avg_reading_score_per_student)
passing_reading = school_data_complete[(school_data_complete['reading_score'] >= 70)]
print(passing_reading)
percent_passing_reading = (passing_reading.groupby(["school_name"]).count()['Student ID'] / total_students)*100
print(percent_passing_reading)
#Overall Passing Percentage
overall_passing = school_data_complete[(school_data_complete['math_score'] >= 70) & (school_data_complete['reading_score'] >= 70)]
print(overall_passing)
overall_passing_percent = (overall_passing.groupby(["school_name"]).count()['Student ID'] / total_students)*100
print(overall_passing_percent)
schools_summary = pd.DataFrame ({'School Type': school_type,'Total students': total_students,
'Total School Budget': total_school_budget,
'Per Student Budget': per_student_budget,
'Average Math Score': avg_math_score_per_student,
'Average Reading Score': avg_reading_score_per_student,
'% Passing Math': percent_passing_math,
'% Passing Reading': percent_passing_reading,
'% Overall Passing': overall_passing_percent
})
schools_summary['Total School Budget'] = schools_summary['Total School Budget'].map("${:,.2f}".format)
schools_summary['Per Student Budget'] = schools_summary['Per Student Budget'].map("${:.2f}".format)
schools_summary
###Output
_____no_output_____
###Markdown
Top Performing Schools (By % Overall Passing) * Sort and display the top five performing schools by % overall passing.
###Code
top_performing = schools_summary.sort_values("% Overall Passing", ascending = False)
top_performing.head()
###Output
_____no_output_____
###Markdown
Bottom Performing Schools (By % Overall Passing) * Sort and display the five worst-performing schools by % overall passing.
###Code
bottom_performing = schools_summary.sort_values("% Overall Passing")
bottom_performing.head()
###Output
_____no_output_____
###Markdown
Math Scores by Grade * Create a table that lists the average Reading Score for students of each grade level (9th, 10th, 11th, 12th) at each school. * Create a pandas series for each grade. Hint: use a conditional statement. * Group each series by school * Combine the series into a dataframe * Optional: give the displayed data cleaner formatting
###Code
ninth_grade_math = student_data.loc[student_data['grade'] == '9th'].groupby('school_name')["math_score"].mean()
tenth_grade_math = student_data.loc[student_data['grade'] == '10th'].groupby('school_name')["math_score"].mean()
eleventh_grade_math = student_data.loc[student_data['grade'] == '11th'].groupby('school_name')["math_score"].mean()
twelvth_grade_math = student_data.loc[student_data['grade'] == '12th'].groupby('school_name')["math_score"].mean()
math_scores_grade = pd.DataFrame({
"9th": ninth_grade_math,
"10th": tenth_grade_math,
"11th": eleventh_grade_math,
"12th": twelvth_grade_math
})
math_scores_grade.head(15)
###Output
_____no_output_____
###Markdown
Reading Score by Grade * Perform the same operations as above for reading scores
###Code
ninth_grade_reading = student_data.loc[student_data['grade'] == '9th'].groupby('school_name')["reading_score"].mean()
tenth_grade_reading = student_data.loc[student_data['grade'] == '10th'].groupby('school_name')["reading_score"].mean()
eleventh_grade_reading = student_data.loc[student_data['grade'] == '11th'].groupby('school_name')["reading_score"].mean()
twelvth_grade_reading = student_data.loc[student_data['grade'] == '12th'].groupby('school_name')["reading_score"].mean()
reading_scores_grade = pd.DataFrame({
"9th": ninth_grade_reading,
"10th": tenth_grade_reading,
"11th": eleventh_grade_reading,
"12th": twelvth_grade_reading
})
reading_scores_grade.head(15)
###Output
_____no_output_____
###Markdown
Scores by School Spending * Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following: * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two)
###Code
bins = [0,585,630,645,675]
group_names = ["< $585","$585 - $629","$630 - $644","$645 - $675"]
school_data_complete['Spending Ranges (Per Student)'] = pd.cut(school_data_complete['budget']/school_data_complete['size'], bins, labels = group_names)
score_by_budget = school_data_complete.groupby('Spending Ranges (Per Student)')
avg_math = score_by_budget['math_score'].mean()
avg_read = score_by_budget['reading_score'].mean()
pass_math = school_data_complete[school_data_complete['math_score'] >= 70].groupby('Spending Ranges (Per Student)')['Student ID'].count()/score_by_budget['Student ID'].count() * 100
pass_read = school_data_complete[school_data_complete['reading_score'] >= 70].groupby('Spending Ranges (Per Student)')['Student ID'].count()/score_by_budget['Student ID'].count() * 100
overall = school_data_complete[(school_data_complete['math_score'] >= 70) & (school_data_complete['reading_score'] >= 70)].groupby('Spending Ranges (Per Student)')['Student ID'].count()/score_by_budget['Student ID'].count() * 100
scores_by_budget = pd.DataFrame({
"Average Math Score": avg_math,
"Average Reading Score": avg_read,
"% Passing Math": pass_math,
"% Passing Reading": pass_read,
"% Overall Passing": overall
})
scores_by_budget['Average Math Score'] = scores_by_budget['Average Math Score'].map("{:,.2f}".format)
scores_by_budget['Average Reading Score'] = scores_by_budget['Average Reading Score'].map("{:,.2f}".format)
scores_by_budget['% Passing Math'] = scores_by_budget['% Passing Math'].map("{:,.2f}".format)
scores_by_budget['% Passing Reading'] = scores_by_budget['% Passing Reading'].map("{:,.2f}".format)
scores_by_budget['% Overall Passing'] = scores_by_budget['% Overall Passing'].map("{:,.2f}".format)
scores_by_budget
###Output
_____no_output_____
###Markdown
Scores by School Size * Perform the same operations as above, based on school size.
###Code
bins = [0, 1000, 2000, 5000]
group_names = ["Small(<1000)", "Medium (1000 - 2000)" , "Large (2000 - 5000)"]
school_data_complete['School Size'] = pd.cut(school_data_complete['size'], bins, labels = group_names)
score_by_size = school_data_complete.groupby('School Size')
avg_math = score_by_size['math_score'].mean()
avg_read = score_by_size['reading_score'].mean()
pass_math = school_data_complete[school_data_complete['math_score'] >= 70].groupby('School Size')['Student ID'].count()/score_by_size['Student ID'].count() * 100
pass_read = school_data_complete[school_data_complete['reading_score'] >= 70].groupby('School Size')['Student ID'].count()/score_by_size['Student ID'].count() * 100
overall = school_data_complete[(school_data_complete['math_score'] >= 70) & (school_data_complete['reading_score'] >= 70)].groupby('School Size')['Student ID'].count()/score_by_size['Student ID'].count() * 100
scores_by_size = pd.DataFrame({
"Average Math Score": avg_math,
"Average Reading Score": avg_read,
"% Passing Math": pass_math,
"% Passing Reading": pass_read,
"% Overall Passing ": overall
})
scores_by_size
###Output
_____no_output_____
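###Markdown
A quick check (added) of how many students fall into each size bin, assuming the `School Size` column created above. Note that `pd.cut` builds right-inclusive intervals by default, so a school of exactly 1,000 or 2,000 students lands in the smaller bin:
###Code
# Hypothetical sanity check of the size-bin boundaries
school_data_complete['School Size'].value_counts().sort_index()
###Output
_____no_output_____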
###Markdown
Scores by School Type * Perform the same operations as above, based on school type
###Code
score_by_type = school_data_complete.groupby('type')
avg_math = score_by_type['math_score'].mean()
avg_read = score_by_type['reading_score'].mean()
pass_math = school_data_complete[school_data_complete['math_score'] >= 70].groupby('type')['Student ID'].count()/score_by_type['Student ID'].count() * 100
pass_read = school_data_complete[school_data_complete['reading_score'] >= 70].groupby('type')['Student ID'].count()/score_by_type['Student ID'].count() * 100
overall = school_data_complete[(school_data_complete['math_score'] >= 70) & (school_data_complete['reading_score'] >= 70)].groupby('type')['Student ID'].count()/score_by_type['Student ID'].count() * 100
scores_by_type = pd.DataFrame({
"Average Math Score": avg_math,
"Average Reading Score": avg_read,
"% Passing Math": pass_math,
"% Passing Reading": pass_read,
"% Overall Passing": overall})
scores_by_type.index.names = ['School Type']
scores_by_type
###Output
_____no_output_____ |
messy_vs_clean_room.ipynb | ###Markdown
###Code
import keras
keras.__version__
import tensorflow as tf
print("GPU Available: ", tf.config.list_physical_devices('GPU'))
###Output
GPU Available: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
###Markdown
We download the data
###Code
FILEID='17BB2Ufj-9rTnT9cwZR8fu_sKKGdCXLxM'
FILENAME='train.zip'
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=$FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=$FILEID" -O $FILENAME && rm -rf /tmp/cookies.txt
ls -lh
!unzip -q train.zip -d kaggle_original_data
!rm -r kaggle_original_data/__MACOSX/
!ls -l kaggle_original_data | head
###Output
total 8
drwxr-xr-x 2 root root 4096 Jul 22 20:17 clean
drwxr-xr-x 2 root root 4096 Jul 22 20:17 messy
###Markdown
We are now going to sort the images into separate folders for the training, validation, and test splits.
###Code
from random import shuffle
import os, shutil
# list all labels
label_dirs = ['clean', 'messy']
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = '/content/kaggle_original_data'
# The directory where we will
# store our smaller dataset
base_dir = '/content/messy_vs_clean_room'
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
# Directory with our training/validation/test label pictures
for target_dir in [train_dir, validation_dir, test_dir]:
for label in label_dirs:
dir = os.path.join(target_dir, label)
os.mkdir(dir)
# Copy 70% of each label to train, 15% to valid, and 15% to test directories
for label in label_dirs:
fnames = os.listdir(os.path.join(original_dataset_dir, label))
shuffle(fnames) # shuffling the list
n_img_start_valid = int(len(fnames)*0.7)
n_img_start_test = int(len(fnames)*0.85)
for fname in fnames[:n_img_start_valid]: # train
src = os.path.join(original_dataset_dir, label, fname)
dst = os.path.join(train_dir, label, fname)
shutil.copyfile(src, dst)
for fname in fnames[n_img_start_valid:n_img_start_test]: # valid
src = os.path.join(original_dataset_dir, label, fname)
dst = os.path.join(validation_dir, label, fname)
shutil.copyfile(src, dst)
for fname in fnames[n_img_start_test:]: # test
src = os.path.join(original_dataset_dir, label, fname)
dst = os.path.join(test_dir, label, fname)
shutil.copyfile(src, dst)
###Output
_____no_output_____
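###Markdown
A robustness note (added): the `os.mkdir` calls above raise `FileExistsError` if this cell is re-run. A minimal re-runnable sketch, assuming the same `train_dir`, `validation_dir`, `test_dir`, and `label_dirs` as above:
###Code
# Hypothetical variant (not in the original): exist_ok=True tolerates repeated runs
import os

for split_dir in [train_dir, validation_dir, test_dir]:
    for label in label_dirs:
        os.makedirs(os.path.join(split_dir, label), exist_ok=True)
###Output
_____no_output_____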
###Markdown
As a sanity check, let's count how many pictures we have in each training split (train / validation / test):
###Code
total_train_imgs = 0
total_valid_imgs = 0
for label in label_dirs:
print('total images for label', label, 'in training:', len(os.listdir(os.path.join(train_dir, label))),
'in valid:', len(os.listdir(os.path.join(validation_dir, label))),
'in test:', len(os.listdir(os.path.join(test_dir, label))))
total_train_imgs += len(os.listdir(os.path.join(train_dir, label)))
total_valid_imgs += len(os.listdir(os.path.join(validation_dir, label)))
print('Total number of training images:', total_train_imgs)
print('Total number of validation images:', total_valid_imgs)
###Output
Total number of training images: 148
Total number of validation images: 32
###Markdown
Building our network: We've already built a small convnet for MNIST in the previous example, so you should be familiar with them. We will reuse the same general structure: our convnet will be a stack of alternated `Conv2D` (with `relu` activation) and `MaxPooling2D` layers. However, since we are dealing with bigger images and a more complex problem, we will make our network accordingly larger: it will have one more `Conv2D` + `MaxPooling2D` stage. This serves both to augment the capacity of the network, and to further reduce the size of the feature maps, so that they aren't overly large when we reach the `Flatten` layer. Here, since we start from inputs of size 150x150 (a somewhat arbitrary choice), we end up with feature maps of size 7x7 right before the `Flatten` layer. Note that the depth of the feature maps is progressively increasing in the network (from 32 to 128), while the size of the feature maps is decreasing (from 148x148 to 7x7). This is a pattern that you will see in almost all convnets. Since we are attacking a binary classification problem, we are ending the network with a single unit (a `Dense` layer of size 1) and a `sigmoid` activation. This unit will encode the probability that the network is looking at one class or the other. Original Model (4 Conv2D + MaxPooling2D layers)
###Code
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
from keras import optimizers
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
for data_batch, labels_batch in train_generator:
print('data batch shape:', data_batch.shape)
print('labels batch shape:', labels_batch.shape)
break
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
model.save('clean_and_messy_1.h5')
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
###Output
Found 32 images belonging to 2 classes.
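###Markdown
A note on the training loop (added): with only 148 training images and a batch size of 20, `steps_per_epoch=100` cycles through the data many times within each "epoch" (the generator simply loops). A minimal sketch of deriving the step counts from the generators instead, assuming `train_generator` and `validation_generator` from above:
###Code
# Hypothetical: compute steps from generator sizes rather than hard-coding them
import math

steps_per_epoch = math.ceil(train_generator.samples / train_generator.batch_size)
validation_steps = math.ceil(validation_generator.samples / validation_generator.batch_size)
print(steps_per_epoch, validation_steps)
###Output
_____no_output_____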
###Markdown
---
###Code
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
model.summary()
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
model.save('clean_and_messy_aug.h5')
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
---
###Code
# remove 2 layers
smaller_model = models.Sequential()
smaller_model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
smaller_model.add(layers.MaxPooling2D((2, 2)))
smaller_model.add(layers.Conv2D(32, (3, 3), activation='relu'))
smaller_model.add(layers.MaxPooling2D((2, 2)))
smaller_model.add(layers.Conv2D(32, (3, 3), activation='relu'))
smaller_model.add(layers.MaxPooling2D((2, 2)))
#smaller_model.add(layers.Conv2D(128, (3, 3), activation='relu'))
#smaller_model.add(layers.MaxPooling2D((2, 2)))
smaller_model.add(layers.Flatten())
smaller_model.add(layers.Dropout(0.5))
smaller_model.add(layers.Dense(64, activation='relu'))
smaller_model.add(layers.Dense(1, activation='sigmoid'))
smaller_model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
smaller_model.summary()
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = smaller_model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
smaller_model.save('clean_and_messy_smaller.h5')
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
test_loss, test_acc = smaller_model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
###Output
Found 32 images belonging to 2 classes.
test acc: 0.78125
###Markdown
---
###Code
from keras.applications import VGG16
conv_base = VGG16(weights='imagenet',
include_top=False,
input_shape=(150, 150, 3))
conv_base.summary()
from keras import models
from keras import layers
VGG16_model = models.Sequential()
VGG16_model.add(conv_base)
VGG16_model.add(layers.Flatten())
VGG16_model.add(layers.Dense(256, activation='relu'))
VGG16_model.add(layers.Dense(1, activation='sigmoid'))
VGG16_model.summary()
print('This is the number of trainable weights '
'before freezing the conv base:', len(VGG16_model.trainable_weights))
conv_base.trainable = False
print('This is the number of trainable weights '
'after freezing the conv base:', len(VGG16_model.trainable_weights))
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
VGG16_model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
history = VGG16_model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50,
verbose=2)
VGG16_model.save('clean_and_messy_VGG16.h5')
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
test_loss, test_acc = VGG16_model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
###Output
Found 32 images belonging to 2 classes.
test acc: 0.96875
###Markdown
---
###Code
conv_base.trainable = True
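# Added note: unfreeze the whole VGG16 base, then the loop below re-freezes every
# layer before 'block5_conv1' so that only the last convolutional block is fine-tuned.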
set_trainable = False
for layer in conv_base.layers:
if layer.name == 'block5_conv1':
set_trainable = True
if set_trainable:
layer.trainable = True
else:
layer.trainable = False
from keras import models
from keras import layers
VGG16_FT_model = models.Sequential()
VGG16_FT_model.add(conv_base)
VGG16_FT_model.add(layers.Flatten())
VGG16_FT_model.add(layers.Dense(256, activation='relu'))
VGG16_FT_model.add(layers.Dense(1, activation='sigmoid'))
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
VGG16_FT_model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-5),
metrics=['acc'])
history = VGG16_FT_model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
VGG16_FT_model.save('clean_and_messy_VGG16_FT.h5')
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
def smooth_curve(points, factor=0.8):
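# Added comment: exponential moving average -- each point is blended with the
# previous smoothed value (weight `factor`) to make the noisy curves easier to read.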
smoothed_points = []
for point in points:
if smoothed_points:
previous = smoothed_points[-1]
smoothed_points.append(previous * factor + point * (1 - factor))
else:
smoothed_points.append(point)
return smoothed_points
plt.plot(epochs,
smooth_curve(acc), 'bo', label='Smoothed training acc')
plt.plot(epochs,
smooth_curve(val_acc), 'b', label='Smoothed validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs,
smooth_curve(loss), 'bo', label='Smoothed training loss')
plt.plot(epochs,
smooth_curve(val_loss), 'b', label='Smoothed validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
test_loss, test_acc = VGG16_FT_model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
import numpy as np
ytest_dir = '/content/messy_vs_clean_room/ytest'
from keras.preprocessing.image import ImageDataGenerator
ytest_datagen = ImageDataGenerator(rescale=1./255)
ytest_generator = ytest_datagen.flow_from_directory(
ytest_dir,
target_size=(150, 150),
batch_size=1,
class_mode='binary')
pred = VGG16_model.predict_generator(ytest_generator, verbose=1)
predicted_class_indices = np.argmax(pred, axis=1)
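# Added note: `pred` comes from a single sigmoid unit, so it has shape (N, 1) and
# np.argmax(pred, axis=1) is always 0. A thresholded alternative (assumption, not in
# the original) would be: predicted_class_indices = (pred.ravel() > 0.5).astype(int)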
labels = (train_generator.class_indices)
label = dict((v,k) for k,v in labels.items())
# Map the predicted class indices back to their label names
predictions = [label[i] for i in predicted_class_indices]
predictions
#VGG16_model.predict_classes(ytest_generator, batch_size=len(ytest_generator), verbose=0)
#ytest_loss, ytest_acc = VGG16_model.evaluate_generator(ytest_generator, steps=50)
#print('ytest acc:', ytest_acc)

predict_dir = 'messy_vs_clean_room/predict'
ytest_generator = ytest_datagen.flow_from_directory(
ytest_dir,
target_size=(150, 150),
batch_size=1,
class_mode='binary')
test_loss, test_acc = VGG16_model.evaluate_generator(ytest_generator, steps=50)
print('test acc:', test_acc)
ls /content/messy_vs_clean_room/ytest
from PIL import Image
import numpy as np
from skimage import transform
def load(filename):
np_image = Image.open(filename)
np_image = np.array(np_image).astype('float32')/255
np_image = transform.resize(np_image, (150, 150, 3))
np_image = np.expand_dims(np_image, axis=0)
return np_image
image = load('/content/messy_vs_clean_room/ytest/Messy/Antes-y-después-de-cuartos-sucios-8.jpg')
VGG16_model.predict_classes(image)
import numpy as np
predictions = VGG16_model.predict_generator(ytest_generator)
y_pred = np.array([np.argmax(x) for x in predictions])
y_pred.astype('int32')
# Note that the validation data should not be augmented!
predict_datagen = ImageDataGenerator(rescale=1./255)
predict_generator = predict_datagen.flow_from_directory(
predict_dir,
target_size=(150, 150),
batch_size=2,
class_mode='binary')
predict_loss, predict_acc = VGG16_model.evaluate_generator(predict_generator, steps=1)
print('predict acc:', predict_acc)
predictions = VGG16_model.predict_generator(predict_generator)
predictions
###Output
Found 2 images belonging to 2 classes.
predict acc: 1.0
###Markdown
###Code
import keras
keras.__version__
import tensorflow as tf
print("GPU Available: ", tf.config.list_physical_devices('GPU'))
###Output
GPU Available: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
###Markdown
We download the data
###Code
FILEID='17BB2Ufj-9rTnT9cwZR8fu_sKKGdCXLxM'
FILENAME='train.zip'
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=$FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=$FILEID" -O $FILENAME && rm -rf /tmp/cookies.txt
ls -lh
!unzip -q train.zip -d kaggle_original_data
!rm -r kaggle_original_data/__MACOSX/
!ls -l kaggle_original_data | head
###Output
total 8
drwxr-xr-x 2 root root 4096 Jul 22 20:17 clean
drwxr-xr-x 2 root root 4096 Jul 22 20:17 messy
###Markdown
We are now going to sort the images into separate folders for the training, validation, and test splits.
###Code
from random import shuffle
import os, shutil
# list all labels
label_dirs = ['clean', 'messy']
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = '/content/kaggle_original_data'
# The directory where we will
# store our smaller dataset
base_dir = '/content/messy_vs_clean_room'
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
# Directory with our training/validation/test label pictures
for target_dir in [train_dir, validation_dir, test_dir]:
for label in label_dirs:
dir = os.path.join(target_dir, label)
os.mkdir(dir)
# Copy 70% of each label to train, 15% to valid, and 15% to test directories
for label in label_dirs:
fnames = os.listdir(os.path.join(original_dataset_dir, label))
shuffle(fnames) # shuffling the list
n_img_start_valid = int(len(fnames)*0.7)
n_img_start_test = int(len(fnames)*0.85)
for fname in fnames[:n_img_start_valid]: # train
src = os.path.join(original_dataset_dir, label, fname)
dst = os.path.join(train_dir, label, fname)
shutil.copyfile(src, dst)
for fname in fnames[n_img_start_valid:n_img_start_test]: # valid
src = os.path.join(original_dataset_dir, label, fname)
dst = os.path.join(validation_dir, label, fname)
shutil.copyfile(src, dst)
for fname in fnames[n_img_start_test:]: # test
src = os.path.join(original_dataset_dir, label, fname)
dst = os.path.join(test_dir, label, fname)
shutil.copyfile(src, dst)
###Output
_____no_output_____
###Markdown
We check how many images each split (train/validation/test) contains.
###Code
total_train_imgs = 0
total_valid_imgs = 0
for label in label_dirs:
print('total images for label', label, 'in training:', len(os.listdir(os.path.join(train_dir, label))),
'in valid:', len(os.listdir(os.path.join(validation_dir, label))),
'in test:', len(os.listdir(os.path.join(test_dir, label))))
total_train_imgs += len(os.listdir(os.path.join(train_dir, label)))
total_valid_imgs += len(os.listdir(os.path.join(validation_dir, label)))
print('Total number of training images:', total_train_imgs)
print('Total number of validation images:', total_valid_imgs)
###Output
Total number of training images: 148
Total number of validation images: 32
###Markdown
We build our model with Keras, using 4 Conv2D + MaxPooling2D blocks and 2 dense layers.
###Code
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
###Output
_____no_output_____
###Markdown
We compile the model with a loss function, an optimizer, and an accuracy metric.
###Code
from keras import optimizers
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
###Output
_____no_output_____
###Markdown
We rescale the images into the range [0, 1] with ImageDataGenerator, then build the training and validation generators.
###Code
from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
for data_batch, labels_batch in train_generator:
print('data batch shape:', data_batch.shape)
print('labels batch shape:', labels_batch.shape)
break
###Output
data batch shape: (20, 150, 150, 3)
labels batch shape: (20,)
###Markdown
Now we begin to train our model.
###Code
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
model.save('clean_and_messy_1.h5')
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
###Output
_____no_output_____
###Markdown
The result shows that the model has overfitting problems, so we use data augmentation to expand our dataset.
###Code
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
###Output
_____no_output_____
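###Markdown
To see what the augmentation actually produces, here is a minimal preview sketch (added; it assumes `datagen` and `train_dir` from above and that the 'clean' training folder is non-empty):
###Code
# Hypothetical preview of augmented variants of one training image
import os
import matplotlib.pyplot as plt
from keras.preprocessing import image

fname = os.path.join(train_dir, 'clean', os.listdir(os.path.join(train_dir, 'clean'))[0])
x = image.img_to_array(image.load_img(fname, target_size=(150, 150)))
x = x.reshape((1,) + x.shape)
plt.figure(figsize=(10, 3))
for i, batch in enumerate(datagen.flow(x, batch_size=1)):
    plt.subplot(1, 4, i + 1)
    plt.imshow(image.array_to_img(batch[0]))
    plt.axis('off')
    if i == 3:
        break
plt.show()
###Output
_____no_output_____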
###Markdown
We train a second model, this time with data augmentation and dropout to improve validation accuracy.
###Code
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
model.summary()
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
model.save('clean_and_messy_aug.h5')
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We also try a smaller model this time, removing one Conv2D + MaxPooling2D block and shrinking the dense layer.
###Code
# remove 2 layers
smaller_model = models.Sequential()
smaller_model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
smaller_model.add(layers.MaxPooling2D((2, 2)))
smaller_model.add(layers.Conv2D(32, (3, 3), activation='relu'))
smaller_model.add(layers.MaxPooling2D((2, 2)))
smaller_model.add(layers.Conv2D(32, (3, 3), activation='relu'))
smaller_model.add(layers.MaxPooling2D((2, 2)))
#smaller_model.add(layers.Conv2D(128, (3, 3), activation='relu'))
#smaller_model.add(layers.MaxPooling2D((2, 2)))
smaller_model.add(layers.Flatten())
smaller_model.add(layers.Dropout(0.5))
smaller_model.add(layers.Dense(64, activation='relu'))
smaller_model.add(layers.Dense(1, activation='sigmoid'))
smaller_model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
smaller_model.summary()
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = smaller_model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
smaller_model.save('clean_and_messy_smaller.h5')
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
test_loss, test_acc = smaller_model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
###Output
Found 32 images belonging to 2 classes.
test acc: 0.78125
###Markdown
We use a pretrained VGG16 convolutional base this time to improve the performance of our model.
###Code
from keras.applications import VGG16
conv_base = VGG16(weights='imagenet',
include_top=False,
input_shape=(150, 150, 3))
conv_base.summary()
from keras import models
from keras import layers
VGG16_model = models.Sequential()
VGG16_model.add(conv_base)
VGG16_model.add(layers.Flatten())
VGG16_model.add(layers.Dense(256, activation='relu'))
VGG16_model.add(layers.Dense(1, activation='sigmoid'))
VGG16_model.summary()
print('This is the number of trainable weights '
'before freezing the conv base:', len(VGG16_model.trainable_weights))
conv_base.trainable = False
print('This is the number of trainable weights '
'after freezing the conv base:', len(VGG16_model.trainable_weights))
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
VGG16_model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
history = VGG16_model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50,
verbose=2)
VGG16_model.save('clean_and_messy_VGG16.h5')
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
test_loss, test_acc = VGG16_model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
###Output
Found 32 images belonging to 2 classes.
test acc: 0.96875
###Markdown
---
###Code
conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:
if layer.name == 'block5_conv1':
set_trainable = True
if set_trainable:
layer.trainable = True
else:
layer.trainable = False
from keras import models
from keras import layers
VGG16_FT_model = models.Sequential()
VGG16_FT_model.add(conv_base)
VGG16_FT_model.add(layers.Flatten())
VGG16_FT_model.add(layers.Dense(256, activation='relu'))
VGG16_FT_model.add(layers.Dense(1, activation='sigmoid'))
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
VGG16_FT_model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-5),
metrics=['acc'])
history = VGG16_FT_model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
VGG16_FT_model.save('clean_and_messy_VGG16_FT.h5')
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
def smooth_curve(points, factor=0.8):
smoothed_points = []
for point in points:
if smoothed_points:
previous = smoothed_points[-1]
smoothed_points.append(previous * factor + point * (1 - factor))
else:
smoothed_points.append(point)
return smoothed_points
plt.plot(epochs,
smooth_curve(acc), 'bo', label='Smoothed training acc')
plt.plot(epochs,
smooth_curve(val_acc), 'b', label='Smoothed validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs,
smooth_curve(loss), 'bo', label='Smoothed training loss')
plt.plot(epochs,
smooth_curve(val_loss), 'b', label='Smoothed validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
test_loss, test_acc = VGG16_FT_model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
from google.colab import drive
drive.mount('/gdrive')
from google.colab import drive
drive.mount('/content/drive')
import numpy as np
ytest_dir = '/content/messy_vs_clean_room/ytest'
from keras.preprocessing.image import ImageDataGenerator
ytest_datagen = ImageDataGenerator(rescale=1./255)
ytest_generator = ytest_datagen.flow_from_directory(
ytest_dir,
target_size=(150, 150),
batch_size=1,
class_mode='binary')
pred = VGG16_model.predict_generator(ytest_generator, verbose=1)
predicted_class_indices = np.argmax(pred, axis=1)
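# Added note: same caveat as in the earlier copy of this cell -- argmax over a (N, 1)
# sigmoid output is always 0; thresholding at 0.5 would be needed to recover labels.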
labels = (train_generator.class_indices)
label = dict((v,k) for k,v in labels.items())
# Map the predicted class indices back to their label names
predictions = [label[i] for i in predicted_class_indices]
predictions
#VGG16_model.predict_classes(ytest_generator, batch_size=len(ytest_generator), verbose=0)
#ytest_loss, ytest_acc = VGG16_model.evaluate_generator(ytest_generator, steps=50)
#print('ytest acc:', ytest_acc)
###Output
_____no_output_____
###Markdown
###Code
import keras
keras.__version__
import tensorflow as tf
print("GPU Available: ", tf.config.list_physical_devices('GPU'))
###Output
GPU Available: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
###Markdown
We download the data
###Code
FILEID='17BB2Ufj-9rTnT9cwZR8fu_sKKGdCXLxM'
FILENAME='train.zip'
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=$FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=$FILEID" -O $FILENAME && rm -rf /tmp/cookies.txt
ls -lh
!unzip -q train.zip -d kaggle_original_data
!rm -r kaggle_original_data/__MACOSX/
!ls -l kaggle_original_data | head
###Output
total 8
drwxr-xr-x 2 root root 4096 Jul 22 20:17 clean
drwxr-xr-x 2 root root 4096 Jul 22 20:17 messy
###Markdown
We now are going to sort the images by separating them into different folders.
###Code
from random import shuffle
import os, shutil
# list all labels
label_dirs = ['clean', 'messy']
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = '/content/kaggle_original_data'
# The directory where we will
# store our smaller dataset
base_dir = '/content/messy_vs_clean_room'
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
# Directory with our training/validation/test label pictures
for target_dir in [train_dir, validation_dir, test_dir]:
for label in label_dirs:
dir = os.path.join(target_dir, label)
os.mkdir(dir)
# Copy 70% of each label to train, 15% to valid, and 15% to test directories
for label in label_dirs:
fnames = os.listdir(os.path.join(original_dataset_dir, label))
shuffle(fnames) # shuffling the list
n_img_start_valid = int(len(fnames)*0.7)
n_img_start_test = int(len(fnames)*0.85)
for fname in fnames[:n_img_start_valid]: # train
src = os.path.join(original_dataset_dir, label, fname)
dst = os.path.join(train_dir, label, fname)
shutil.copyfile(src, dst)
for fname in fnames[n_img_start_valid:n_img_start_test]: # valid
src = os.path.join(original_dataset_dir, label, fname)
dst = os.path.join(validation_dir, label, fname)
shutil.copyfile(src, dst)
for fname in fnames[n_img_start_test:]: # test
src = os.path.join(original_dataset_dir, label, fname)
dst = os.path.join(test_dir, label, fname)
shutil.copyfile(src, dst)
###Output
_____no_output_____
###Markdown
As a sanity check, let's count how many pictures we have in each training split (train / validation / test):
###Code
total_train_imgs = 0
total_valid_imgs = 0
for label in label_dirs:
print('total images for label', label, 'in training:', len(os.listdir(os.path.join(train_dir, label))),
'in valid:', len(os.listdir(os.path.join(validation_dir, label))),
'in test:', len(os.listdir(os.path.join(test_dir, label))))
total_train_imgs += len(os.listdir(os.path.join(train_dir, label)))
total_valid_imgs += len(os.listdir(os.path.join(validation_dir, label)))
print('Total number of training images:', total_train_imgs)
print('Total number of validation images:', total_valid_imgs)
###Output
Total number of training images: 148
Total number of validation images: 32
###Markdown
Building our networkWe've already built a small convnet for MNIST in the previous example, so you should be familiar with them. We will reuse the same general structure: our convnet will be a stack of alternated `Conv2D` (with `relu` activation) and `MaxPooling2D` layers.However, since we are dealing with bigger images and a more complex problem, we will make our network accordingly larger: it will have one more `Conv2D` + `MaxPooling2D` stage. This serves both to augment the capacity of the network, and to further reduce the size of the feature maps, so that they aren't overly large when we reach the `Flatten` layer. Here, since we start from inputs of size 150x150 (a somewhat arbitrary choice), we end up with feature maps of size 7x7 right before the `Flatten` layer.Note that the depth of the feature maps is progressively increasing in the network (from 32 to 128), while the size of the feature maps is decreasing (from 148x148 to 7x7). This is a pattern that you will see in almost all convnets.Since we are attacking a binary classification problem, we are ending the network with a single unit (a `Dense` layer of size 1) and a `sigmoid` activation. This unit will encode the probability that the network is looking at one class or the other. Original Model (4 Conv2D + MaxPooling2D layers)
###Code
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
from keras import optimizers
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
for data_batch, labels_batch in train_generator:
print('data batch shape:', data_batch.shape)
print('labels batch shape:', labels_batch.shape)
break
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
model.save('clean_and_messy_1.h5')
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
###Output
Found 32 images belonging to 2 classes.
###Markdown
---
###Code
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
model.summary()
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
model.save('clean_and_messy_aug.h5')
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
---
###Code
# remove 2 layers
smaller_model = models.Sequential()
smaller_model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
smaller_model.add(layers.MaxPooling2D((2, 2)))
smaller_model.add(layers.Conv2D(32, (3, 3), activation='relu'))
smaller_model.add(layers.MaxPooling2D((2, 2)))
smaller_model.add(layers.Conv2D(32, (3, 3), activation='relu'))
smaller_model.add(layers.MaxPooling2D((2, 2)))
#smaller_model.add(layers.Conv2D(128, (3, 3), activation='relu'))
#smaller_model.add(layers.MaxPooling2D((2, 2)))
smaller_model.add(layers.Flatten())
smaller_model.add(layers.Dropout(0.5))
smaller_model.add(layers.Dense(64, activation='relu'))
smaller_model.add(layers.Dense(1, activation='sigmoid'))
smaller_model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
smaller_model.summary()
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = smaller_model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
smaller_model.save('clean_and_messy_smaller.h5')
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
test_loss, test_acc = smaller_model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
###Output
Found 32 images belonging to 2 classes.
test acc: 0.78125
###Markdown
---
###Code
from keras.applications import VGG16
conv_base = VGG16(weights='imagenet',
include_top=False,
input_shape=(150, 150, 3))
conv_base.summary()
from keras import models
from keras import layers
VGG16_model = models.Sequential()
VGG16_model.add(conv_base)
VGG16_model.add(layers.Flatten())
VGG16_model.add(layers.Dense(256, activation='relu'))
VGG16_model.add(layers.Dense(1, activation='sigmoid'))
VGG16_model.summary()
print('This is the number of trainable weights '
'before freezing the conv base:', len(VGG16_model.trainable_weights))
conv_base.trainable = False
print('This is the number of trainable weights '
'after freezing the conv base:', len(VGG16_model.trainable_weights))
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
VGG16_model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
history = VGG16_model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50,
verbose=2)
VGG16_model.save('clean_and_messy_VGG16.h5')
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
test_loss, test_acc = VGG16_model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
###Output
Found 32 images belonging to 2 classes.
test acc: 0.96875
###Markdown
---
###Code
conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:
if layer.name == 'block5_conv1':
set_trainable = True
if set_trainable:
layer.trainable = True
else:
layer.trainable = False
from keras import models
from keras import layers
VGG16_FT_model = models.Sequential()
VGG16_FT_model.add(conv_base)
VGG16_FT_model.add(layers.Flatten())
VGG16_FT_model.add(layers.Dense(256, activation='relu'))
VGG16_FT_model.add(layers.Dense(1, activation='sigmoid'))
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
VGG16_FT_model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-5),
metrics=['acc'])
history = VGG16_FT_model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
VGG16_FT_model.save('clean_and_messy_VGG16_FT.h5')
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
def smooth_curve(points, factor=0.8):
smoothed_points = []
for point in points:
if smoothed_points:
previous = smoothed_points[-1]
smoothed_points.append(previous * factor + point * (1 - factor))
else:
smoothed_points.append(point)
return smoothed_points
plt.plot(epochs,
smooth_curve(acc), 'bo', label='Smoothed training acc')
plt.plot(epochs,
smooth_curve(val_acc), 'b', label='Smoothed validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs,
smooth_curve(loss), 'bo', label='Smoothed training loss')
plt.plot(epochs,
smooth_curve(val_loss), 'b', label='Smoothed validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
test_loss, test_acc = VGG16_FT_model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
import numpy as np
ytest_dir = '/content/messy_vs_clean_room/ytest'
from keras.preprocessing.image import ImageDataGenerator
ytest_datagen = ImageDataGenerator(rescale=1./255)
ytest_generator = ytest_datagen.flow_from_directory(
ytest_dir,
target_size=(150, 150),
batch_size=1,
class_mode='binary')
pred = VGG16_model.predict_generator(ytest_generator, verbose=1)
# The model ends in a single sigmoid unit, so threshold the probabilities at 0.5
# (argmax over a one-column array would always return 0)
predicted_class_indices = (pred > 0.5).astype('int32').ravel()
labels = (train_generator.class_indices)
label = dict((v,k) for k,v in labels.items())
# Map the numeric class indices back to their label names
predictions = [label[i] for i in predicted_class_indices]
predictions
#VGG16_model.predict_classes(ytest_generator, batch_size=len(ytest_generator), verbose=0)
#ytest_loss, ytest_acc = VGG16_model.evaluate_generator(ytest_generator, steps=50)
#print('ytest acc:', ytest_acc)

predict_dir = 'messy_vs_clean_room/predict'
ytest_generator = ytest_datagen.flow_from_directory(
ytest_dir,
target_size=(150, 150),
batch_size=1,
class_mode='binary')
test_loss, test_acc = VGG16_model.evaluate_generator(ytest_generator, steps=50)
print('test acc:', test_acc)
ls /content/messy_vs_clean_room/ytest
from PIL import Image
import numpy as np
from skimage import transform
def load(filename):
np_image = Image.open(filename)
np_image = np.array(np_image).astype('float32')/255
np_image = transform.resize(np_image, (150, 150, 3))
np_image = np.expand_dims(np_image, axis=0)
return np_image
image = load('/content/messy_vs_clean_room/ytest/Messy/Antes-y-después-de-cuartos-sucios-8.jpg')
VGG16_model.predict_classes(image)
import numpy as np
predictions = VGG16_model.predict_generator(ytest_generator)
# Threshold the sigmoid outputs at 0.5 to obtain binary class predictions
y_pred = (predictions > 0.5).astype('int32').ravel()
y_pred
# Note that the prediction data should not be augmented!
predict_datagen = ImageDataGenerator(rescale=1./255)
predict_generator = predict_datagen.flow_from_directory(
predict_dir,
target_size=(150, 150),
batch_size=2,
class_mode='binary')
predict_loss, predict_acc = VGG16_model.evaluate_generator(predict_generator, steps=1)
print('predict acc:', predict_acc)
predictions = VGG16_model.predict_generator(predict_generator)
predictions
###Output
Found 2 images belonging to 2 classes.
predict acc: 1.0
|
notebooks/run_analysis.ipynb | ###Markdown
What's in a box score?This study grew out of looking at a box score for a particular game with many hits but few runs. It got me thinking "what should my expectation of the number of runs be based on the hits?" Of course the two are related, but there's a _lot_ of other information that goes into the number of runs, so I wanted to investigate how well I could predict the score based on this limited information.The original analysis is shown first for posterity (performed mid-2019); however, I later decided to come back and change the approach after rethinking this project.
###Code
from pybaseball import retrosheet
import pandas as pd
pd.options.display.max_columns=999
import matplotlib.pyplot as plt
plt.rcParams['figure.facecolor'] = 'white'
plt.rcParams["figure.edgecolor"]= "white"
import seaborn as sns
import numpy as np
%matplotlib inline
from sklearn.metrics import mean_squared_error,r2_score
import pymc3 as pm
import arviz as az
SHADE_COLOR="#5626C4"
SHADE_ALPHA=0.15
SEED=4693 # happy birthday to me
###Output
_____no_output_____
###Markdown
Get data - Retrosheet
###Code
logs = retrosheet.season_game_logs(2018)
hits = logs["visiting_hits"].append(logs["home_hits"])
runs = logs["visiting_score"].append(logs["home_score"])
axes = sns.jointplot(x=hits, y=runs, kind="scatter",xlim=(-0.5,max(hits+0.5)), ylim=(-0.5,max(runs+0.5)))
axes.plot_joint(sns.kdeplot, color="tab:red", zorder=0, levels=6)
plt.sca(axes.ax_joint)
plt.xlabel("Hits", fontsize=14)
plt.ylabel("Runs", fontsize=14)
plt.tick_params(labelsize=12)
plt.savefig("../plots/runs_v_hits/raw_data", facecolor="white", bbox_inches="tight")
###Output
_____no_output_____
###Markdown
---- Updated Analysis (Nov 2020)A better-formulated answer to this question is to approach it from a Bayesian mindset. What I was originally asking was "conditional on the number of hits, what's the likely number of runs?" We can do this using Bayesian linear regression, and get proper uncertainty estimates, which is very relevant to this question. Start by building a simple linear model. Priors:- I assume that the y-intercept will be less than 0, since it's very rare to score runs without hits (only via many BB/HBP events), but you very often get hits without runs.- The slope ought to be positive but less than 1.- The error term I'll set very wide, using a half-normal distribution
###Code
bayes_lr = pm.Model()
with bayes_lr:
# Priors
α = pm.Normal("α", mu=-0.5, sd=1)
β = pm.Normal("β", mu=0.5, sd=0.75)
ϵ = pm.HalfNormal("ϵ", sigma=2.5)
# Linear Regression
μ = pm.Deterministic("μ", α + β * hits)
# Outcome
outcome = pm.Normal("Runs", mu=μ, sd=ϵ, observed=runs)
# Sample from model
trace = pm.sample(3000, tune=2000, chains=2, cores=2, return_inferencedata=True, random_seed=SEED)
prior = pm.sample_prior_predictive(200)
trace.extend(az.from_pymc3(prior=prior))
ppc = pm.sample_posterior_predictive(trace, samples=6000)
trace.extend(az.from_pymc3(posterior_predictive=ppc))
pm.model_to_graphviz(bayes_lr)
az.plot_trace(trace, var_names=["α", "β", "ϵ"])
###Output
_____no_output_____
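###Markdown
As an extra check on the fit (an aside, not in the original analysis; assumes the `trace` object sampled above), a compact posterior summary of the regression parameters:
###Code
az.summary(trace, var_names=["α", "β", "ϵ"])
###Output
_____no_output_____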
###Markdown
The model seems to fit well, so let's see how it looks against the observed data
###Code
fig = plt.figure(figsize=(8,8))
ax =plt.gca()
az.plot_hdi(hits, ppc["Runs"], hdi_prob=0.68, color=SHADE_COLOR, ax=ax, fill_kwargs={"alpha":SHADE_ALPHA})
az.plot_hdi(hits, ppc["Runs"], hdi_prob=0.95, color=SHADE_COLOR, ax=ax, fill_kwargs={"alpha":SHADE_ALPHA})
# add jitter
x = np.random.normal(hits, 0.05, size=len(hits))
plt.plot(x, runs, '.',color="dodgerblue", alpha=0.1)
plt.ylim(bottom=0)
plt.xlim(left=0,right=26)
plt.tick_params(labelsize=14)
plt.xlabel("Hits", fontsize=14)
plt.ylabel("Runs",fontsize=14);
###Output
/Users/tburch/Documents/github/hierarchical-home-field/venv_pymc3/lib/python3.8/site-packages/arviz/stats/stats.py:484: FutureWarning: hdi currently interprets 2d data as (draw, shape) but this will change in a future release to (chain, draw) for coherence with other functions
warnings.warn(
/Users/tburch/Documents/github/hierarchical-home-field/venv_pymc3/lib/python3.8/site-packages/arviz/stats/stats.py:484: FutureWarning: hdi currently interprets 2d data as (draw, shape) but this will change in a future release to (chain, draw) for coherence with other functions
warnings.warn(
###Markdown
This is nice, but there are some issues. The uncertainty doesn't cover the outliers, and the residuals appear to be systematically biased. Consider a model more robust to outliersThe outcome is modeled by a Student-T distribution rather than a normal, in order to make it more robust against outlier events and hopefully widen the 95% CI too.
###Code
robust_bayes_lr = pm.Model()
with robust_bayes_lr:
# Priors
α = pm.Normal("α", mu=-0.5, sd=1)
β = pm.Normal("β", mu=0.5, sd=0.75)
ϵ = pm.HalfNormal("ϵ", sigma=2.5)
ν_ = pm.Exponential("ν_", 1/30)
ν = pm.Deterministic("ν", ν_ + 1)
# Linear Regression
μ = pm.Deterministic("μ", α + β * hits)
# Outcome
outcome = pm.StudentT("Runs", mu=μ, sd=ϵ, nu=ν, observed=runs)
# Sample from model
trace_robust = pm.sample(3000, tune=1500, chains=2, cores=2, return_inferencedata=True, random_seed=SEED)
prior_robust = pm.sample_prior_predictive(200)
trace_robust.extend(az.from_pymc3(prior=prior_robust))
ppc_robust = pm.sample_posterior_predictive(trace_robust, samples=6000)
trace_robust.extend(az.from_pymc3(posterior_predictive=ppc_robust))
az.plot_trace(trace_robust, var_names=["α", "β", "ϵ","ν"]);
###Output
_____no_output_____
###Markdown
Model comparison
###Code
comp_WAIC = pm.compare({"normal": trace, "studentT": trace_robust})
comp_WAIC
pm.compareplot(comp_WAIC);
###Output
_____no_output_____
###Markdown
The StudentT model appears to be slightly better, but the difference is within the uncertainty bands, so we can't say for certain
###Code
fig = plt.figure(figsize=(8,8))
ax =plt.gca()
az.plot_hdi(hits, ppc_robust["Runs"], hdi_prob=0.68, color=SHADE_COLOR, ax=ax, fill_kwargs={"alpha":SHADE_ALPHA})
az.plot_hdi(hits, ppc_robust["Runs"], hdi_prob=0.95, color=SHADE_COLOR, ax=ax, fill_kwargs={"alpha":SHADE_ALPHA})
# add jitter
x = np.random.normal(hits, 0.05, size=len(hits))
plt.scatter(x, runs, marker=".", color='dodgerblue', alpha=0.1)
plt.ylim(bottom=0)
plt.xlim(left=0);
###Output
/Users/tburch/Documents/github/hierarchical-home-field/venv_pymc3/lib/python3.8/site-packages/arviz/stats/stats.py:484: FutureWarning: hdi currently interprets 2d data as (draw, shape) but this will change in a future release to (chain, draw) for coherence with other functions
warnings.warn(
/Users/tburch/Documents/github/hierarchical-home-field/venv_pymc3/lib/python3.8/site-packages/arviz/stats/stats.py:484: FutureWarning: hdi currently interprets 2d data as (draw, shape) but this will change in a future release to (chain, draw) for coherence with other functions
warnings.warn(
###Markdown
Looks... about the same. What if we try to include the non-constant variance (heteroscedasticity) in the model?
###Code
variance_model = pm.Model()
with variance_model:
# Linear Regression for mean
α = pm.Normal("α", mu=-0.5, sd=1)
β = pm.Normal("β", mu=0.5, sd=1)
μ = pm.Deterministic("μ", α + β * hits)
# Linear Regression for deviation
σ_m = pm.Normal("σ_m", mu=0, sd=10)
σ_b = pm.Normal("σ_b", mu=0, sd=10)
σ = pm.Deterministic(
"σ",
1 + pm.math.exp(σ_m * hits+ σ_b)
)
# Outcome
outcome = pm.Normal("Runs", mu=μ, sd=σ, observed=runs)
# Sample from model
trace_var = pm.sample(3000, chains=2, cores=2, return_inferencedata=True, random_seed=4693)
prior_var = pm.sample_prior_predictive(200)
trace_var.extend(az.from_pymc3(prior=prior_var))
ppc_var = pm.sample_posterior_predictive(trace_var, samples=6000)
trace_var.extend(az.from_pymc3(posterior_predictive=ppc_var))
pm.model_to_graphviz(variance_model)
az.plot_trace(trace_var, var_names=["α", "β", "σ_m", "σ_b"])
fig = plt.figure(figsize=(8,8))
ax =plt.gca()
az.plot_hdi(hits, ppc_var["Runs"], hdi_prob=0.68, color=SHADE_COLOR, ax=ax,
fill_kwargs={"alpha": SHADE_ALPHA})
az.plot_hdi(hits, ppc_var["Runs"], hdi_prob=0.95, color=SHADE_COLOR, ax=ax,
fill_kwargs={"alpha": SHADE_ALPHA})
x = np.linspace(0,26,1000)
b = trace_var.posterior["α"].mean().item()
m = trace_var.posterior["β"].mean().item()
y = b + m * x
plt.plot(x,y, color="k", alpha=0.7, linestyle="--",
label=f"$y={round(m,3)}x - {abs(round(b,3))}$")
# add jitter
x = np.random.normal(hits, 0.05, size=len(hits))
plt.scatter(x, runs, marker=".", color='dodgerblue', alpha=0.1)
plt.ylim(bottom=0,top=23)
plt.xlim(left=0, right=20)
plt.legend(frameon=False, fontsize=16, loc="upper left")
plt.tick_params(labelsize=14)
plt.xlabel("Hits", fontsize=16)
plt.ylabel("Runs",fontsize=16)
plt.xticks(np.arange(0,20,2))
plt.yticks(np.arange(0,22,2))
plt.savefig("../plots/runs_v_hits/runs_v_hits_heteroscedasticity", facecolor="white", bbox_inches="tight")
###Output
/Users/tburch/Documents/github/hierarchical-home-field/venv_pymc3/lib/python3.8/site-packages/arviz/stats/stats.py:484: FutureWarning: hdi currently interprets 2d data as (draw, shape) but this will change in a future release to (chain, draw) for coherence with other functions
warnings.warn(
/Users/tburch/Documents/github/hierarchical-home-field/venv_pymc3/lib/python3.8/site-packages/arviz/stats/stats.py:484: FutureWarning: hdi currently interprets 2d data as (draw, shape) but this will change in a future release to (chain, draw) for coherence with other functions
warnings.warn(
###Markdown
This model appears to describe the data best, as it captures the growing variance w.r.t. the number of hits.
###Code
comp_WAIC = pm.compare({"Standard": trace, "studentT": trace_robust, "Variance Adjusted":trace_var})
comp_WAIC
pm.compareplot(comp_WAIC, figsize=(10,3));
###Output
_____no_output_____
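###Markdown
The comparison can also be made explicitly on the leave-one-out scale (an aside; a minimal sketch assuming the traces defined above):
###Code
# Pareto-smoothed importance-sampling LOO for the variance-adjusted model
az.loo(trace_var)
###Output
_____no_output_____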
###Markdown
The model with non-constant variance clearly does better in terms of Leave-one-out CV and appears to be the best model choice.
###Code
az.plot_forest([trace, trace_robust, trace_var], model_names=["Normal", "StudentT","Variance Adusted"], var_names=["α","β"], combined=True, figsize=(10,3));
###Output
_____no_output_____
###Markdown
Here we can also see that using a model that accounts for heteroscedasticity makes the intercept change from about -1.5 to -1.0 and reduces the slope a bit too ----The original analysis which this builds upon is below. Generate plot using mean estimatorsOriginally I decided to use a polynomial function here because it better fit the data. In retrospect, this is not well physically motivated; there's no a priori reason why runs should scale with the square of hits.
###Code
# Helper function to get points from an axis
def get_points(ax):
lines = ax.lines
x = [l.get_xdata().mean() for l in lines]
y = [l.get_ydata().mean() for l in lines]
y_std = [l.get_ydata().std() for l in lines]
return x[:-1], y[:-1], y_std[:-1] # For some reason this gets the entire mean as well, so drop those
ax = sns.regplot(x=hits, y=runs, x_estimator=np.mean, order=2, x_ci="sd")#, label="y=$%.2fx^{2}+%.2fx+{%.2f}$"%(c2,c1,c0))
t_x, t_y, t_e = get_points(ax)
c0, c1, c2 = np.polyfit(x=t_x,y=t_y,deg=2)
c4, c5 = np.polyfit(x=t_x,y=t_y,deg=1)
# Get MSE and r2
exponential_y = [c0*x**2 + c1*x + c2 for x in t_x]
exponential_mse = mean_squared_error(t_y, exponential_y)
#exponential_r2 = r2_score(t_y, exponential_y)
linear_y = [c4*x+c5 for x in t_x]
linear_mse = mean_squared_error(t_y, linear_y)
plt.close()
# Plot
fig = plt.figure(figsize=(8,6))
x = np.linspace(0,26,260)
y = c0*x**2 + c1*x + c2
plt.plot(x,y, 'g-', label="y=$%.2fx^{2}+%.2fx+{%.2f}$\n MSE = %.2f\n Standard deviation uncertainty shown"%(c0,c1,c2,exponential_mse))
plt.errorbar(t_x, t_y, yerr=t_e, linestyle="None", marker="o")
plt.xlabel("Hits", fontsize=18)
plt.ylabel("Runs", fontsize=18)
plt.gca().tick_params(axis='both', which='major', labelsize=14)
plt.xlim(left=0)
plt.ylim(bottom=0)
plt.legend(frameon=False, fontsize=14)
plt.annotate("2018 Data", xy=(0.98,0.02), xycoords="axes fraction", ha="right",fontsize=18)
plt.tight_layout()
plt.savefig('../plots/runs_v_hits')
###Output
_____no_output_____
###Markdown
Check Correlation valueAdditionally I made a plot looking at correlation, and plotted a simple linear regression using seaborn.
###Code
fig = plt.figure(figsize=(8,6))
sns.regplot(x=hits, y=runs,x_jitter=.1, marker='.')
corr = np.corrcoef(hits,runs)[0][1]
plt.xlabel("Hits", fontsize=18)
plt.ylabel("Runs", fontsize=18)
plt.gca().tick_params(axis='both', which='major', labelsize=14)
plt.xlim(left=0)
plt.ylim(bottom=0)
plt.annotate("2018 Data", xy=(0.02,0.94), xycoords="axes fraction", ha="left",fontsize=18)
plt.annotate("Pearson Corrleation = %.3f"%corr, xy=(0.02,0.88), xycoords="axes fraction", ha="left",fontsize=18)
plt.tight_layout()
plt.savefig('../plots/runs_v_hits_linregression')
###Output
_____no_output_____ |
homeworks/D098/Day098_Python_generator.ipynb | ###Markdown
A generator can use next to advance one step of its loop at a timeThis is a bit hard to explain in words alone, so let's look at an example to understand what a generator is! Write a generator that yields the values of a list one at a time
###Code
def output_from_list_generator(your_list):
for i in your_list:
yield i
my_list = [1, 2, 3, 4, 5]
gen = output_from_list_generator(my_list)
print(next(gen))
print(next(gen))
print(next(gen))
print(next(gen))
print(next(gen))
print(next(gen))
###Output
_____no_output_____
###Markdown
From the example code above, we can see that each call to next runs the for loop one step, returning the first value in the list; when next is called again, the for loop remembers where it left off and yields the second value. On the final call, since the for loop has already finished, calling next again raises StopIteration and no further values can be obtained. We can write a generator that loops forever simply by using While True
###Code
def inf_loop_generator(your_list):
while True:
for i in your_list:
yield i
gen = inf_loop_generator(my_list)
print(next(gen))
print(next(gen))
print(next(gen))
print(next(gen))
print(next(gen))
print(next(gen))
print(next(gen))
###Output
2
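###Markdown
An aside (not part of the original lesson): because a while True generator never raises StopIteration on its own, itertools.islice is a convenient way to draw only a bounded number of values from it:
###Code
from itertools import islice
# take the first 8 values from the infinite generator
list(islice(inf_loop_generator(my_list), 8))
###Output
_____no_output_____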
###Markdown
In the code above, because we used While True, the for loop never ends: every call to next runs one more step of the loop and returns a value. Although the Cifar-10 data can be loaded entirely into memory, let's try using a generator to pull the Cifar-10 data out in batches, 32 images at a time!
###Code
def img_combine(img, ncols=8, size=1, path=False):
from math import ceil
import matplotlib.pyplot as plt
import numpy as np
nimg = len(img)
nrows = int(ceil(nimg/ncols))
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, sharex=True, sharey=True, figsize=(ncols*size,nrows*size))
if nrows == 0:
return
elif ncols == 1:
for r, ax in zip(np.arange(nrows), axes):
nth=r
if nth < nimg:
ax.imshow(img[nth], cmap='rainbow', vmin=0, vmax=1)
ax.set_axis_off()
elif nrows == 1:
for c, ax in zip(np.arange(ncols), axes):
nth=c
if nth < nimg:
ax.imshow(img[nth], cmap='rainbow', vmin=0, vmax=1)
ax.set_axis_off()
else:
for r, row in zip(np.arange(nrows), axes):
for c, ax in zip(np.arange(ncols), row):
nth=r*ncols+c
if nth < nimg:
ax.imshow(img[nth], cmap='rainbow', vmin=0, vmax=1)
ax.set_axis_off()
plt.show()
from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
def cifar_generator(image_array, batch_size=32):
while True:
for indexs in range(0, len(image_array), batch_size):
images = x_train[indexs: indexs+batch_size]
labels = y_train[indexs: indexs+batch_size]
yield (images, labels)
cifar_gen = cifar_generator(x_train)
images, labels = next(cifar_gen)
print(images.shape, labels.shape)
img_combine(images)
images, labels = next(cifar_gen)
img_combine(images)
###Output
_____no_output_____
###Markdown
You can see that the two batches of images are different, so we are ready to start training! Homework: Referring to yesterday's code, rewrite the training-data loading as a generator and replace the original model.fit with model.fit_generator for training. See the Keras [official documentation for fit_generator](https://keras.io/models/sequential/)
###Code
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.optimizers import RMSprop, Adam
import os
batch_size = 128 # batch size; lower this value if you hit an OOM error
num_classes = 10 # number of classes; Cifar 10 has 10 classes
epochs = 10 # number of training epochs
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
# steps_per_epoch is the number of generator batches drawn per epoch;
# with the batch_size of 32 used in cifar_gen, one pass over the training set is:
history = model.fit_generator(cifar_gen, steps_per_epoch=x_train.shape[0] // 32,
                    epochs=1,
                    verbose=1,
                    validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
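# (Aside, not part of the original homework solution) A more self-contained
# generator sketch: it takes the arrays as arguments and reshuffles them every
# epoch instead of reading module-level globals.
import numpy as np
def shuffling_cifar_generator(images_arr, labels_arr, batch_size=32):
    n = len(images_arr)
    while True:
        idx = np.random.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            yield images_arr[batch], labels_arr[batch]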
###Output
_____no_output_____ |
uncertainty_traps/uncertainty_traps_solutions_py.ipynb | ###Markdown
quant-econ Solutions: Uncertainty Traps Solutions for http://quant-econ.net/py/uncertainty_traps.html
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import itertools
from uncertainty_traps import UncertaintyTrapEcon
###Output
:0: FutureWarning: IPython widgets are experimental and may change in the future.
###Markdown
Exercise 1 This exercise asked you to validate the laws of motion for $\gamma$ and $\mu$ given in the lecture, based on the stated result about Bayesian updating in a scalar Gaussian setting. The stated result tells us that after observing average output $X$ of the $M$ firms, our posterior beliefs will be$$ N(\mu_0, 1/\gamma_0)$$where$$ \mu_0 = \frac{\mu \gamma + M X \gamma_x}{\gamma + M \gamma_x} \quad \text{and} \quad \gamma_0 = \gamma + M \gamma_x$$If we take a random variable $\theta$ with this distribution and then evaluate the distribution of $\rho \theta + \sigma_\theta w$ where $w$ is independent and standard normal, we get the expressions for $\mu'$ and $\gamma'$ given in the lecture. Exercise 2 First let's replicate the plot that illustrates the law of motion for precision, which is$$ \gamma_{t+1} = \left( \frac{\rho^2}{\gamma_t + M \gamma_x} + \sigma_\theta^2 \right)^{-1}$$ Here $M$ is the number of active firms. The next figure plots $\gamma_{t+1}$ against $\gamma_t$ on a 45 degree diagram for different values of $M$
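To spell out that step (a short sketch of the algebra): if $\theta \sim N(\mu_0, 1/\gamma_0)$ and $w \sim N(0, 1)$ is independent, then $\rho \theta + \sigma_\theta w$ is Gaussian with mean $\rho \mu_0$ and variance $\rho^2 / \gamma_0 + \sigma_\theta^2$, so that $$ \mu' = \rho \, \frac{\mu \gamma + M X \gamma_x}{\gamma + M \gamma_x} \quad \text{and} \quad \gamma' = \left( \frac{\rho^2}{\gamma + M \gamma_x} + \sigma_\theta^2 \right)^{-1}, $$ which are exactly the laws of motion used below.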
###Code
palette = itertools.cycle(sns.color_palette())
econ = UncertaintyTrapEcon()
rho, sig_theta, gx = econ.rho, econ.sig_theta, econ.gx # simplify names
g = np.linspace(1e-10, 3, 200) # gamma grid
fig, ax = plt.subplots(figsize=(9, 9))
ax.plot(g, g, 'k-') # 45 degree line
for M in range(7):
g_next = 1 / (rho**2 / (g + M * gx) + sig_theta**2)
label_string = r"$M = {}$".format(M)
ax.plot(g, g_next, lw=2, label=label_string, color=next(palette))
ax.legend(loc='lower right', fontsize=14)
ax.set_xlabel(r'$\gamma$', fontsize=16)
ax.set_ylabel(r"$\gamma'$", fontsize=16)
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
The points where the curves hit the 45 degree lines are the long run steady states corresponding to each $M$, if that value of $M$ was to remain fixed. As the number of firms falls, so does the long run steady state of precision. Next let's generate time series for beliefs and the aggregates -- that is, the numberof active firms and average output.
###Code
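# (Aside, not part of the original solution) The long run steady states visible
# in the 45 degree diagram above can also be computed directly as fixed points
# of the law of motion. A small sketch using scipy, assuming the default
# parameter values rho, sig_theta, gx loaded earlier:
from scipy.optimize import brentq
for M_fixed in (0, 3, 6):
    g_star = brentq(lambda g: 1 / (rho**2 / (g + M_fixed * gx) + sig_theta**2) - g, 1e-9, 50)
    print("M = {}: steady state gamma = {:.4f}".format(M_fixed, g_star))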
sim_length=2000
mu_vec = np.empty(sim_length)
theta_vec = np.empty(sim_length)
gamma_vec = np.empty(sim_length)
X_vec = np.empty(sim_length)
M_vec = np.empty(sim_length)
mu_vec[0] = econ.mu
gamma_vec[0] = econ.gamma
theta_vec[0] = 0
w_shocks = np.random.randn(sim_length)
for t in range(sim_length-1):
X, M = econ.gen_aggregates()
X_vec[t] = X
M_vec[t] = M
econ.update_beliefs(X, M)
econ.update_theta(w_shocks[t])
mu_vec[t+1] = econ.mu
gamma_vec[t+1] = econ.gamma
theta_vec[t+1] = econ.theta
# Record final values of aggregates
X, M = econ.gen_aggregates()
X_vec[-1] = X
M_vec[-1] = M
###Output
_____no_output_____
###Markdown
First let's see how well $\mu$ tracks $\theta$ in these simulations
###Code
fig, ax = plt.subplots(figsize=(9, 6))
ax.plot(range(sim_length), theta_vec, alpha=0.6, lw=2, label=r"$\theta$")
ax.plot(range(sim_length), mu_vec, alpha=0.6, lw=2, label=r"$\mu$")
ax.legend(fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
Now let's plot the whole thing together
###Code
fig, axes = plt.subplots(4, 1, figsize=(12, 20))
# Add some spacing
fig.subplots_adjust(hspace=0.3)
series = (theta_vec, mu_vec, gamma_vec, M_vec)
names = r'$\theta$', r'$\mu$', r'$\gamma$', r'$M$'
for ax, vals, name in zip(axes, series, names):
# determine suitable y limits
s_max, s_min = max(vals), min(vals)
s_range = s_max - s_min
y_max = s_max + s_range * 0.1
y_min = s_min - s_range * 0.1
ax.set_ylim(y_min, y_max)
# Plot series
ax.plot(range(sim_length), vals, alpha=0.6, lw=2)
ax.set_title("time series for {}".format(name), fontsize=16)
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
quant-econ Solutions: Uncertainty Traps Solutions for http://quant-econ.net/py/uncertainty_traps.html
###Code
%matplotlib inline
from __future__ import division
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import itertools
from uncertainty_traps import UncertaintyTrapEcon
###Output
/home/matthewmckay/anaconda/lib/python3.5/site-packages/matplotlib/__init__.py:872: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter.
warnings.warn(self.msg_depr % (key, alt_key))
###Markdown
Exercise 1 This exercise asked you to validate the laws of motion for $\gamma$ and $\mu$ given in the lecture, based on the stated result about Bayesian updating in a scalar Gaussian setting. The stated result tells us that after observing average output $X$ of the $M$ firms, our posterior beliefs will be$$ N(\mu_0, 1/\gamma_0)$$where$$ \mu_0 = \frac{\mu \gamma + M X \gamma_x}{\gamma + M \gamma_x} \quad \text{and} \quad \gamma_0 = \gamma + M \gamma_x$$If we take a random variable $\theta$ with this distribution and then evaluate the distribution of $\rho \theta + \sigma_\theta w$ where $w$ is independent and standard normal, we get the expressions for $\mu'$ and $\gamma'$ given in the lecture. Exercise 2 First let's replicate the plot that illustrates the law of motion for precision, which is$$ \gamma_{t+1} = \left( \frac{\rho^2}{\gamma_t + M \gamma_x} + \sigma_\theta^2 \right)^{-1}$$ Here $M$ is the number of active firms. The next figure plots $\gamma_{t+1}$ against $\gamma_t$ on a 45 degree diagram for different values of $M$
###Code
palette = itertools.cycle(sns.color_palette())
econ = UncertaintyTrapEcon()
rho, sig_theta, gx = econ.rho, econ.sig_theta, econ.gx # simplify names
g = np.linspace(1e-10, 3, 200) # gamma grid
fig, ax = plt.subplots(figsize=(9, 9))
ax.plot(g, g, 'k-') # 45 degree line
for M in range(7):
g_next = 1 / (rho**2 / (g + M * gx) + sig_theta**2)
label_string = r"$M = {}$".format(M)
ax.plot(g, g_next, lw=2, label=label_string, color=next(palette))
ax.legend(loc='lower right', fontsize=14)
ax.set_xlabel(r'$\gamma$', fontsize=16)
ax.set_ylabel(r"$\gamma'$", fontsize=16)
ax.grid()
plt.show()
###Output
/home/matthewmckay/anaconda/lib/python3.5/site-packages/matplotlib/__init__.py:892: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter.
warnings.warn(self.msg_depr % (key, alt_key))
###Markdown
The points where the curves hit the 45 degree lines are the long run steady states corresponding to each $M$, if that value of $M$ was to remain fixed. As the number of firms falls, so does the long run steady state of precision. Next let's generate time series for beliefs and the aggregates -- that is, the numberof active firms and average output.
###Code
sim_length=2000
mu_vec = np.empty(sim_length)
theta_vec = np.empty(sim_length)
gamma_vec = np.empty(sim_length)
X_vec = np.empty(sim_length)
M_vec = np.empty(sim_length)
mu_vec[0] = econ.mu
gamma_vec[0] = econ.gamma
theta_vec[0] = 0
w_shocks = np.random.randn(sim_length)
for t in range(sim_length-1):
X, M = econ.gen_aggregates()
X_vec[t] = X
M_vec[t] = M
econ.update_beliefs(X, M)
econ.update_theta(w_shocks[t])
mu_vec[t+1] = econ.mu
gamma_vec[t+1] = econ.gamma
theta_vec[t+1] = econ.theta
# Record final values of aggregates
X, M = econ.gen_aggregates()
X_vec[-1] = X
M_vec[-1] = M
###Output
_____no_output_____
###Markdown
First let's see how well $\mu$ tracks $\theta$ in these simulations
###Code
fig, ax = plt.subplots(figsize=(9, 6))
ax.plot(range(sim_length), theta_vec, alpha=0.6, lw=2, label=r"$\theta$")
ax.plot(range(sim_length), mu_vec, alpha=0.6, lw=2, label=r"$\mu$")
ax.legend(fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
Now let's plot the whole thing together
###Code
fig, axes = plt.subplots(4, 1, figsize=(12, 20))
# Add some spacing
fig.subplots_adjust(hspace=0.3)
series = (theta_vec, mu_vec, gamma_vec, M_vec)
names = r'$\theta$', r'$\mu$', r'$\gamma$', r'$M$'
for ax, vals, name in zip(axes, series, names):
# determine suitable y limits
s_max, s_min = max(vals), min(vals)
s_range = s_max - s_min
y_max = s_max + s_range * 0.1
y_min = s_min - s_range * 0.1
ax.set_ylim(y_min, y_max)
# Plot series
ax.plot(range(sim_length), vals, alpha=0.6, lw=2)
ax.set_title("time series for {}".format(name), fontsize=16)
ax.grid()
plt.show()
###Output
_____no_output_____ |
ipynb/bac_genome/OTU-level_variability/.ipynb_checkpoints/p1_NCBI_complete_genome_download-checkpoint.ipynb | ###Markdown
Goal: * Download most up-to-date version of NCBI 'complete' genomes Setting variables
###Code
workDir = '/var/seq_data/ncbi_db/genome/Jan2016/'
proksFile = 'proks_complete.txt'
taxFile = 'proks_complete_tax.txt'
###Output
_____no_output_____
###Markdown
Init
###Code
import os
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(genomes)
if not os.path.isdir(workDir):
os.makedirs(workDir)
%cd $workDir
###Output
/var/seq_data/ncbi_db/genome/Jan2016
###Markdown
Loading list of complete prok genomes
###Code
%%R -i workDir -i proksFile
F = file.path(workDir, proksFile)
df.proks.complete = read.delim(F, sep='\t')
# checking the loaded table
df.proks.complete %>% nrow %>% print
df.proks.complete %>% head(n=3)
%%R -i workDir -i taxFile
F = file.path(workDir, taxFile)
df.tax = read.delim(F, sep='\t') %>%
distinct(taxid)
df.proks.complete = dplyr::inner_join(df.proks.complete, df.tax, c('taxid' = 'taxid'))
# checking join
df.proks.complete %>% nrow %>% print
df.proks.complete %>% nrow %>% print
df.proks.complete %>% head(n=3)
###Output
_____no_output_____
###Markdown
Just Bacteria
###Code
%%R
df.bac.complete = df.proks.complete %>%
filter(superkingdom == 'Bacteria')
df.bac.complete %>% nrow
###Output
_____no_output_____
###Markdown
Phylum representation
###Code
%%R -w 800
df.bac.complete.s = df.bac.complete %>%
group_by(phylum) %>%
summarize(n = n()) %>%
filter(! is.na(n), n > 0)
ggplot(df.bac.complete.s, aes(phylum, n)) +
geom_bar(stat='identity') +
scale_y_log10() +
labs(y = 'Number of genomes') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=60, hjust=1)
)
###Output
_____no_output_____
###Markdown
removing what are really phage/plasmid genomes
###Code
%%R
cat('Pre-filter:', df.bac.complete %>% nrow, '\n')
to.rm = c("Thermoanaerobacterium saccharolyticum JW/SL-YS485",
"Streptococcus salivarius 57.I")
df.bac.complete = df.bac.complete %>%
filter(! name %in% to.rm)
cat('Post-filter:', df.bac.complete %>% nrow, '\n')
###Output
_____no_output_____
###Markdown
Sequence download
###Code
%%R -i workDir
outFile = file.path(workDir, 'bac_complete.txt')
write.table(df.bac.complete, outFile, sep='\t', quote=FALSE, row.names=FALSE)
!seqDB_tools accession-GI2fasta \
-a 11 -n 2 -f 12 -header -o bac_complete \
< bac_complete.txt \
2> bac_complete.log
%pushnote genome download complete
###Output
_____no_output_____
###Markdown
Getting list of empty genome files
###Code
fileSizes = !ls -tlc *.fna | perl -pe 's/[ \t]+/ /g'
outFile = 'empty_genome_files.txt'
with open(outFile, 'wb') as outFH:
for x in fileSizes:
xx = x.split(' ')
if xx[4] == '0':
xx[-1] = xx[-1].replace('_', ' ').rstrip('.fna')
outFH.write(xx[-1] + '\n')
# status
!printf 'Number of empty genome files: '
!wc -l $outFile
!head $outFile
###Output
Number of empty genome files: 13 empty_genome_files.txt
Proteus mirabilis
Pseudomonas chlororaphis subsp aurantiac
Pseudomonas putida BIRD-1
Pseudothermotoga elfii DSM 9442 NBRC 107921
Pseudomonadaceae bacterium B4199
Pseudomonas syringae pv syringae B728
Pseudomonas stutzeri DSM 4166
Pseudomonas sp CCOS 191
Pseudomonas aeruginosa PA1R
Pusillimonas sp T7-7
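###Markdown
An equivalent check can be done directly from the file sizes on disk (an aside; a sketch assuming the downloaded .fna files sit in the current working directory, as in the cell above):
###Code
import glob
empty_fna = [f for f in glob.glob('*.fna') if os.path.getsize(f) == 0]
print('Number of empty genome files:', len(empty_fna))
###Output
_____no_output_____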
###Markdown
Deleting empty files
###Code
# Remove genome files smaller than ~100 KB (empty or truncated downloads)
fileSizes = !ls -tlc bac_complete/*.fna | perl -pe 's/[ \t]+/ /g'
for x in fileSizes:
xx = x.split(' ')
if float(xx[4]) < 100000.0:
os.remove(xx[-1])
###Output
_____no_output_____
###Markdown
Checking output
###Code
genomeDir = os.path.join(workDir, 'bac_complete')
%cd $genomeDir
# number of genomes downloaded
!printf "Number of bacterial genomes: "
!find . -name "*.fna" | wc -l
# file size
!echo "Genome file size distribution (bytes):"
!ls -tlc *.fna | \
perl -pe 's/ +/\t/g' | \
cut -f 5 | NY_misc_perl stats_descriptive
# checking for non-bacterial genomes
!find . -name "*fna" | xargs -P 20 egrep "phage|virus|phage"
# deleting non-bacterial genomes
!rm -f ./Clostridium_perfringens_SM101.fna \
./Chlamydophila_pneumoniae_AR39.fna \
./Enterococcus_faecalis_62.fna
# number of genomes downloaded
!printf "Number of bacterial genomes: "
!find . -name "*.fna" | wc -l
###Output
_____no_output_____
###Markdown
Renaming genomes
###Code
genomeDirRn = genomeDir + '_rn'
genomeDirRn
# renameing
!find . -name "*.fna" | \
SIPSim genome_rename -n 26 --prefix $genomeDirRn -
###Output
_____no_output_____ |
examples/fOU.ipynb | ###Markdown
A fractional Ornstein−Uhlenbeck (fOU) process A simple Ornstein--Uhlenbeck process takes the form$$\mathrm{d}y(t) = -\theta y(t)\mathrm{d}t + \sigma \mathrm{d}B_H(t), $$with $\theta$ the drift coefficient, $\sigma$ diffusion term, and $B(t)$ a fractional Brownian motion with index $H$. There is a Note at the end of this notebook on fractional Gaussian noise and fractional Brownian motion, if you wish to understand it a bit better. Integrating the fOU with an Euler−Maruyama scheme
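Under the Euler−Maruyama scheme used below (a standard discretisation, written out here for clarity), the process is advanced as $$ y_i = y_{i-1} - \theta \, y_{i-1} \, \Delta t + \sigma \, \Delta B_H(t_i), $$ where $\Delta B_H(t_i)$ is an increment of the fractional Brownian motion, i.e. fractional Gaussian noise.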
###Code
# Imports assumed by this example (the MFDFA package provides both the MFDFA
# routine and the fractional Gaussian noise generator fgn)
import numpy as np
import matplotlib.pyplot as plt
from MFDFA import MFDFA, fgn
# The total integration time
t_final = 500
# The desired timestep of integration
delta_t = 0.001
# time array of the process
time = np.linspace(0, t_final, t_final * int(1 / delta_t))
# Choose some values for the drift and diffusion
theta = 0.3
sigma = 0.1
# Generate your favourite fractional Gaussian noise with H index
H = 0.7
dB = (t_final ** H) * fgn(N = time.size, H = H)
# Initialise the array y
y = np.zeros([time.size])
# Give some small random initial conditions
y[0]=np.random.normal(size = 1) / 10
# Integrate the process
for i in range(1, time.size):
y[i] = y[i-1] - theta * y[i-1] * delta_t + sigma * dB[i]
###Output
_____no_output_____
###Markdown
Visualising the process
###Code
#This is the stochastic trajectory over time
plt.plot(time, y, label = r'Trajectory of fOU process')
plt.xlabel(r'time $t$')
plt.ylabel(r'$y(t)$')
plt.legend()
###Output
_____no_output_____
###Markdown
Employing MFDFA Here we will implement the MFDFA to try and recover the Hurst index $H$ for the generated fOU process.To employ MFDFA we will need a sequence of segment lengths, *lag*, and a selection of powers $q$. Let's start with $q=2$, which is simply DFA. Moreover we need to select the order of the polynomial fittings, which we'll take as straight lines, i.e., 1-st order polynomials.
###Code
# Select a band of lags, which usually ranges from
# very small segments of data, to very long ones, as
lag = np.logspace(0.7, 4, 30).astype(int)
# Notice these must be ints, since these will segment
# the data into chucks of lag size
# Select the power q
q = 2
# The order of the polynomial fitting
order = 1
# Obtain the (MF)DFA as
lag, dfa = MFDFA(y, lag = lag, q = q, order = order)
###Output
_____no_output_____
###Markdown
Understanding MFDFA To actually understand MFDFA we need to study the results in log-log plots and find some slopes
###Code
# To uncover the Hurst index, lets get some log-log plots
plt.loglog(lag, dfa, 'o', label='fOU: MFDFA q=2')
# And now we need to fit the line to find the slope. We will
# fit the first points, since the results are more accurate
# there. Don't forget that if you are seeing in log-log
# scales, you need to fit the logs of the results
np.polyfit(np.log(lag[:15]), np.log(dfa[:15]),1)[0]
# Now what you should obtain is: slope = H + 1
###Output
_____no_output_____
###Markdown
Why $H + 1$? Well, the Euler−Maruyama scheme literally integrates the process: integration implies + 1 (it increases the regularity by 1).Curious about other processes? You can just input $dB_H$ into the MFDFA and see what you get (the slope will be $H$, since this is a *noise*). You can instead put np.cumsum($dB_H$) and you will get slope = $H+1$ (since it is a *motion*); a quick check of this is sketched at the top of the next cell. Side note on fractional Gaussian noise Fractional Brownian motion is an extension of Brownian motion that allows for correlations in the noise. You can visualise its effects and characteristics by simply considering $B_H(t)$ for different $H$ values. The regular Brownian motion has an $H$ index of $1/2$.
###Code
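# (Aside, not in the original notebook) A quick check of the claim above, using
# the noise dB and the lag grid from the earlier cells: the fitted slope should
# come out close to H for the noise itself and close to H + 1 for its cumulative
# sum (the motion).
lag_n, dfa_noise = MFDFA(dB, lag=lag, q=2, order=1)
lag_m, dfa_motion = MFDFA(np.cumsum(dB), lag=lag, q=2, order=1)
print('noise slope ~', np.polyfit(np.log(lag_n[:15]), np.log(dfa_noise[:15]), 1)[0])
print('motion slope ~', np.polyfit(np.log(lag_m[:15]), np.log(dfa_motion[:15]), 1)[0])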
# Lets take three examples, with H=0.3, H=0.5, H=0.7
# The total integration time, as before
t_final = 500
# The desired timestep of integration
delta_t = 0.001
# time array of the process
time = np.linspace(0, t_final, t_final * int(1 / delta_t))
# Generate three fractional Gaussian noises dB
H_anti = 0.3 # Anti-presistent noise
H_regu = 0.5 # Regular noise
H_posi = 0.7 # Positively correlated noise
# Generate the noises (with the appropriate normalisation)
dB_anti = (t_final ** H_anti) * fgn(N = time.size, H = H_anti)
dB_regu = (t_final ** H_regu) * fgn(N = time.size, H = H_regu)
dB_posi = (t_final ** H_posi) * fgn(N = time.size, H = H_posi)
# Let's plot the noises, and the associated motions
fig, ax = plt.subplots(2,3, figsize=(12,4));
ax[0,0].plot(time, dB_anti)
ax[0,1].plot(time, dB_regu)
ax[0,2].plot(time, dB_posi)
# their motions are given by the integral of the noise,
# i.e., the cumsum of the process
ax[1,0].plot(time, np.cumsum(dB_anti))
ax[1,1].plot(time, np.cumsum(dB_regu))
ax[1,2].plot(time, np.cumsum(dB_posi))
###Output
_____no_output_____ |
Meter_interface_poc.ipynb | ###Markdown
Load Model
###Code
import matplotlib.pyplot as plt
%matplotlib inline
from inference_poc import Inference
from interface import plot
model_path = 'PRETRAINED_MODELs/very_small/'
S = Inference(model_path)
S.sess = None
###Output
WARNING: Logging before flag parsing goes to stderr.
W1019 16:59:20.114181 4355685824 module_wrapper.py:139] From /Users/pasquini/Desktop/G/InterpretablePPSM/inference_poc.py:60: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.
W1019 16:59:20.114948 4355685824 module_wrapper.py:139] From /Users/pasquini/Desktop/G/InterpretablePPSM/inference_poc.py:60: The name tf.logging.ERROR is deprecated. Please use tf.compat.v1.logging.ERROR instead.
###Markdown
Improve your Password with a Neural Network:Use the variable $\texttt{password}$ in the cell below.Colors depict the security contribution of each character in the password:* Red equals insecure (You should change that character)* Green equals secure (You can keep it unchanged)
###Code
password = 'Ins#Cu%e_pass1'
plot(S, password, CC=True, P=True)
###Output
_____no_output_____ |
Self_training_hands_on.ipynb | ###Markdown
Here we import the important modules
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
We create a dataset of 2 half-moons, an inner one and an outer one
###Code
from sklearn.datasets import make_moons
n_samples = 1500
noise=.05
X, y = make_moons(n_samples=n_samples, shuffle=False, noise = noise)
outer, inner = 0, 1
labels = -np.ones(n_samples)
rand_idx = np.random.choice(n_samples, int(n_samples * 0.01), replace = False)
pos = rand_idx[rand_idx < 750]
neg = rand_idx[rand_idx > 750]
labels[pos] = outer
labels[neg] = inner
plt.figure(figsize=(6, 6))
plt.scatter(X[labels == -1, 0], X[labels == -1, 1], color='#dddddd',
marker='.', label='unlabeled')
plt.scatter(X[labels == outer, 0], X[labels == outer, 1], color='#4286f4',
marker='o', lw=0, label="outer labeled", s=50)
plt.scatter(X[labels == inner, 0], X[labels == inner, 1], color='#0cff00',
marker='o', lw=0, label='inner labeled', s=50)
plt.legend(scatterpoints=1, shadow=False, loc='upper right')
plt.title("Raw data (2 classes=outer and inner)")
###Output
_____no_output_____
###Markdown
Create a machine learning model and train it on the original dataset
###Code
# from sklearn import svm
# clf = svm.SVC(kernel='rbf', probability=True)
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors = 5)
original_data = X[labels != -1]
original_labels = labels[labels != -1]
# TRAIN CLASSIFIER ON ORIGINAL DATA
clf.fit(original_data, original_labels)
plt.figure(figsize=(6, 6))
from matplotlib.colors import ListedColormap
h = .02
cmap_light = ListedColormap(['#bfe3ff', '#baffc1'])
x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1
y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=cmap_light, alpha = 0.3, edgecolors = 'face')
plt.scatter(X[labels == -1, 0], X[labels == -1, 1], color='#dddddd',
marker='.', label='unlabeled')
plt.scatter(X[labels == outer, 0], X[labels == outer, 1], color='#4286f4',
marker='o', lw=0, label="outer labeled", s=50)
plt.scatter(X[labels == inner, 0], X[labels == inner, 1], color='#0cff00',
marker='o', lw=0, label='inner labeled', s=50)
plt.legend(scatterpoints=1, shadow=False, loc='upper right')
plt.title("Raw data (2 classes=outer and inner)")
###Output
_____no_output_____
###Markdown
We create our semi-supervised model for self-training here -------------------------------------------------------------------------------
###Code
n_iter = 700
for i in range(0,n_iter):
# GET THE UNLABELED DATA
unlabeled_data = X[labels == -1]
probabilities = clf.predict_proba(unlabeled_data)
# FIND THE MOST POSITIVE AND MOST NEGATIVE EXAMPLE
maximum_index_positive = np.argmax(probabilities[:,0])
new_point_positive = np.expand_dims(unlabeled_data[maximum_index_positive,:], axis = 0)
maximum_index_negative = np.argmax(probabilities[:,1])
new_point_negative = np.expand_dims(unlabeled_data[maximum_index_negative,:], axis = 0)
# APPEND MOST POSITIVE AND MOST NEGATIVE EXAMPLE TO THE TRAINING SET
original_data = np.append(original_data, new_point_positive, axis = 0)
original_data = np.append(original_data, new_point_negative, axis = 0)
original_labels = np.append(original_labels,[0])
original_labels = np.append(original_labels,[1])
original_index_pos = X.tolist().index(new_point_positive.tolist()[0])
original_index_neg = X.tolist().index(new_point_negative.tolist()[0])
labels[original_index_pos] = 0
labels[original_index_neg] = 1
# RETRAIN THE CLASSIFIER WITH THE NEW DATA
clf.fit(original_data, original_labels)
###Output
_____no_output_____
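###Markdown
As a quick check (an aside, not in the original notebook; `y` holds the ground-truth labels returned by make_moons above), we can measure how well the self-trained classifier recovers the true classes:
###Code
from sklearn.metrics import accuracy_score
print('Accuracy over all points:', accuracy_score(y, clf.predict(X)))
###Output
_____no_output_____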
###Markdown
-------------------------------------------------------------------------------
###Code
plt.figure(figsize=(6, 6))
plt.scatter(X[labels == -1, 0], X[labels == -1, 1], color='#dddddd',
marker='.', label='unlabeled')
plt.scatter(X[labels == outer, 0], X[labels == outer, 1], color='#4286f4',
marker='o', lw=0, label="outer labeled", s=50)
plt.scatter(X[labels == inner, 0], X[labels == inner, 1], color='#0cff00',
marker='o', lw=0, label='inner labeled', s=50)
plt.legend(scatterpoints=1, shadow=False, loc='upper right')
plt.title("Raw data (2 classes=outer and inner)")
###Output
_____no_output_____
###Markdown
Visualize the decision boundary of clf after self-training on the expanded dataset
###Code
plt.figure(figsize=(6, 6))
from matplotlib.colors import ListedColormap
h = .02
cmap_light = ListedColormap(['#bfe3ff', '#baffc1'])
x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1
y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=cmap_light, alpha = 0.3, edgecolors = 'face')
plt.scatter(X[labels == -1, 0], X[labels == -1, 1], color='#dddddd',
marker='.', label='unlabeled')
plt.scatter(X[labels == outer, 0], X[labels == outer, 1], color='#4286f4',
marker='o', lw=0, label="outer labeled", s=50)
plt.scatter(X[labels == inner, 0], X[labels == inner, 1], color='#0cff00',
marker='o', lw=0, label='inner labeled', s=50)
plt.legend(scatterpoints=1, shadow=False, loc='upper right')
plt.title("Raw data (2 classes=outer and inner)")
plt.subplots_adjust(left=0.07, bottom=0.07, right=0.93, top=0.92)
plt.show()
###Output
_____no_output_____ |
tutorials/streamlit_notebooks/ocr/DEID_PDF.ipynb | ###Markdown
Convert & View PDF as images
###Code
for image in PdfToImage().transform(pdf_example_df).collect():
#print(image.exception)
#print(image.metadata)
display_image(image.image)
###Output
_____no_output_____
###Markdown
3. Construct OCR and DEID (NLP) Pipelines De-identification Pipeline
###Code
def deidentification_nlp_pipeline(input_column, prefix = ""):
document_assembler = DocumentAssembler() \
.setInputCol(input_column) \
.setOutputCol(prefix + "document")
# Sentence Detector annotator, processes various sentences per line
sentence_detector = SentenceDetector() \
.setInputCols([prefix + "document"]) \
.setOutputCol(prefix + "sentence")
tokenizer = Tokenizer() \
.setInputCols([prefix + "sentence"]) \
.setOutputCol(prefix + "token")
# Clinical word embeddings
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models") \
.setInputCols([prefix + "sentence", prefix + "token"]) \
.setOutputCol(prefix + "embeddings")
# NER model trained on i2b2 (sampled from MIMIC) dataset
clinical_ner = NerDLModel.pretrained("ner_deid_large", "en", "clinical/models") \
.setInputCols([prefix + "sentence", prefix + "token", prefix + "embeddings"]) \
.setOutputCol(prefix + "ner")
custom_ner_converter = NerConverter() \
.setInputCols([prefix + "sentence", prefix + "token", prefix + "ner"]) \
.setOutputCol(prefix + "ner_chunk") \
.setWhiteList(['NAME', 'AGE', 'CONTACT',
'LOCATION', 'PROFESSION', 'PERSON'])
nlp_pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
clinical_ner,
custom_ner_converter
])
empty_data = spark.createDataFrame([[""]]).toDF(input_column)
nlp_model = nlp_pipeline.fit(empty_data)
return nlp_model
###Output
_____no_output_____
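###Markdown
Before wiring it into the OCR pipeline, the NLP model can be sanity-checked on a plain text snippet (an aside; a minimal sketch assuming the function above, an active Spark session, and a hypothetical example sentence):
###Code
sample_df = spark.createDataFrame([["John Smith was seen in Boston on 2020-03-11."]]).toDF("text")
deid_check_model = deidentification_nlp_pipeline(input_column="text")
deid_check_model.transform(sample_df).selectExpr("explode(ner_chunk.result) as entity").show(truncate=False)
###Output
_____no_output_____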
###Markdown
OCR and PDF to Image Conversion Pipeline.
###Code
# If the PDF has a text layer, extract the text directly
pdf_to_text = PdfToText() \
.setInputCol("content") \
.setOutputCol("text") \
.setSplitPage(False)
# If image pdf, extract image
pdf_to_image = PdfToImage() \
.setInputCol("content") \
.setOutputCol("image_raw") \
.setKeepInput(True)
# Extract text from image
ocr = ImageToText() \
.setInputCol("image_raw") \
.setOutputCol("text") \
.setIgnoreResolution(False) \
.setOcrParams(["preserve_interword_spaces=0"])
# Find coordinates of sensitive data
position_finder = PositionFinder() \
.setInputCols("ner_chunk") \
.setOutputCol("coordinates") \
.setPageMatrixCol("positions") \
.setMatchingWindow(10) \
.setPadding(0)
# Draw filled rectangle to hide sensitive data
draw_regions = ImageDrawRegions() \
.setInputCol("image_raw") \
.setInputRegionsCol("coordinates") \
.setOutputCol("image_with_regions") \
.setFilledRect(True)
# Store image back to pdf
image_to_pdf = ImageToPdf() \
.setInputCol("image_with_regions") \
.setOutputCol("pdf")
# OCR pipeline
pipeline = PipelineModel(stages=[
pdf_to_text,
pdf_to_image,
ocr,
deidentification_nlp_pipeline(input_column="text"),
position_finder,
draw_regions,
image_to_pdf
])
###Output
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_deid_large download started this may take some time.
Approximate size to download 13.9 MB
[OK!]
###Markdown
4. Run the pipelines and save De-identified PDF Document Run Pipeline
###Code
result = pipeline.transform(pdf_example_df).cache()
###Output
_____no_output_____
###Markdown
Save PDF
###Code
pdf = result.select("pdf").head().pdf
pdfFile = open("Result.pdf", "wb")
pdfFile.write(pdf)
pdfFile.close()
###Output
_____no_output_____
###Markdown
5. Load De-identified PDF and Visualize Results
###Code
pdf_example_df = spark.read.format("binaryFile").load("Result.pdf")
for image in PdfToImage().transform(pdf_example_df).collect():
#print(image.exception)
#print(image.metadata)
display_image(image.image)
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/ocr/DEID_PDF.ipynb) **De-identify PDF Documents**Deidentify text and metadata To run this yourself, you will need to upload your **Spark OCR & Spark NLP** license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens. 1. Colab Setup Install correct version of Pillow and Restart runtime
###Code
# Install correct Pillow version
import PIL
if PIL.__version__ != '6.2.1':
print ('Installing correct version of Pillow. Kernel will restart automatically')
!pip install --upgrade pillow==6.2.1
# hard restart runtime
import os
os.kill(os.getpid(), 9)
else:
print ('Correct Pillow detected')
###Output
Correct Pillow detected
###Markdown
Read License Key
###Code
import os
import json
with open('workshop_license_keys.json') as f:
license_keys = json.load(f)
secret = license_keys['JSL_OCR_SECRET']
jsl_secret = license_keys['JSL_SECRET']
os.environ['SPARK_OCR_LICENSE'] = license_keys['SPARK_OCR_LICENSE']
os.environ['JSL_OCR_LICENSE'] = license_keys['SPARK_OCR_LICENSE']
os.environ['AWS_ACCESS_KEY_ID']= license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
version = secret.split("-")[0]
jsl_version = jsl_secret.split('-')[0]
print ('Spark OCR Version:', version)
print ('OCR Version:', version,)
print ('JSL Version:', jsl_version)
###Output
Spark OCR Version: 1.5.0
OCR Version: 1.5.0
JSL Version: 2.5.5
###Markdown
Install Dependencies
###Code
# Install Java
!apt-get update
!apt-get install -y openjdk-8-jdk
!java -version
# Install pyspark
!pip install --ignore-installed -q pyspark==2.4.4
# Install Spark OCR from PYPI using secret
!python -m pip install --upgrade spark-ocr==$version --extra-index-url https://pypi.johnsnowlabs.com/$secret
# Install Spark NLP and Spark NLP JSL
! pip install --ignore-installed -q spark-nlp
!python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$jsl_secret
###Output
_____no_output_____
###Markdown
Importing Libraries
###Code
import pandas as pd
import numpy as np
import os
#Pyspark Imports
from pyspark.sql import SparkSession
from pyspark.ml import PipelineModel
from pyspark.sql import functions as F
# Necessary imports from Spark OCR library
from sparkocr import start
from sparkocr.transformers import *
from sparkocr.enums import *
from sparkocr.utils import display_image, to_pil_image
from sparkocr.metrics import score
import pkg_resources
# import sparknlp packages
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp_jsl
from sparknlp_jsl.annotator import *
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
###Output
_____no_output_____
###Markdown
Start Spark Session
###Code
spark = start(secret=secret,
nlp_secret=jsl_secret,
nlp_version=jsl_version,
nlp_internal=True)
spark
###Output
_____no_output_____
###Markdown
2. Download and read PDF Document
###Code
pdf_example = pkg_resources.resource_filename('sparkocr', 'resources/ocr/pdfs/test_document.pdf')
pdf_example_df = spark.read.format("binaryFile").load(pdf_example).cache()
pdf_example_df.show()
###Output
+--------------------+-------------------+------+--------------------+
| path| modificationTime|length| content|
+--------------------+-------------------+------+--------------------+
|file:/usr/local/l...|2020-08-21 17:28:08|693743|[25 50 44 46 2D 3...|
+--------------------+-------------------+------+--------------------+
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/ocr/DEID_PDF.ipynb) **De-identify PDF Documents**Deidentify text and metadata To run this yourself, you will need to upload your **Spark OCR & Spark NLP** license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens. 1. Colab Setup Install correct version of Pillow and Restart runtime
###Code
# Install correct Pillow version
import PIL
if PIL.__version__ != '6.2.1':
print ('Installing correct version of Pillow. Kernel will restart automatically')
!pip install --upgrade pillow==6.2.1
# hard restart runtime
import os
os.kill(os.getpid(), 9)
else:
print ('Correct Pillow detected')
###Output
_____no_output_____
###Markdown
Read License Key
###Code
import os
import json
with open('workshop_license_keys.json') as f:
license_keys = json.load(f)
secret = license_keys['JSL_OCR_SECRET']
jsl_secret = license_keys['JSL_SECRET']
os.environ['SPARK_OCR_LICENSE'] = license_keys['SPARK_OCR_LICENSE']
os.environ['JSL_OCR_LICENSE'] = license_keys['SPARK_OCR_LICENSE']
os.environ['AWS_ACCESS_KEY_ID']= license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
version = secret.split("-")[0]
jsl_version = jsl_secret.split('-')[0]
print ('Spark OCR Version:', version)
print ('OCR Version:', version,)
print ('JSL Version:', jsl_version)
###Output
_____no_output_____
###Markdown
Install Dependencies
###Code
# Install Java
!apt-get update
!apt-get install -y openjdk-8-jdk
!java -version
# Install pyspark
!pip install --ignore-installed -q pyspark==2.4.4
# Install Spark OCR from PYPI using secret
!python -m pip install --upgrade spark-ocr==$version --extra-index-url https://pypi.johnsnowlabs.com/$secret
# Install Spark NLP and Spark NLP JSL
! pip install --ignore-installed -q spark-nlp
!python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$jsl_secret
###Output
_____no_output_____
###Markdown
Importing Libraries
###Code
import pandas as pd
import numpy as np
import os
#Pyspark Imports
from pyspark.sql import SparkSession
from pyspark.ml import PipelineModel
from pyspark.sql import functions as F
# Necessary imports from Spark OCR library
from sparkocr import start
from sparkocr.transformers import *
from sparkocr.enums import *
from sparkocr.utils import display_image, to_pil_image
from sparkocr.metrics import score
import pkg_resources
# import sparknlp packages
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp_jsl
from sparknlp_jsl.annotator import *
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
###Output
_____no_output_____
###Markdown
Start Spark Session
###Code
spark = start(secret=secret,
nlp_secret=jsl_secret,
nlp_version=jsl_version,
nlp_internal=True)
spark
###Output
_____no_output_____
###Markdown
2. Download and read PDF Document
###Code
pdf_example = pkg_resources.resource_filename('sparkocr', 'resources/ocr/pdfs/test_document.pdf')
pdf_example_df = spark.read.format("binaryFile").load(pdf_example).cache()
pdf_example_df.show()
###Output
_____no_output_____
###Markdown
Convert & View PDF as images
###Code
for image in PdfToImage().transform(pdf_example_df).collect():
#print(image.exception)
#print(image.metadata)
display_image(image.image)
###Output
_____no_output_____
###Markdown
3. Construct OCR and DEID (NLP) Pipelines De-identification Pipeline
###Code
def deidentification_nlp_pipeline(input_column, prefix = ""):
document_assembler = DocumentAssembler() \
.setInputCol(input_column) \
.setOutputCol(prefix + "document")
# Sentence Detector annotator, processes various sentences per line
sentence_detector = SentenceDetector() \
.setInputCols([prefix + "document"]) \
.setOutputCol(prefix + "sentence")
tokenizer = Tokenizer() \
.setInputCols([prefix + "sentence"]) \
.setOutputCol(prefix + "token")
# Clinical word embeddings
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models") \
.setInputCols([prefix + "sentence", prefix + "token"]) \
.setOutputCol(prefix + "embeddings")
# NER model trained on i2b2 (sampled from MIMIC) dataset
clinical_ner = NerDLModel.pretrained("ner_deid_large", "en", "clinical/models") \
.setInputCols([prefix + "sentence", prefix + "token", prefix + "embeddings"]) \
.setOutputCol(prefix + "ner")
custom_ner_converter = NerConverter() \
.setInputCols([prefix + "sentence", prefix + "token", prefix + "ner"]) \
.setOutputCol(prefix + "ner_chunk") \
.setWhiteList(['NAME', 'AGE', 'CONTACT',
'LOCATION', 'PROFESSION', 'PERSON'])
nlp_pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
clinical_ner,
custom_ner_converter
])
empty_data = spark.createDataFrame([[""]]).toDF(input_column)
nlp_model = nlp_pipeline.fit(empty_data)
return nlp_model
###Output
_____no_output_____
###Markdown
OCR and PDF to Image Conversion Pipeline.
###Code
# If the PDF has a text layer, extract the text directly
pdf_to_text = PdfToText() \
.setInputCol("content") \
.setOutputCol("text") \
.setSplitPage(False)
# If image pdf, extract image
pdf_to_image = PdfToImage() \
.setInputCol("content") \
.setOutputCol("image_raw") \
.setKeepInput(True)
# Extract text from image
ocr = ImageToText() \
.setInputCol("image_raw") \
.setOutputCol("text") \
.setIgnoreResolution(False) \
.setOcrParams(["preserve_interword_spaces=0"])
# Find coordinates of sensitive data
position_finder = PositionFinder() \
.setInputCols("ner_chunk") \
.setOutputCol("coordinates") \
.setPageMatrixCol("positions") \
.setMatchingWindow(10) \
.setPadding(0)
# Draw filled rectangle to hide sensitive data
draw_regions = ImageDrawRegions() \
.setInputCol("image_raw") \
.setInputRegionsCol("coordinates") \
.setOutputCol("image_with_regions") \
.setFilledRect(True)
# Store image back to pdf
image_to_pdf = ImageToPdf() \
.setInputCol("image_with_regions") \
.setOutputCol("pdf")
# OCR pipeline
pipeline = PipelineModel(stages=[
pdf_to_text,
pdf_to_image,
ocr,
deidentification_nlp_pipeline(input_column="text"),
position_finder,
draw_regions,
image_to_pdf
])
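# Illustrative sanity check (a sketch, not required by the original notebook): list the
# classes of the stages that were just composed. The nested de-identification stage
# appears as a fitted PipelineModel.
print([type(stage).__name__ for stage in pipeline.stages])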
###Output
_____no_output_____
###Markdown
4. Run the pipelines and save De-identified PDF Document Run Pipeline
###Code
result = pipeline.transform(pdf_example_df).cache()
###Output
_____no_output_____
###Markdown
Save PDF
###Code
pdf = result.select("pdf").head().pdf
pdfFile = open("Result.pdf", "wb")
pdfFile.write(pdf)
pdfFile.close()
###Output
_____no_output_____
###Markdown
5. Load De-identified PDF and Visualize Results
###Code
pdf_example_df = spark.read.format("binaryFile").load("Result.pdf")
for image in PdfToImage().transform(pdf_example_df).collect():
#print(image.exception)
#print(image.metadata)
display_image(image.image)
###Output
_____no_output_____ |
BC4_crypto_forecasting/scripts_updated/ETH_notebook.ipynb | ###Markdown
--> Forecasting - ETH Master Degree Program in Data Science and Advanced Analytics Business Cases with Data Science Project: > Group AA Done by:> - Beatriz Martins Selidónio Gomes, m20210545> - Catarina Inês Lopes Garcez, m20210547 > - Diogo André Domingues Pires, m20201076 > - Rodrigo Faísca Guedes, m20210587 --- Table of Content Import and Data Integration - [Import the needed Libraries](third-bullet) Data Exploration and Understanding - [Initial Analysis (EDA - Exploratory Data Analysis)](fifth-bullet) - [Variables Distribution](seventh-bullet) Data Preparation - [Data Transformation](eighth-bullet) Modelling - [Building LSTM Model](twentysecond-bullet) - [Get Best Parameters for LSTM](twentythird-bullet) - [Run the LSTM Model and Get Predictions](twentyfourth-bullet) - [Recursive Predictions](twentysixth-bullet) --- Import and Data Integration Import the needed Libraries [Back to TOC](toc)
###Code
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Data Exploration and Understanding Initial Analysis (EDA - Exploratory Data Analysis) [Back to TOC](toc)
###Code
df = pd.read_csv('../data/data_aux/df_ETH.csv')
df
###Output
_____no_output_____
###Markdown
Data Types
###Code
# Get to know the number of instances and Features, the DataTypes and if there are missing values in each Feature
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1826 entries, 0 to 1825
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Date 1826 non-null object
1 ETH-USD_ADJCLOSE 1629 non-null float64
2 ETH-USD_CLOSE 1629 non-null float64
3 ETH-USD_HIGH 1629 non-null float64
4 ETH-USD_LOW 1629 non-null float64
5 ETH-USD_OPEN 1629 non-null float64
6 ETH-USD_VOLUME 1629 non-null float64
dtypes: float64(6), object(1)
memory usage: 100.0+ KB
###Markdown
Missing Values
###Code
# Count the number of missing values for each Feature
df.isna().sum().to_frame().rename(columns={0: 'Count Missing Values'})
###Output
_____no_output_____
###Markdown
Descriptive Statistics
###Code
# Descriptive Statistics Table
df.describe().T
# settings to display all columns
pd.set_option("display.max_columns", None)
# display the dataframe head
df.sample(n=10)
#CHECK ROWS THAT HAVE ANY MISSING VALUE IN ONE OF THE COLUMNS
is_NaN = df.isnull()
row_has_NaN = is_NaN.any(axis=1)
rows_with_NaN = df[row_has_NaN]
rows_with_NaN
#FILTER OUT ROWS THAT ARE MISSING INFORMATION
df = df[~row_has_NaN]
df.reset_index(inplace=True, drop=True)
df
###Output
_____no_output_____
###Markdown
Data Preparation Data Transformation [Back to TOC](toc) __`Duplicates`__
###Code
# Checking if exist duplicated observations
print(f'\033[1m' + "Number of duplicates: " + '\033[0m', df.duplicated().sum())
###Output
Number of duplicates: 0
###Markdown
__`Convert Date to correct format`__
###Code
df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d')
df
###Output
_____no_output_____
###Markdown
__`Get percentual difference between open and close values and low and high values`__
###Code
df['pctDiff_CloseOpen'] = abs((df[df.columns[2]]-df[df.columns[5]])/df[df.columns[2]])*100
df['pctDiff_HighLow'] = abs((df[df.columns[3]]-df[df.columns[4]])/df[df.columns[4]])*100
df.head()
def plot_coinValue(df):
#Get coin name
coin_name = df.columns[2].split('-')[0]
#Get date and coin value
x = df['Date']
    y = df[df.columns[2]] # ETH-USD_CLOSE
#Get the volume of trades
v = df[df.columns[-3]]/1e9
#Get percentual diferences
y2 = df[df.columns[-1]] # pctDiff_HighLow
y1= df[df.columns[-2]] # pctDiff_CloseOpen
fig, axs = plt.subplots(3, 1, figsize=(12,14))
axs[0].plot(x, y)
axs[2].plot(x, v)
# plotting the line 1 points
axs[1].plot(x, y1, label = "Close/Open")
# plotting the line 2 points
axs[1].plot(x, y2, label = "High/Low")
axs[1].legend()
axs[0].title.set_text('Time Evolution of '+ coin_name)
axs[0].set(xlabel="", ylabel="Close Value in USD$")
axs[2].title.set_text('Volume of trades of '+ coin_name)
axs[2].set(xlabel="", ylabel="Total number of trades in billions")
axs[1].title.set_text('Daily Market percentual differences of '+ coin_name)
axs[1].set(xlabel="", ylabel="Percentage (%)")
plt.savefig('../analysis/'+coin_name +'_stats'+'.png')
return coin_name
coin_name = plot_coinValue(df)
#FILTER DATASET
df = df.loc[df['Date']>= '2021-01-01']
df
###Output
_____no_output_____
###Markdown
Modelling Building LSTM Model [Back to TOC](toc) Strategy: Create a DF (windowed_df) where the middle columns will correspond to the close values of X days before the target date and the final column will correspond to the close value of the target date. Use these values for prediction and play with the value of X.
###Code
def get_windowed_df(X, df):
start_Date = df['Date'] + pd.Timedelta(days=X)
perm = np.zeros((1,X+1))
#Get labels for DataFrame
j=1
labels=[]
while j <= X:
label = 'closeValue_' + str(j) + 'daysBefore'
labels.append(label)
j+=1
labels.append('closeValue')
for i in range(X,df.shape[0]):
temp = np.zeros((1,X+1))
#Date for i-th day
#temp[0,0] = df.iloc[i]['Date']
#Close values for k days before
for k in range(X):
temp[0,k] = df.iloc[i-k-1,2]
#Close value for i-th date
temp[0,-1] = df.iloc[i,2]
#Add values to the permanent frame
perm = np.vstack((perm,temp))
#Get the array in dataframe form
windowed_df = pd.DataFrame(perm[1:,:], columns = labels)
return windowed_df
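# Toy illustration of the windowing scheme above, on made-up closing prices and X = 3:
# each row holds the three most recent closes (most recent first, mirroring
# closeValue_1daysBefore ... closeValue_3daysBefore) plus the close being predicted.
_toy_close = [10.0, 11.0, 12.0, 13.0, 14.0]
_toy_rows = [(list(reversed(_toy_close[i-3:i])), _toy_close[i]) for i in range(3, len(_toy_close))]
print(_toy_rows)  # -> [([12.0, 11.0, 10.0], 13.0), ([13.0, 12.0, 11.0], 14.0)]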
#Get the dataframe and append the dates
windowed_df = get_windowed_df(3, df)
windowed_df['Date'] = df.iloc[3:]['Date'].reset_index(drop=True)
windowed_df
#Get the X,y and dates into a numpy array to apply on a model
def windowed_df_to_date_X_y(windowed_dataframe):
df_as_np = windowed_dataframe.to_numpy()
dates = df_as_np[:, -1]
middle_matrix = df_as_np[:, 0:-2]
X = middle_matrix.reshape((len(dates), middle_matrix.shape[1], 1))
Y = df_as_np[:, -2]
return dates, X.astype(np.float32), Y.astype(np.float32)
dates, X, y = windowed_df_to_date_X_y(windowed_df)
dates.shape, X.shape, y.shape
#Partition for train, validation and test
q_80 = int(len(dates) * .85)
q_90 = int(len(dates) * .95)
dates_train, X_train, y_train = dates[:q_80], X[:q_80], y[:q_80]
dates_val, X_val, y_val = dates[q_80:q_90], X[q_80:q_90], y[q_80:q_90]
dates_test, X_test, y_test = dates[q_90:], X[q_90:], y[q_90:]
fig,axs = plt.subplots(1, 1, figsize=(12,5))
#Plot the partitions
axs.plot(dates_train, y_train)
axs.plot(dates_val, y_val)
axs.plot(dates_test, y_test)
axs.legend(['Train', 'Validation', 'Test'])
fig.savefig('../analysis/'+coin_name +'_partition'+'.png')
###Output
_____no_output_____
###Markdown
Get Best Parameters for LSTM [Back to TOC](toc)
###Code
#!pip install tensorflow
#import os
#os.environ['PYTHONHASHSEED']= '0'
#import numpy as np
#np.random.seed(1)
#import random as rn
#rn.seed(1)
#import tensorflow as tf
#tf.random.set_seed(1)
#
#from tensorflow.keras.models import Sequential
#from tensorflow.keras.optimizers import Adam
#from tensorflow.keras import layers
#from sklearn.metrics import mean_squared_error
#
## Function to create LSTM model and compute the MSE value for the given parameters
#def check_model(X_train, y_train, X_val, y_val, X_test, y_test, learning_rate,epoch,batch):
#
# # create model
# model = Sequential([layers.Input((3, 1)),
# layers.LSTM(64),
# layers.Dense(32, activation='relu'),
# layers.Dense(32, activation='relu'),
# layers.Dense(1)])
# # Compile model
# model.compile(loss='mse', optimizer=Adam(learning_rate=learning_rate), metrics=['mean_absolute_error'])
#
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=epoch, shuffle=False, batch_size=batch, verbose=2)
#
# test_predictions = model.predict(X_test).flatten()
#
# LSTM_mse = mean_squared_error(y_test, test_predictions)
#
# return LSTM_mse
#
##Function that iterates the different parameters and gets the ones corresponding to the lowest MSE score.
#def search_parameters(batch_size, epochs, learn_rate, X_train, y_train, X_val, y_val, X_test, y_test):
#
# best_score = float('inf')
#
# for b in batch_size:
# for e in epochs:
# for l in learn_rate:
# print('Batch Size: ' + str(b))
# print('Number of Epochs: ' + str(e))
# print('Value of Learning Rate: ' + str(l))
# try:
# mse = check_model(X_train, y_train, X_val, y_val, X_test, y_test,l,e,b)
# print('MSE=%.3f' % (mse))
# if mse < best_score:
# best_score = mse
# top_params = [b, e, l]
# except:
# continue
#
# print('Best MSE=%.3f' % (best_score))
# print('Optimal Batch Size: ' + str(top_params[0]))
# print('Optimal Number of Epochs: ' + str(top_params[1]))
# print('Optimal Value of Learning Rate: ' + str(top_params[2]))
#
#
## define parameters
#batch_size = [10, 100, 1000]
#epochs = [50, 100]
#learn_rate = np.linspace(0.001,0.1, num=10)
#
#warnings.filterwarnings("ignore")
#search_parameters(batch_size, epochs, learn_rate, X_train, y_train, X_val, y_val, X_test, y_test)
###Output
_____no_output_____
###Markdown
Run the LSTM Model and Get Predictions [Back to TOC](toc)
###Code
#BEST SOLUTION OF THE MODEL
# MSE=66761.977
# Batch Size: 1000
# Number of Epochs: 50
# Value of Learning Rate: 0.067
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import layers
from sklearn.metrics import mean_squared_error
model = Sequential([layers.Input((3, 1)),
layers.LSTM(64),
layers.Dense(32, activation='relu'),
layers.Dense(32, activation='relu'),
layers.Dense(1)])
model.compile(loss='mse',
optimizer=Adam(learning_rate=0.067),
metrics=['mean_absolute_error'])
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50, shuffle=False, batch_size=1000, verbose=2)
#PREDICT THE VALUES USING THE MODEL
train_predictions = model.predict(X_train).flatten()
val_predictions = model.predict(X_val).flatten()
test_predictions = model.predict(X_test).flatten()
fig,axs = plt.subplots(3, 1, figsize=(14,14))
axs[0].plot(dates_train, train_predictions)
axs[0].plot(dates_train, y_train)
axs[0].legend(['Training Predictions', 'Training Observations'])
axs[1].plot(dates_val, val_predictions)
axs[1].plot(dates_val, y_val)
axs[1].legend(['Validation Predictions', 'Validation Observations'])
axs[2].plot(dates_test, test_predictions)
axs[2].plot(dates_test, y_test)
axs[2].legend(['Testing Predictions', 'Testing Observations'])
plt.savefig('../analysis/LTSM_recursive/'+coin_name +'_modelPredictions'+'.png')
###Output
_____no_output_____
###Markdown
Recursive Predictions [Back to TOC](toc)
###Code
from copy import deepcopy
#Get prediction for future dates recursively based on the previous existing information. Then update the window of days upon
#which the predictions are made
recursive_predictions = []
recursive_dates = np.concatenate([dates_test])
last_window = deepcopy(X_train[-1])
for target_date in recursive_dates:
next_prediction = model.predict(np.array([last_window])).flatten()
recursive_predictions.append(next_prediction)
    last_window = np.insert(last_window, 0, next_prediction)[:-1].reshape(-1, 1)  # keep the (window, 1) shape the LSTM expects
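# Sketch of the window update above on made-up numbers: np.insert flattens the (3, 1)
# window, [:-1] drops the oldest value, and reshape restores the shape the LSTM expects.
_toy_window = np.array([[3.0], [2.0], [1.0]])
_toy_window = np.insert(_toy_window, 0, 4.0)[:-1].reshape(-1, 1)
print(_toy_window.ravel())  # -> [4. 3. 2.]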
fig,axs = plt.subplots(2, 1, figsize=(14,10))
axs[0].plot(dates_train, train_predictions)
axs[0].plot(dates_train, y_train)
axs[0].plot(dates_val, val_predictions)
axs[0].plot(dates_val, y_val)
axs[0].plot(dates_test, test_predictions)
axs[0].plot(dates_test, y_test)
axs[0].plot(recursive_dates, recursive_predictions)
axs[0].legend(['Training Predictions',
'Training Observations',
'Validation Predictions',
'Validation Observations',
'Testing Predictions',
'Testing Observations',
'Recursive Predictions'])
axs[1].plot(dates_test, y_test)
axs[1].plot(recursive_dates, recursive_predictions)
axs[1].legend(['Testing Observations',
'Recursive Predictions'])
plt.savefig('../analysis/LTSM_recursive/'+coin_name +'_recursivePredictions'+'.png')
may_10_prediction = coin_name +'-USD',recursive_predictions[-2][0]
may_10_prediction
###Output
_____no_output_____ |
practice/machine-learning/practice/2.2.1-perceptron.ipynb | ###Markdown
Chapter 2. Training Simple Classification Algorithms. watermark is a utility for printing the Python packages used in a Jupyter notebook.
###Code
!pip install watermark
%load_ext watermark
%watermark -u -d -p numpy,pandas,matplotlib
###Output
The watermark extension is already loaded. To reload it, use:
%reload_ext watermark
last updated: 2020-02-03
numpy 1.16.5
pandas 0.25.1
matplotlib 3.1.1
###Markdown
Implementing a perceptron learning algorithm in Python. An object-oriented perceptron API
###Code
import numpy as np
class Perceptron(object):
"""퍼셉트론 분류기
매개변수
------------
eta : float
학습률 (0.0과 1.0 사이)
n_iter : int
훈련 데이터셋 반복 횟수
random_state : int
가중치 무작위 초기화를 위한 난수 생성기 시드
속성
-----------
w_ : 1d-array
학습된 가중치
errors_ : list
에포크마다 누적된 분류 오류
"""
def __init__(self, eta=0.01, n_iter=50, random_state=1):
self.eta = eta
self.n_iter = n_iter
self.random_state = random_state
def fit(self, X, y):
"""훈련 데이터 학습
매개변수
----------
X : {array-like}, shape = [n_samples, n_features]
n_samples개의 샘플과 n_features개의 특성으로 이루어진 훈련 데이터
y : array-like, shape = [n_samples]
타깃값
반환값
-------
self : object
"""
rgen = np.random.RandomState(self.random_state)
self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
self.errors_ = []
for _ in range(self.n_iter):
errors = 0
for xi, target in zip(X, y):
update = self.eta * (target - self.predict(xi))
self.w_[1:] += update * xi
self.w_[0] += update
errors += int(update != 0.0)
self.errors_.append(errors)
return self
def net_input(self, X):
"""최종 입력 계산"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
"""단위 계단 함수를 사용하여 클래스 레이블을 반환합니다"""
return np.where(self.net_input(X) >= 0.0, 1, -1)
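# A minimal usage sketch (not from the original notebook): fit the perceptron on a tiny,
# linearly separable toy dataset and check that it converges and classifies new points.
_X_toy = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -1.5]])
_y_toy = np.array([1, 1, -1, -1])
_ppn = Perceptron(eta=0.1, n_iter=10)
_ppn.fit(_X_toy, _y_toy)
print(_ppn.errors_)  # misclassifications per epoch; should drop to 0 quickly
print(_ppn.predict(np.array([[1.5, 1.0], [-1.5, -1.0]])))  # expected: [ 1 -1]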
###Output
_____no_output_____ |
projects/imaterialist_2018/notebooks/iMaterlialist_Keras_ResNet50.ipynb | ###Markdown
iMaterialist Challenge (Furniture) at FGVC5 TFNW Kaggle Team Train a ResNet50 networkhttps://www.kaggle.com/c/imaterialist-challenge-furniture-2018@alkari *Restart runtime*
###Code
#!kill -9 -1
###Output
_____no_output_____
###Markdown
Check GPU/Memory
###Code
# memory footprint support libraries/code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def printm():
process = psutil.Process(os.getpid())
print('Gen RAM Free: ' + humanize.naturalsize( psutil.virtual_memory().available ), ' I Proc size: ' + humanize.naturalsize( process.memory_info().rss))
print('GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB'.format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()
###Output
_____no_output_____
###Markdown
Install pre-requisites
###Code
! pip install --upgrade -q pydot
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
!mkdir -p drive
!google-drive-ocamlfuse drive
import tensorflow as tf
tf.test.gpu_device_name()
###Output
_____no_output_____
###Markdown
Start here...
###Code
# Helper functions
def elapsed (start):
"""
Returns elapsed time in hh:mm:ss format from start time in unix format
"""
elapsed = time.time()-start
hours, rem = divmod(elapsed, 3600)
minutes, seconds = divmod(rem, 60)
return("{:0>2}:{:0>2}:{:05.2f}".format(int(hours),int(minutes),seconds))
import os
os.chdir('/content/drive')
import time
import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
# Load Classes
test_path="iMaterialist/validation_dataset/"
test_dataset_name = 'validation_last'
with h5py.File(test_path+'{}_labels.h5'.format(test_dataset_name), 'r') as hf:
test_set_y_orig = np.array(hf['{}_labels'.format(test_dataset_name)][:])
classes = []
for i in range (1,len(test_set_y_orig)):
if test_set_y_orig[i] not in classes:
classes.append(test_set_y_orig[i])
classes = np.array(classes) # the list of classes
print(classes.shape)
test_set_y_orig = None
def load_dataset(train_path, train_dataset_name, batch_size=1000):
#train_path="iMaterialist/train_dataset/"
#test_path="iMaterialist/validation_dataset/"
#train_dataset_name = 'train_1'
#test_dataset_name = 'validation_last'
# Train dataset
with h5py.File(train_path+'{}_images.h5'.format(train_dataset_name), 'r') as hf:
train_set_x_orig = np.array(hf['{}_images'.format(train_dataset_name)][batch_size-1000:batch_size])
with h5py.File(train_path+'{}_labels.h5'.format(train_dataset_name), 'r') as hf:
train_set_y_orig = np.array(hf['{}_labels'.format(train_dataset_name)][batch_size-1000:batch_size])
# Test dataset (validation)
#with h5py.File(test_path+'{}_images.h5'.format(test_dataset_name), 'r') as hf:
# test_set_x_orig = np.array(hf['{}_images'.format(test_dataset_name)][batch_size-1000:batch_size])
#with h5py.File(test_path+'{}_labels.h5'.format(test_dataset_name), 'r') as hf:
# test_set_y_orig = np.array(hf['{}_labels'.format(test_dataset_name)][batch_size-1000:batch_size])
train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
#test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
#return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
return train_set_x_orig, train_set_y_orig, classes
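# Illustrative note (a sketch): the `batch_size` argument above is really the *end index*
# of a 1000-row slice, so calling load_dataset with 1000, 2000, 3000, ... walks through
# the HDF5 files in consecutive chunks of 1000 images.
for _end in (1000, 2000, 3000):
    print('rows', _end - 1000, 'to', _end - 1)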
!free -m
def identity_block(X, f, filters, stage, block):
"""
Implementation of ResNet identity block
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
# Second component of main path
X = Conv2D(filters= F2, kernel_size= (f,f), strides= (1,1), padding= 'same', name= conv_name_base + '2b', kernel_initializer= glorot_uniform(seed=0))(X)
X = BatchNormalization(axis= 3, name= bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path
X = Conv2D(filters= F3, kernel_size= (1,1), strides= (1,1), padding= 'valid', name= conv_name_base + '2c', kernel_initializer= glorot_uniform(seed=0))(X)
X = BatchNormalization(axis= 3, name= bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
return X
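# Illustrative shape check (a sketch, not part of the original notebook): an identity
# block must preserve the spatial size and channel depth of its input so that the
# shortcut addition is valid. Stage 9 / block 'x' are arbitrary names used only here.
_x_check = Input((8, 8, 256))
_out_check = identity_block(_x_check, f=3, filters=[64, 64, 256], stage=9, block='x')
print(K.int_shape(_out_check))  # expected: (None, 8, 8, 256)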
def convolutional_block(X, f, filters, stage, block, s = 2):
"""
Implementation of ResNet convolutional block
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
s -- Integer, specifying the stride to be used
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
# Second component of main path
X = Conv2D(F2, (f,f), strides= (1,1), padding= 'same', name= conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis= 3, name= bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path
X = Conv2D(F3, (1,1), strides=(1,1), padding='valid', name= conv_name_base +'2c', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis= 3, name= bn_name_base +'2c')(X)
##### SHORTCUT PATH ####
X_shortcut = Conv2D(F3, (1,1), strides=(s,s), padding='valid', name= conv_name_base + '1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis = 3, name= bn_name_base +'1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
return X
def ResNet50(input_shape = (300, 300, 3), classes = 129):
"""
Implementation of ResNet50 with the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
"""
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
# Stage 3
X = convolutional_block(X, f = 3, filters = [128, 128, 512], stage = 3, block='a', s = 2)
X = identity_block(X, 3, [128, 128, 512], stage=3, block='b')
X = identity_block(X, 3, [128, 128, 512], stage=3, block='c')
X = identity_block(X, 3, [128, 128, 512], stage=3, block='d')
# Stage 4
X = convolutional_block(X, f = 3, filters = [256, 256, 1024], stage = 4, block='a', s = 2)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')
# Stage 5
X = convolutional_block(X, f = 3, filters = [512, 512, 2048], stage = 5, block='a', s = 2)
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')
# AVGPOOL
X = AveragePooling2D(pool_size=(2, 2))(X)
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs = X_input, outputs = X, name='ResNet50')
return model
###Output
_____no_output_____
###Markdown
Build the model
###Code
model = ResNet50(input_shape = (300, 300, 3), classes = 129)
###Output
_____no_output_____
###Markdown
Compile the model
###Code
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The model is now ready to be trained. The only thing you need is a dataset.
###Code
train_path="iMaterialist/train_dataset/"
test_path="iMaterialist/validation_dataset/"
#train_dataset_name = 'train_2'
test_dataset_name = 'validation_last'
load_batch_size = 1000
assert load_batch_size == 1000
###Output
_____no_output_____
###Markdown
Begin training
###Code
start = time.time()
for dataset_number in range (1,10):
train_dataset_name = "train_{}".format(dataset_number)
for batch in range(load_batch_size,5001,load_batch_size):
X_train_orig, Y_train_orig, classes = load_dataset(train_path, train_dataset_name, batch)
# Normalize image vectors
X_train = X_train_orig/255.
#X_test = X_test_orig/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 129).T
#Y_test = convert_to_one_hot(Y_test_orig, 129).T
#print ("number of training examples = " + str(X_train.shape[0]))
#print ("number of test examples = " + str(X_test.shape[0]))
#print ("X_train shape: " + str(X_train.shape))
#print ("Y_train shape: " + str(Y_train.shape))
#print ("X_test shape: " + str(X_test.shape))
#print ("Y_test shape: " + str(Y_test.shape))
print('\nTraining dataset {}'.format(dataset_number))
print("\n*****Training batch# {}".format(batch)+"*****\n")
model.fit(X_train, Y_train, epochs = 20, batch_size = 50)
print('\n-------------------------- Elapsed time: {} --------------------------'.format(elapsed(start)))
model.save('iMaterlialist-Keras-ResNet50-{}.h5'.format(dataset_number))
print('\nCheckpoint saved. Elapsed time: {}'.format(elapsed(start)))
model.save('iMaterlialist-Keras-ResNet50.h5')
#model.fit(X_train, Y_train, epochs = 20, batch_size = 64)
###Output
_____no_output_____
###Markdown
Try this model on the test set.
###Code
model = load_model('iMaterlialist-Keras-ResNet50-3.h5')
def load_test_dataset(test_path, test_dataset_name):
# Test dataset (validation)
with h5py.File(test_path+'{}_images.h5'.format(test_dataset_name), 'r') as hf:
test_set_x_orig = np.array(hf['{}_images'.format(test_dataset_name)][:500])
with h5py.File(test_path+'{}_labels.h5'.format(test_dataset_name), 'r') as hf:
test_set_y_orig = np.array(hf['{}_labels'.format(test_dataset_name)][:500])
test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
X_test = test_set_x_orig/255.
Y_test = convert_to_one_hot(test_set_y_orig, 129).T
return X_test, Y_test
test_path="iMaterialist/validation_dataset/"
test_dataset_name = 'validation_last'
X_test, Y_test = load_test_dataset(test_path, test_dataset_name)
start = time.time()
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
print('\nElapsed time: {}'.format(elapsed(start)))
###Output
500/500 [==============================] - 453s 906ms/step
Loss = 8.32625260925293
Test Accuracy = 0.10600000002980232
Elapsed time: 00:07:32.90
###Markdown
Test on your own image You can upload an image and see the output of the model. To do this: 1. Click on "File" in the upper bar of this notebook, then click "Open". 2. Add your image to this Jupyter Notebook's directory 3. Write your image's name in the following code 4. Predict!
###Code
img_path = 'my_image.jpg'
img = image.load_img(img_path, target_size=(300, 300))  # match the model's 300x300x3 input shape
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)
imshow(my_image)
print("class prediction = ")
print(model.predict(x))
###Output
_____no_output_____
###Markdown
You can also print a summary of your model by running the following code.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Visualize this ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png".
###Code
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
###Output
_____no_output_____
###Markdown
References This notebook presents the ResNet algorithm due to He et al. (2015). The implementation here also took significant inspiration and follows the structure given in the github repository of Francois Chollet: - Coursera Convolusional Neural Networks - deeplearning.ai- Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - [Deep Residual Learning for Image Recognition (2015)](https://arxiv.org/abs/1512.03385)- Francois Chollet's github repository: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py
###Code
! rm -rf /swapfile
#!dd if=/dev/zero of=/swapfile bs=1G count=20
!fallocate -l 20G /swapfile
!chmod 600 /swapfile
!ls -lh /swapfile
!mkswap /swapfile
!swapon /swapfile
!sysctl vm.swappiness=10
!sysctl vm.vfs_cache_pressure=60
!swapon -s
!free -m
###Output
_____no_output_____ |
gdp_life_satisfaction/gdp_life_satisfaction.ipynb | ###Markdown
GDP vs Life Satisfaction Regression Model Examples from [Chapter 01 (ML and DP in python)](https://github.com/ageron/handson-ml/blob/master/01_the_machine_learning_landscape.ipynb)
###Code
from __future__ import division, print_function, unicode_literals
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import os
import sklearn.linear_model
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Ignore useless warnings (see SciPy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd")
def prepare_country_stats(oecd_bli, gdp_per_capita):
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"] == "TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita,
left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
remove_indices = [0, 1, 6, 8, 33, 34, 35]
keep_indices = list(set(range(36)) - set(remove_indices))
return full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
datapath = os.path.join("datasets", "lifesat", "")
# Load the data
oecd_bli = pd.read_csv(datapath + "oecd_bli.csv", thousands=',')
gdp_per_capita = pd.read_csv(datapath + "gdp_per_capita.xls",
thousands=',',
delimiter='\t',
encoding='latin1',
na_values="n/a")
# Prepare the data
country_stats = prepare_country_stats(oecd_bli, gdp_per_capita)
country_stats.head()
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Visualize the data
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
# Select a linear model
model = sklearn.linear_model.LinearRegression()
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
cyprus_gdp_per_capita = gdp_per_capita.loc["Cyprus"]["GDP per capita"]
cyprus_predicted_life_satisfaction = model.predict([[cyprus_gdp_per_capita]])[0][0]  # predict expects a 2D array
print("Cyprus - GPD per capita: {0}, Predicted life satisfaction: {1}".format(cyprus_gdp_per_capita, cyprus_predicted_life_satisfaction))
plt.text(25000, 5.0, r"Prediction = {0:.2f}".format(cyprus_predicted_life_satisfaction), fontsize=14, color="b")
plt.plot([cyprus_gdp_per_capita, cyprus_gdp_per_capita], [0, cyprus_predicted_life_satisfaction], "r--")
plt.plot(cyprus_gdp_per_capita, cyprus_predicted_life_satisfaction, "ro")
plt.show()
t0, t1 = model.intercept_[0], model.coef_[0][0]
t0, t1
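# Illustrative sanity check: a LinearRegression prediction is just t0 + t1 * (GDP per capita),
# so recomputing it by hand should match the Cyprus prediction printed above.
print(t0 + t1 * cyprus_gdp_per_capita)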
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
plt.axis([0, 60000, 0, 10])
X=np.linspace(0, 60000, 1000)
plt.plot(X, t0 + t1*X, "b")
plt.text(5000, 3.1, r"$\theta_0 = 5.29$", fontsize=14, color="b")
plt.text(5000, 2.2, r"$\theta_1 = 3.93 \times 10^{-5}$", fontsize=14, color="b")
plt.text(25000, 5.0, r"Prediction = {0:.2f}".format(cyprus_predicted_life_satisfaction), fontsize=14, color="b")
plt.plot(cyprus_gdp_per_capita, cyprus_predicted_life_satisfaction, "ro")
plt.show()
plt.figure(figsize=(8,3))
plt.plot(list(country_stats["GDP per capita"]), list(country_stats["Life satisfaction"]), "bo")
X = np.linspace(0, 110000, 1000)
plt.plot(X, t0 + t1*X, "b:", label="Linear model on partial data")
ridge = sklearn.linear_model.Ridge(alpha=10**9.5)
Xsample = np.c_[country_stats["GDP per capita"]]
ysample = np.c_[country_stats["Life satisfaction"]]
ridge.fit(Xsample, ysample)
t0ridge, t1ridge = ridge.intercept_[0], ridge.coef_[0][0]
plt.plot(X, t0ridge + t1ridge * X, "b", label="Regularized linear model on partial data")
ridge_pred = ridge.predict([[cyprus_gdp_per_capita]])[0][0]
plt.plot(cyprus_gdp_per_capita, ridge_pred, "mo", label="Cyprus Prediction")
plt.legend(loc="lower right")
plt.axis([0, 110000, 0, 10])
plt.show()
# k-nearest neighbors regression
import sklearn.neighbors
X = Xsample
Y = ysample
knn = sklearn.neighbors.KNeighborsRegressor(n_neighbors=3)
knn.fit(X, Y)
knn_pred = knn.predict([[cyprus_gdp_per_capita]])[0][0]
plt.plot(cyprus_gdp_per_capita, knn_pred, "mo", label="Cyprus Prediction (k-nearest-neighbours)")
plt.legend(loc="lower right")
plt.axis([0, 110000, 0, 10])
plt.show()
x_min, x_max = 0, X.max() + .5
y_min, y_max = 0, Y.max() + .5
###Output
_____no_output_____ |
Generate Faces/dlnd_face_generation.ipynb | ###Markdown
Face GenerationIn this project, you'll define and train a DCGAN on a dataset of faces. Your goal is to get a generator network to generate *new* images of faces that look as realistic as possible!The project will be broken down into a series of tasks from **loading in data to defining and training adversarial networks**. At the end of the notebook, you'll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise. Get the DataYou'll be using the [CelebFaces Attributes Dataset (CelebA)](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to train your adversarial networks.This dataset is more complex than the number datasets (like MNIST or SVHN) you've been working with, and so, you should prepare to define deeper networks and train them for a longer time to get good results. It is suggested that you utilize a GPU for training. Pre-processed DataSince the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. Some sample data is show below.> If you are working locally, you can download this data [by clicking here](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5be7eb6f_processed-celeba-small/processed-celeba-small.zip)This is a zip file that you'll need to extract in the home directory of this notebook for further loading and processing. After extracting the data, you should be left with a directory of data `processed_celeba_small/`
###Code
# can comment out after executing
!unzip processed_celeba_small.zip
data_dir = 'processed_celeba_small/'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
import problem_unittests as tests
#import helper
%matplotlib inline
###Output
_____no_output_____
###Markdown
Visualize the CelebA DataThe [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations, you'll only need the images. Note that these are color images with [3 color channels (RGB)](https://en.wikipedia.org/wiki/Channel_(digital_image)RGB_Images) each. Pre-process and Load the DataSince the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. This *pre-processed* dataset is a smaller subset of the very large CelebA data.> There are a few other steps that you'll need to **transform** this data and create a **DataLoader**. Exercise: Complete the following `get_dataloader` function, such that it satisfies these requirements:* Your images should be square, Tensor images of size `image_size x image_size` in the x and y dimension.* Your function should return a DataLoader that shuffles and batches these Tensor images. ImageFolderTo create a dataset given a directory of images, it's recommended that you use PyTorch's [ImageFolder](https://pytorch.org/docs/stable/torchvision/datasets.htmlimagefolder) wrapper, with a root directory `processed_celeba_small/` and data transformation passed in.
###Code
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms
def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
"""
Batch the neural network data using DataLoader
:param batch_size: The size of each batch; the number of images in a batch
:param img_size: The square size of the image data (x, y)
:param data_dir: Directory where image data is located
:return: DataLoader with batched data
"""
transform = transforms.Compose([
transforms.Resize(image_size),
transforms.CenterCrop(image_size),
transforms.ToTensor()
])
dataset = datasets.ImageFolder(data_dir, transform=transform)
# TODO: Implement function and return a dataloader
data_loader = torch.utils.data.DataLoader(dataset=dataset,
batch_size=batch_size,
shuffle=True)
return data_loader
###Output
_____no_output_____
###Markdown
Create a DataLoader Exercise: Create a DataLoader `celeba_train_loader` with appropriate hyperparameters. Call the above function and create a dataloader to view images. * You can decide on any reasonable `batch_size` parameter. * Your `image_size` **must be** `32`. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!
###Code
# Define function hyperparameters
batch_size = 128
img_size = 32
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)
###Output
_____no_output_____
###Markdown
Next, you can view some images! You should see square images of somewhat-centered faces. Note: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image; suggested `imshow` code is below, but it may not be perfect.
###Code
# helper display function
def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = dataiter.next() # _ for no labels
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(20, 4))
plot_size=20
for idx in np.arange(plot_size):
ax = fig.add_subplot(2, plot_size/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
###Output
_____no_output_____
###Markdown
Exercise: Pre-process your image data and scale it to a pixel range of -1 to 1. You need to do a bit of pre-processing; you know that the output of a `tanh` activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
###Code
# TODO: Complete the scale function
def scale(x, feature_range=(-1, 1)):
''' Scale takes in an image x and returns that image, scaled
with a feature_range of pixel values from -1 to 1.
This function assumes that the input x is already scaled from 0-1.'''
# assume x is scaled to (0, 1)
# scale to feature_range and return scaled x
min, max = feature_range
x = x * (max - min) + min
return x
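# Quick illustrative check (a sketch): the endpoints of the 0-1 input range should map
# exactly to the ends of the default feature_range.
print(scale(torch.tensor([0.0, 0.5, 1.0])))  # expected: tensor([-1., 0., 1.])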
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# check scaled range
# should be close to -1 to 1
img = images[0]
scaled_img = scale(img)
print('Min: ', scaled_img.min())
print('Max: ', scaled_img.max())
###Output
Min: tensor(-0.9843)
Max: tensor(0.7882)
###Markdown
--- Define the Model. A GAN is composed of two adversarial networks, a discriminator and a generator. Discriminator: Your first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with **normalization**. You are also allowed to create any helper functions that may be useful. Exercise: Complete the Discriminator class. * The inputs to the discriminator are 32x32x3 tensor images. * The output should be a single value that will indicate whether a given image is real or fake.
###Code
import torch.nn as nn
import torch.nn.functional as F
# Helper fun
"""
Create a convolutional layer, with optional batch normalization.
"""
def conv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
layers = []
conv_layer = nn.Conv2d(in_channels, out_channels,
kernel_size, stride, padding, bias=False)
# append conv layer
layers.append(conv_layer)
if batch_norm:
# append batchnorm layer
layers.append(nn.BatchNorm2d(out_channels))
# using Sequential container
return nn.Sequential(*layers)
class Discriminator(nn.Module):
def __init__(self, conv_dim):
"""
Initialize the Discriminator Module
:param conv_dim: The depth of the first convolutional layer
"""
super(Discriminator, self).__init__()
# complete init function
self.conv_dim = conv_dim
# 32x32 input
self.conv1 = conv(3, conv_dim, 4, batch_norm=False) # first layer, no batch_norm
# 16x16 output
self.conv2 = conv(conv_dim, conv_dim*2, 4)
# 8x8 output
self.conv3 = conv(conv_dim*2, conv_dim*4, 4)
# 4x4 output
self.conv4 = conv(conv_dim*4, conv_dim*8, 4)
# 2x2 output
# final, fully-connected layer
self.fc = nn.Linear(conv_dim*8*2*2, 1)
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: Discriminator logits; the output of the neural network
"""
# define feedforward behavior
# all hidden layers + leaky relu activation
out = F.leaky_relu(self.conv1(x), 0.2)
out = F.leaky_relu(self.conv2(out), 0.2)
out = F.leaky_relu(self.conv3(out), 0.2)
out = F.leaky_relu(self.conv4(out), 0.2)
# flatten
out = out.view(-1, self.conv_dim*8*2*2)
# final output layer
out = self.fc(out)
return out
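# Illustrative shape check (separate from the unit test below): a batch of two 32x32 RGB
# images should produce one logit per image.
_d_check = Discriminator(conv_dim=32)
print(_d_check(torch.randn(2, 3, 32, 32)).shape)  # expected: torch.Size([2, 1])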
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(Discriminator)
###Output
Tests Passed
###Markdown
Generator: The generator should upsample an input and generate a *new* image of the same size as our training data `32x32x3`. This should be mostly transpose convolutional layers with normalization applied to the outputs. Exercise: Complete the Generator class. * The inputs to the generator are vectors of some length `z_size`. * The output should be an image of shape `32x32x3`.
###Code
def deconv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
layers=[]
transpose_conv_layer = nn.ConvTranspose2d(in_channels, out_channels,
kernel_size, stride, padding, bias=False)
# append transpose convolutional layer
layers.append(transpose_conv_layer)
if batch_norm:
# append batchnorm layer
layers.append(nn.BatchNorm2d(out_channels))
return nn.Sequential(*layers)
class Generator(nn.Module):
def __init__(self, z_size, conv_dim):
"""
Initialize the Generator Module
:param z_size: The length of the input latent vector, z
:param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
"""
super(Generator, self).__init__()
# complete init function
self.conv_dim = conv_dim
# first, fully-connected layer
self.fc = nn.Linear(z_size, conv_dim*8*2*2)
# transpose conv layers
self.deconv1 = deconv(conv_dim*8, conv_dim*4, 4)
self.deconv2 = deconv(conv_dim*4, conv_dim*2, 4)
self.deconv3 = deconv(conv_dim*2, conv_dim, 4)
self.deconv4 = deconv(conv_dim, 3, 4, batch_norm=False)
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: A 32x32x3 Tensor image as output
"""
# define feedforward behavior
out = self.fc(x)
out = out.view(-1, self.conv_dim*8, 2, 2) # (batch_size, depth, 4, 4)
# hidden transpose conv layers + relu
out = F.relu(self.deconv1(out))
out = F.relu(self.deconv2(out))
out = F.relu(self.deconv3(out))
# last layer + tanh activation
out = self.deconv4(out)
out = torch.tanh(out)
return out
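# Illustrative shape check (separate from the unit test below): a batch of two latent
# vectors should come out as two 32x32 RGB images.
_g_check = Generator(z_size=100, conv_dim=32)
print(_g_check(torch.randn(2, 100)).shape)  # expected: torch.Size([2, 3, 32, 32])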
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(Generator)
###Output
Tests Passed
###Markdown
Initialize the weights of your networksTo help your models converge, you should initialize the weights of the convolutional and linear layers in your model. From reading the [original DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf), they say:> All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02.So, your next task will be to define a weight initialization function that does just this!You can refer back to the lesson on weight initialization or even consult existing model code, such as that from [the `networks.py` file in CycleGAN Github repository](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/networks.py) to help you complete this function. Exercise: Complete the weight initialization function* This should initialize only **convolutional** and **linear** layers* Initialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02.* The bias terms, if they exist, may be left alone or set to 0.
###Code
def weights_init_normal(m):
"""
Applies initial weights to certain layers in a model .
The weights are taken from a normal distribution
with mean = 0, std dev = 0.02.
:param m: A module or layer in a network
"""
# classname will be something like:
# `Conv`, `BatchNorm2d`, `Linear`, etc.
classname = m.__class__.__name__
# TODO: Apply initial weights to convolutional and linear layers
if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
m.weight.data.normal_(0.0, 0.02)
if hasattr(m, 'bias') and m.bias is not None:
m.bias.data.zero_()
###Output
_____no_output_____
###Markdown
Build complete network. Define your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def build_network(d_conv_dim, g_conv_dim, z_size):
# define discriminator and generator
D = Discriminator(d_conv_dim)
G = Generator(z_size=z_size, conv_dim=g_conv_dim)
# initialize model weights
D.apply(weights_init_normal)
G.apply(weights_init_normal)
print(D)
print()
print(G)
return D, G
###Output
_____no_output_____
###Markdown
Exercise: Define model hyperparameters
###Code
# Define model hyperparams
d_conv_dim = 32
g_conv_dim = 32
z_size = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
###Output
Discriminator(
(conv1): Sequential(
(0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
)
(conv2): Sequential(
(0): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv3): Sequential(
(0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv4): Sequential(
(0): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(fc): Linear(in_features=1024, out_features=1, bias=True)
)
Generator(
(fc): Linear(in_features=100, out_features=1024, bias=True)
(deconv1): Sequential(
(0): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(deconv2): Sequential(
(0): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(deconv3): Sequential(
(0): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(deconv4): Sequential(
(0): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
)
)
###Markdown
Training on GPU. Check if you can train on GPU. Here, we'll set this as a boolean variable `train_on_gpu`. Later, you'll be responsible for making sure that models, model inputs, and loss function arguments are moved to GPU, where appropriate.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
else:
print('Training on GPU!')
###Output
Training on GPU!
###Markdown
--- Discriminator and Generator LossesNow we need to calculate the losses for both types of adversarial networks. Discriminator Losses> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`. * Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Generator LossThe generator loss will look similar only with flipped labels. The generator's goal is to get the discriminator to *think* its generated images are *real*. Exercise: Complete real and fake loss functions**You may choose to use either cross entropy or a least squares error loss to complete the following `real_loss` and `fake_loss` functions.**
###Code
def real_loss(D_out):
'''Calculates how close discriminator outputs are to being real.
param, D_out: discriminator logits
return: real loss'''
batch_size = D_out.size(0)
labels = torch.ones(batch_size) * 0.9
if train_on_gpu:
labels = labels.cuda()
criterion = nn.BCEWithLogitsLoss()
loss = criterion(D_out.squeeze(), labels)
return loss
def fake_loss(D_out):
'''Calculates how close discriminator outputs are to being fake.
param, D_out: discriminator logits
return: fake loss'''
batch_size = D_out.size(0)
labels = torch.zeros(batch_size) # fake labels = 0
if train_on_gpu:
labels = labels.cuda()
criterion = nn.BCEWithLogitsLoss()
# calculate loss
loss = criterion(D_out.squeeze(), labels)
return loss
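# Small illustration on made-up logits (a sketch): strongly "real-looking" logits give a
# much lower real_loss than fake_loss, which is what pushes D and G in opposite directions.
# The tensor is moved to the GPU so it matches the device of the labels created above.
_demo_logits = torch.tensor([[6.0], [7.0]])
if train_on_gpu:
    _demo_logits = _demo_logits.cuda()
print(real_loss(_demo_logits).item(), fake_loss(_demo_logits).item())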
###Output
_____no_output_____
###Markdown
Optimizers Exercise: Define optimizers for your Discriminator (D) and Generator (G)Define optimizers for your models with appropriate hyperparameters.
###Code
import torch.optim as optim
# Create optimizers for the discriminator D and generator G
d_optimizer = optim.Adam(D.parameters(), lr=0.0002, betas=(0.5, 0.999))
g_optimizer = optim.Adam(G.parameters(), lr=0.0002, betas=(0.5, 0.999))
###Output
_____no_output_____
###Markdown
--- Training. Training will involve alternating between training the discriminator and the generator. You'll use your functions `real_loss` and `fake_loss` to help you calculate the discriminator losses. * You should train the discriminator by alternating on real and fake images. * Then train the generator, which tries to trick the discriminator and should have an opposing loss function. Saving Samples: You've been given some code to print out some loss statistics and save some generated "fake" samples. Exercise: Complete the training function. Keep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.
###Code
def train(D, G, n_epochs, print_every=50):
'''Trains adversarial networks for some number of epochs
param, D: the discriminator network
param, G: the generator network
param, n_epochs: number of epochs to train for
param, print_every: when to print and record the models' losses
return: D and G losses'''
# move models to GPU
if train_on_gpu:
D.cuda()
G.cuda()
# keep track of loss and generated, "fake" samples
samples = []
losses = []
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# move z to GPU if available
if train_on_gpu:
fixed_z = fixed_z.cuda()
# epoch training loop
for epoch in range(n_epochs):
# batch training loop
for batch_i, (real_images, _) in enumerate(celeba_train_loader):
batch_size = real_images.size(0)
real_images = scale(real_images)
# ===============================================
# YOUR CODE HERE: TRAIN THE NETWORKS
# ===============================================
d_optimizer.zero_grad()
if train_on_gpu:
real_images = real_images.cuda()
# 1. Train the discriminator on real and fake images
# Compute the discriminator losses on real images
D_real = D(real_images)
d_real_loss = real_loss(D_real)
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
# Compute the discriminator losses on fake images
D_fake = D(fake_images)
d_fake_loss = fake_loss(D_fake)
# add up loss and perform backprop
d_loss = d_real_loss + d_fake_loss
d_loss.backward()
d_optimizer.step()
# 2. Train the generator with an adversarial loss
g_optimizer.zero_grad()
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
# Compute the discriminator losses on fake images
D_fake = D(fake_images)
g_loss = real_loss(D_fake)
# perfom backprop
g_loss.backward()
g_optimizer.step()
# ===============================================
# END OF YOUR CODE
# ===============================================
# Print some loss stats
if batch_i % print_every == 0:
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, n_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# this code assumes your generator is named G, feel free to change the name
# generate and save sample, fake images
G.eval() # for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to training mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
# finally return losses
return losses
###Output
_____no_output_____
###Markdown
Set your number of training epochs and train your GAN!
###Code
# set number of epochs
n_epochs = 25
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# call training function
losses = train(D, G, n_epochs=n_epochs)
###Output
Epoch [ 1/ 25] | d_loss: 1.4017 | g_loss: 0.8240
Epoch [ 1/ 25] | d_loss: 0.3920 | g_loss: 3.5901
Epoch [ 1/ 25] | d_loss: 0.4843 | g_loss: 2.8328
Epoch [ 1/ 25] | d_loss: 0.5643 | g_loss: 2.7544
Epoch [ 1/ 25] | d_loss: 1.1868 | g_loss: 2.3404
Epoch [ 1/ 25] | d_loss: 0.4945 | g_loss: 2.6309
Epoch [ 1/ 25] | d_loss: 2.2459 | g_loss: 1.3849
Epoch [ 1/ 25] | d_loss: 0.7223 | g_loss: 1.1103
Epoch [ 1/ 25] | d_loss: 0.8822 | g_loss: 1.7905
Epoch [ 1/ 25] | d_loss: 0.7244 | g_loss: 2.1008
Epoch [ 1/ 25] | d_loss: 0.8887 | g_loss: 2.2484
Epoch [ 1/ 25] | d_loss: 0.8956 | g_loss: 1.9097
Epoch [ 1/ 25] | d_loss: 1.0191 | g_loss: 3.1118
Epoch [ 1/ 25] | d_loss: 0.9009 | g_loss: 2.3675
Epoch [ 1/ 25] | d_loss: 1.2762 | g_loss: 2.6951
Epoch [ 2/ 25] | d_loss: 1.0838 | g_loss: 2.4732
Epoch [ 2/ 25] | d_loss: 0.9553 | g_loss: 2.2299
Epoch [ 2/ 25] | d_loss: 0.9487 | g_loss: 2.2886
Epoch [ 2/ 25] | d_loss: 1.1179 | g_loss: 3.4549
Epoch [ 2/ 25] | d_loss: 0.8968 | g_loss: 1.7442
Epoch [ 2/ 25] | d_loss: 1.0685 | g_loss: 1.3667
Epoch [ 2/ 25] | d_loss: 0.9840 | g_loss: 1.5810
Epoch [ 2/ 25] | d_loss: 1.0385 | g_loss: 1.3216
Epoch [ 2/ 25] | d_loss: 0.9200 | g_loss: 1.9681
Epoch [ 2/ 25] | d_loss: 0.9881 | g_loss: 2.0817
Epoch [ 2/ 25] | d_loss: 1.0273 | g_loss: 1.5179
Epoch [ 2/ 25] | d_loss: 1.0075 | g_loss: 1.4413
Epoch [ 2/ 25] | d_loss: 1.0176 | g_loss: 1.3607
Epoch [ 2/ 25] | d_loss: 1.0268 | g_loss: 1.2687
Epoch [ 2/ 25] | d_loss: 0.9011 | g_loss: 1.6347
Epoch [ 3/ 25] | d_loss: 0.9923 | g_loss: 1.2541
Epoch [ 3/ 25] | d_loss: 0.8825 | g_loss: 1.5616
Epoch [ 3/ 25] | d_loss: 0.8468 | g_loss: 1.6005
Epoch [ 3/ 25] | d_loss: 0.9571 | g_loss: 1.2879
Epoch [ 3/ 25] | d_loss: 1.0586 | g_loss: 1.4375
Epoch [ 3/ 25] | d_loss: 1.1067 | g_loss: 2.3016
Epoch [ 3/ 25] | d_loss: 1.0617 | g_loss: 2.1831
Epoch [ 3/ 25] | d_loss: 0.8865 | g_loss: 1.9292
Epoch [ 3/ 25] | d_loss: 1.0941 | g_loss: 1.2752
Epoch [ 3/ 25] | d_loss: 1.0220 | g_loss: 1.6353
Epoch [ 3/ 25] | d_loss: 1.0054 | g_loss: 2.0263
Epoch [ 3/ 25] | d_loss: 1.1771 | g_loss: 1.6713
Epoch [ 3/ 25] | d_loss: 0.8896 | g_loss: 1.2952
Epoch [ 3/ 25] | d_loss: 0.8405 | g_loss: 1.3286
Epoch [ 3/ 25] | d_loss: 0.9420 | g_loss: 1.5406
Epoch [ 4/ 25] | d_loss: 1.0449 | g_loss: 1.0458
Epoch [ 4/ 25] | d_loss: 0.9996 | g_loss: 1.4505
Epoch [ 4/ 25] | d_loss: 0.9034 | g_loss: 1.5070
Epoch [ 4/ 25] | d_loss: 0.9187 | g_loss: 1.5071
Epoch [ 4/ 25] | d_loss: 0.9793 | g_loss: 1.2045
Epoch [ 4/ 25] | d_loss: 1.0589 | g_loss: 2.2188
Epoch [ 4/ 25] | d_loss: 0.9401 | g_loss: 1.4635
Epoch [ 4/ 25] | d_loss: 0.9730 | g_loss: 1.6777
Epoch [ 4/ 25] | d_loss: 0.9179 | g_loss: 1.3819
Epoch [ 4/ 25] | d_loss: 0.9845 | g_loss: 1.9564
Epoch [ 4/ 25] | d_loss: 0.9323 | g_loss: 1.4328
Epoch [ 4/ 25] | d_loss: 0.8579 | g_loss: 1.5617
Epoch [ 4/ 25] | d_loss: 1.0741 | g_loss: 0.9951
Epoch [ 4/ 25] | d_loss: 1.1126 | g_loss: 2.2378
Epoch [ 4/ 25] | d_loss: 0.9571 | g_loss: 1.2116
Epoch [ 5/ 25] | d_loss: 1.0359 | g_loss: 2.3732
Epoch [ 5/ 25] | d_loss: 0.9267 | g_loss: 1.7315
Epoch [ 5/ 25] | d_loss: 0.7832 | g_loss: 1.7348
Epoch [ 5/ 25] | d_loss: 0.9253 | g_loss: 1.8126
Epoch [ 5/ 25] | d_loss: 0.8405 | g_loss: 1.8326
Epoch [ 5/ 25] | d_loss: 0.8355 | g_loss: 2.0080
Epoch [ 5/ 25] | d_loss: 1.4027 | g_loss: 1.2110
Epoch [ 5/ 25] | d_loss: 1.1818 | g_loss: 0.6832
Epoch [ 5/ 25] | d_loss: 0.8815 | g_loss: 1.2160
Epoch [ 5/ 25] | d_loss: 0.9310 | g_loss: 1.7344
Epoch [ 5/ 25] | d_loss: 0.8842 | g_loss: 1.3244
Epoch [ 5/ 25] | d_loss: 0.8882 | g_loss: 1.9160
Epoch [ 5/ 25] | d_loss: 1.0635 | g_loss: 1.4826
Epoch [ 5/ 25] | d_loss: 0.8108 | g_loss: 2.0968
Epoch [ 5/ 25] | d_loss: 0.7783 | g_loss: 1.8763
Epoch [ 6/ 25] | d_loss: 0.9138 | g_loss: 1.3630
Epoch [ 6/ 25] | d_loss: 0.8309 | g_loss: 1.4191
Epoch [ 6/ 25] | d_loss: 0.6590 | g_loss: 2.2078
Epoch [ 6/ 25] | d_loss: 0.6835 | g_loss: 2.0922
Epoch [ 6/ 25] | d_loss: 0.9801 | g_loss: 1.3012
Epoch [ 6/ 25] | d_loss: 0.9300 | g_loss: 1.3806
Epoch [ 6/ 25] | d_loss: 0.7444 | g_loss: 1.5794
Epoch [ 6/ 25] | d_loss: 0.9244 | g_loss: 1.5313
Epoch [ 6/ 25] | d_loss: 0.8832 | g_loss: 1.5948
Epoch [ 6/ 25] | d_loss: 0.8986 | g_loss: 1.3231
Epoch [ 6/ 25] | d_loss: 0.8009 | g_loss: 1.6687
Epoch [ 6/ 25] | d_loss: 0.9637 | g_loss: 1.1782
Epoch [ 6/ 25] | d_loss: 0.8939 | g_loss: 1.0398
Epoch [ 6/ 25] | d_loss: 0.9505 | g_loss: 1.2516
Epoch [ 6/ 25] | d_loss: 0.9271 | g_loss: 2.2014
Epoch [ 7/ 25] | d_loss: 0.8128 | g_loss: 1.3697
Epoch [ 7/ 25] | d_loss: 0.8458 | g_loss: 1.6008
Epoch [ 7/ 25] | d_loss: 0.8733 | g_loss: 1.5506
Epoch [ 7/ 25] | d_loss: 0.8927 | g_loss: 2.2085
Epoch [ 7/ 25] | d_loss: 0.9954 | g_loss: 1.1947
Epoch [ 7/ 25] | d_loss: 0.8642 | g_loss: 1.6361
Epoch [ 7/ 25] | d_loss: 1.2503 | g_loss: 3.3174
Epoch [ 7/ 25] | d_loss: 0.9093 | g_loss: 1.8980
Epoch [ 7/ 25] | d_loss: 1.0054 | g_loss: 1.1835
Epoch [ 7/ 25] | d_loss: 1.0695 | g_loss: 1.1531
Epoch [ 7/ 25] | d_loss: 0.8442 | g_loss: 1.3583
Epoch [ 7/ 25] | d_loss: 1.0702 | g_loss: 2.3075
Epoch [ 7/ 25] | d_loss: 0.9951 | g_loss: 1.3437
Epoch [ 7/ 25] | d_loss: 0.8768 | g_loss: 1.6880
Epoch [ 7/ 25] | d_loss: 0.9211 | g_loss: 2.4042
Epoch [ 8/ 25] | d_loss: 0.7697 | g_loss: 1.6649
Epoch [ 8/ 25] | d_loss: 0.8488 | g_loss: 1.5625
Epoch [ 8/ 25] | d_loss: 0.7202 | g_loss: 1.8980
Epoch [ 8/ 25] | d_loss: 0.9248 | g_loss: 2.6384
Epoch [ 8/ 25] | d_loss: 0.9865 | g_loss: 1.0815
Epoch [ 8/ 25] | d_loss: 0.8129 | g_loss: 1.7970
Epoch [ 8/ 25] | d_loss: 0.7630 | g_loss: 1.7534
Epoch [ 8/ 25] | d_loss: 0.7878 | g_loss: 1.7606
Epoch [ 8/ 25] | d_loss: 1.4511 | g_loss: 3.8398
Epoch [ 8/ 25] | d_loss: 0.8058 | g_loss: 1.4642
Epoch [ 8/ 25] | d_loss: 0.7030 | g_loss: 1.6117
Epoch [ 8/ 25] | d_loss: 0.9672 | g_loss: 2.6322
Epoch [ 8/ 25] | d_loss: 0.7697 | g_loss: 1.3687
Epoch [ 8/ 25] | d_loss: 0.9436 | g_loss: 1.2955
Epoch [ 8/ 25] | d_loss: 0.8838 | g_loss: 1.7467
Epoch [ 9/ 25] | d_loss: 0.9138 | g_loss: 2.2981
Epoch [ 9/ 25] | d_loss: 0.7939 | g_loss: 1.8807
Epoch [ 9/ 25] | d_loss: 0.8158 | g_loss: 1.1099
Epoch [ 9/ 25] | d_loss: 0.8619 | g_loss: 1.6443
Epoch [ 9/ 25] | d_loss: 0.9305 | g_loss: 1.4885
Epoch [ 9/ 25] | d_loss: 0.7780 | g_loss: 2.1095
Epoch [ 9/ 25] | d_loss: 1.7638 | g_loss: 2.5004
Epoch [ 9/ 25] | d_loss: 0.7604 | g_loss: 1.3182
Epoch [ 9/ 25] | d_loss: 0.7628 | g_loss: 2.0059
Epoch [ 9/ 25] | d_loss: 0.8482 | g_loss: 1.5403
Epoch [ 9/ 25] | d_loss: 1.0292 | g_loss: 1.2811
Epoch [ 9/ 25] | d_loss: 0.8847 | g_loss: 2.1265
Epoch [ 9/ 25] | d_loss: 0.9379 | g_loss: 2.2344
Epoch [ 9/ 25] | d_loss: 1.2204 | g_loss: 3.4884
Epoch [ 9/ 25] | d_loss: 0.7474 | g_loss: 1.4356
Epoch [ 10/ 25] | d_loss: 0.7717 | g_loss: 1.3273
Epoch [ 10/ 25] | d_loss: 0.8134 | g_loss: 1.4904
Epoch [ 10/ 25] | d_loss: 0.6031 | g_loss: 2.7770
Epoch [ 10/ 25] | d_loss: 0.8936 | g_loss: 2.2628
Epoch [ 10/ 25] | d_loss: 0.8239 | g_loss: 1.4523
Epoch [ 10/ 25] | d_loss: 0.7300 | g_loss: 1.4358
Epoch [ 10/ 25] | d_loss: 1.0282 | g_loss: 1.1617
Epoch [ 10/ 25] | d_loss: 0.8213 | g_loss: 1.5308
Epoch [ 10/ 25] | d_loss: 0.8667 | g_loss: 1.8665
Epoch [ 10/ 25] | d_loss: 0.9044 | g_loss: 1.3212
Epoch [ 10/ 25] | d_loss: 0.9531 | g_loss: 1.2502
Epoch [ 10/ 25] | d_loss: 0.7663 | g_loss: 1.9050
Epoch [ 10/ 25] | d_loss: 0.8165 | g_loss: 1.6030
Epoch [ 10/ 25] | d_loss: 1.3498 | g_loss: 1.0859
Epoch [ 10/ 25] | d_loss: 1.0150 | g_loss: 3.2105
Epoch [ 11/ 25] | d_loss: 0.9370 | g_loss: 1.0007
Epoch [ 11/ 25] | d_loss: 1.0201 | g_loss: 1.8877
Epoch [ 11/ 25] | d_loss: 0.9623 | g_loss: 1.3412
Epoch [ 11/ 25] | d_loss: 0.7973 | g_loss: 1.7140
Epoch [ 11/ 25] | d_loss: 0.6673 | g_loss: 1.5654
Epoch [ 11/ 25] | d_loss: 0.7868 | g_loss: 1.9249
Epoch [ 11/ 25] | d_loss: 1.0529 | g_loss: 2.5728
Epoch [ 11/ 25] | d_loss: 0.7598 | g_loss: 1.5258
Epoch [ 11/ 25] | d_loss: 0.9352 | g_loss: 1.5244
Epoch [ 11/ 25] | d_loss: 0.8508 | g_loss: 2.9033
Epoch [ 11/ 25] | d_loss: 0.7446 | g_loss: 0.8261
Epoch [ 11/ 25] | d_loss: 0.7628 | g_loss: 1.8440
Epoch [ 11/ 25] | d_loss: 0.6926 | g_loss: 2.4341
Epoch [ 11/ 25] | d_loss: 0.9259 | g_loss: 1.2929
Epoch [ 11/ 25] | d_loss: 0.9929 | g_loss: 1.7931
Epoch [ 12/ 25] | d_loss: 0.8656 | g_loss: 1.5312
Epoch [ 12/ 25] | d_loss: 0.6857 | g_loss: 1.5955
Epoch [ 12/ 25] | d_loss: 0.8474 | g_loss: 2.8130
Epoch [ 12/ 25] | d_loss: 0.7114 | g_loss: 1.9240
Epoch [ 12/ 25] | d_loss: 0.6860 | g_loss: 2.5097
Epoch [ 12/ 25] | d_loss: 0.8301 | g_loss: 1.8174
Epoch [ 12/ 25] | d_loss: 0.7235 | g_loss: 1.5464
Epoch [ 12/ 25] | d_loss: 0.7370 | g_loss: 1.9516
Epoch [ 12/ 25] | d_loss: 0.9636 | g_loss: 1.1818
Epoch [ 12/ 25] | d_loss: 0.8426 | g_loss: 1.1686
Epoch [ 12/ 25] | d_loss: 0.9596 | g_loss: 1.4372
Epoch [ 12/ 25] | d_loss: 0.6809 | g_loss: 2.2693
Epoch [ 12/ 25] | d_loss: 0.7626 | g_loss: 1.5456
Epoch [ 12/ 25] | d_loss: 1.0247 | g_loss: 3.5531
Epoch [ 12/ 25] | d_loss: 0.7126 | g_loss: 1.8501
Epoch [ 13/ 25] | d_loss: 0.9357 | g_loss: 3.2227
Epoch [ 13/ 25] | d_loss: 0.7319 | g_loss: 1.3249
Epoch [ 13/ 25] | d_loss: 0.7915 | g_loss: 1.4375
Epoch [ 13/ 25] | d_loss: 0.8076 | g_loss: 1.4473
Epoch [ 13/ 25] | d_loss: 0.6260 | g_loss: 2.7346
Epoch [ 13/ 25] | d_loss: 0.6194 | g_loss: 2.3462
Epoch [ 13/ 25] | d_loss: 0.7805 | g_loss: 1.0057
Epoch [ 13/ 25] | d_loss: 0.7954 | g_loss: 2.0056
Epoch [ 13/ 25] | d_loss: 0.7436 | g_loss: 1.4472
Epoch [ 13/ 25] | d_loss: 0.8174 | g_loss: 1.4687
Epoch [ 13/ 25] | d_loss: 0.6527 | g_loss: 1.6569
Epoch [ 13/ 25] | d_loss: 0.8044 | g_loss: 2.9920
Epoch [ 13/ 25] | d_loss: 0.6298 | g_loss: 2.7744
Epoch [ 13/ 25] | d_loss: 0.8093 | g_loss: 1.4024
Epoch [ 13/ 25] | d_loss: 0.8628 | g_loss: 1.4374
Epoch [ 14/ 25] | d_loss: 0.9131 | g_loss: 3.2003
Epoch [ 14/ 25] | d_loss: 0.6452 | g_loss: 2.0480
Epoch [ 14/ 25] | d_loss: 0.6424 | g_loss: 2.6575
Epoch [ 14/ 25] | d_loss: 0.7777 | g_loss: 1.4160
Epoch [ 14/ 25] | d_loss: 0.7086 | g_loss: 1.9615
Epoch [ 14/ 25] | d_loss: 0.6632 | g_loss: 2.4103
Epoch [ 14/ 25] | d_loss: 0.6513 | g_loss: 2.2625
Epoch [ 14/ 25] | d_loss: 0.6381 | g_loss: 1.8410
Epoch [ 14/ 25] | d_loss: 0.7202 | g_loss: 2.1681
Epoch [ 14/ 25] | d_loss: 0.9660 | g_loss: 1.1245
Epoch [ 14/ 25] | d_loss: 0.7607 | g_loss: 2.5233
Epoch [ 14/ 25] | d_loss: 0.7412 | g_loss: 2.2701
Epoch [ 14/ 25] | d_loss: 0.7460 | g_loss: 2.2914
Epoch [ 14/ 25] | d_loss: 0.8056 | g_loss: 2.4824
Epoch [ 14/ 25] | d_loss: 0.7354 | g_loss: 2.4306
Epoch [ 15/ 25] | d_loss: 0.5892 | g_loss: 2.5557
Epoch [ 15/ 25] | d_loss: 0.6279 | g_loss: 2.6464
Epoch [ 15/ 25] | d_loss: 0.7249 | g_loss: 2.1179
Epoch [ 15/ 25] | d_loss: 0.6390 | g_loss: 2.3858
Epoch [ 15/ 25] | d_loss: 0.6579 | g_loss: 2.2306
Epoch [ 15/ 25] | d_loss: 1.0842 | g_loss: 1.2037
Epoch [ 15/ 25] | d_loss: 0.9114 | g_loss: 2.9547
Epoch [ 15/ 25] | d_loss: 0.7331 | g_loss: 2.9662
Epoch [ 15/ 25] | d_loss: 0.8278 | g_loss: 1.9508
Epoch [ 15/ 25] | d_loss: 0.7492 | g_loss: 2.8276
Epoch [ 15/ 25] | d_loss: 0.6170 | g_loss: 2.4985
Epoch [ 15/ 25] | d_loss: 0.8953 | g_loss: 1.9476
Epoch [ 15/ 25] | d_loss: 0.6698 | g_loss: 1.2033
Epoch [ 15/ 25] | d_loss: 0.7104 | g_loss: 2.7721
Epoch [ 15/ 25] | d_loss: 0.9230 | g_loss: 2.7187
Epoch [ 16/ 25] | d_loss: 0.5723 | g_loss: 2.9356
Epoch [ 16/ 25] | d_loss: 1.1056 | g_loss: 0.9318
Epoch [ 16/ 25] | d_loss: 0.6511 | g_loss: 1.8300
Epoch [ 16/ 25] | d_loss: 0.7310 | g_loss: 1.8740
Epoch [ 16/ 25] | d_loss: 0.7552 | g_loss: 2.4169
Epoch [ 16/ 25] | d_loss: 0.6300 | g_loss: 2.4811
Epoch [ 16/ 25] | d_loss: 0.5995 | g_loss: 2.0758
Epoch [ 16/ 25] | d_loss: 0.9247 | g_loss: 3.7270
Epoch [ 16/ 25] | d_loss: 1.0929 | g_loss: 3.1147
Epoch [ 16/ 25] | d_loss: 0.8298 | g_loss: 2.6633
Epoch [ 16/ 25] | d_loss: 0.7210 | g_loss: 2.5472
Epoch [ 16/ 25] | d_loss: 0.6271 | g_loss: 2.2929
Epoch [ 16/ 25] | d_loss: 0.9341 | g_loss: 1.6242
Epoch [ 16/ 25] | d_loss: 1.0475 | g_loss: 3.5788
Epoch [ 16/ 25] | d_loss: 0.9235 | g_loss: 1.5206
Epoch [ 17/ 25] | d_loss: 0.8116 | g_loss: 1.7170
Epoch [ 17/ 25] | d_loss: 1.0637 | g_loss: 1.7064
Epoch [ 17/ 25] | d_loss: 0.6857 | g_loss: 1.8638
Epoch [ 17/ 25] | d_loss: 0.5875 | g_loss: 3.0453
Epoch [ 17/ 25] | d_loss: 0.6491 | g_loss: 1.7220
Epoch [ 17/ 25] | d_loss: 0.7901 | g_loss: 1.5619
Epoch [ 17/ 25] | d_loss: 1.0544 | g_loss: 1.4008
Epoch [ 17/ 25] | d_loss: 0.8486 | g_loss: 2.4180
Epoch [ 17/ 25] | d_loss: 0.8175 | g_loss: 2.7949
Epoch [ 17/ 25] | d_loss: 0.7227 | g_loss: 1.9185
Epoch [ 17/ 25] | d_loss: 0.6059 | g_loss: 2.3817
Epoch [ 17/ 25] | d_loss: 0.6582 | g_loss: 1.7483
Epoch [ 17/ 25] | d_loss: 0.6808 | g_loss: 1.6440
Epoch [ 17/ 25] | d_loss: 0.6173 | g_loss: 2.0243
Epoch [ 17/ 25] | d_loss: 0.6327 | g_loss: 2.3791
Epoch [ 18/ 25] | d_loss: 0.5813 | g_loss: 2.5597
Epoch [ 18/ 25] | d_loss: 0.6807 | g_loss: 2.1032
Epoch [ 18/ 25] | d_loss: 0.5952 | g_loss: 2.8021
Epoch [ 18/ 25] | d_loss: 0.6324 | g_loss: 3.0107
Epoch [ 18/ 25] | d_loss: 0.5441 | g_loss: 2.5074
Epoch [ 18/ 25] | d_loss: 0.8819 | g_loss: 2.3290
Epoch [ 18/ 25] | d_loss: 0.5534 | g_loss: 1.8430
Epoch [ 18/ 25] | d_loss: 0.6026 | g_loss: 2.3885
Epoch [ 18/ 25] | d_loss: 0.7310 | g_loss: 1.5835
Epoch [ 18/ 25] | d_loss: 0.6143 | g_loss: 1.4992
Epoch [ 18/ 25] | d_loss: 0.7061 | g_loss: 1.9610
Epoch [ 18/ 25] | d_loss: 0.5872 | g_loss: 2.1990
Epoch [ 18/ 25] | d_loss: 0.6938 | g_loss: 3.0768
Epoch [ 18/ 25] | d_loss: 0.7772 | g_loss: 2.7589
Epoch [ 18/ 25] | d_loss: 0.5391 | g_loss: 3.3558
Epoch [ 19/ 25] | d_loss: 0.7279 | g_loss: 2.1980
Epoch [ 19/ 25] | d_loss: 0.7260 | g_loss: 2.8323
Epoch [ 19/ 25] | d_loss: 0.7138 | g_loss: 2.7383
Epoch [ 19/ 25] | d_loss: 0.6436 | g_loss: 1.8465
Epoch [ 19/ 25] | d_loss: 0.9059 | g_loss: 1.2928
Epoch [ 19/ 25] | d_loss: 0.5729 | g_loss: 2.3503
Epoch [ 19/ 25] | d_loss: 0.7200 | g_loss: 3.3834
Epoch [ 19/ 25] | d_loss: 1.0335 | g_loss: 4.3221
Epoch [ 19/ 25] | d_loss: 0.6342 | g_loss: 2.7204
Epoch [ 19/ 25] | d_loss: 0.5617 | g_loss: 2.2968
Epoch [ 19/ 25] | d_loss: 0.6563 | g_loss: 1.8908
Epoch [ 19/ 25] | d_loss: 0.7437 | g_loss: 2.6382
Epoch [ 19/ 25] | d_loss: 0.5805 | g_loss: 2.5522
Epoch [ 19/ 25] | d_loss: 0.5925 | g_loss: 2.4891
Epoch [ 19/ 25] | d_loss: 0.7250 | g_loss: 1.5648
Epoch [ 20/ 25] | d_loss: 0.6420 | g_loss: 2.3742
Epoch [ 20/ 25] | d_loss: 1.0232 | g_loss: 1.1878
Epoch [ 20/ 25] | d_loss: 0.4774 | g_loss: 2.3556
Epoch [ 20/ 25] | d_loss: 0.8697 | g_loss: 2.5122
Epoch [ 20/ 25] | d_loss: 0.6900 | g_loss: 1.5611
Epoch [ 20/ 25] | d_loss: 0.8186 | g_loss: 1.2991
Epoch [ 20/ 25] | d_loss: 0.5571 | g_loss: 3.4203
Epoch [ 20/ 25] | d_loss: 0.5596 | g_loss: 2.0367
Epoch [ 20/ 25] | d_loss: 0.5807 | g_loss: 2.7988
Epoch [ 20/ 25] | d_loss: 0.5492 | g_loss: 2.5388
Epoch [ 20/ 25] | d_loss: 0.5936 | g_loss: 1.8029
Epoch [ 20/ 25] | d_loss: 0.5560 | g_loss: 3.0913
Epoch [ 20/ 25] | d_loss: 0.7925 | g_loss: 2.5335
Epoch [ 20/ 25] | d_loss: 0.5634 | g_loss: 1.8130
Epoch [ 20/ 25] | d_loss: 0.6760 | g_loss: 1.9668
Epoch [ 21/ 25] | d_loss: 0.6020 | g_loss: 1.9489
Epoch [ 21/ 25] | d_loss: 0.8360 | g_loss: 3.5219
Epoch [ 21/ 25] | d_loss: 0.6413 | g_loss: 1.7506
Epoch [ 21/ 25] | d_loss: 0.5732 | g_loss: 1.8128
Epoch [ 21/ 25] | d_loss: 0.8504 | g_loss: 3.2524
Epoch [ 21/ 25] | d_loss: 0.6234 | g_loss: 2.4820
Epoch [ 21/ 25] | d_loss: 0.5803 | g_loss: 2.2766
Epoch [ 21/ 25] | d_loss: 0.5351 | g_loss: 2.4969
Epoch [ 21/ 25] | d_loss: 0.5752 | g_loss: 3.3823
Epoch [ 21/ 25] | d_loss: 0.5902 | g_loss: 2.8928
Epoch [ 21/ 25] | d_loss: 0.6489 | g_loss: 2.1526
Epoch [ 21/ 25] | d_loss: 1.0289 | g_loss: 1.6196
Epoch [ 21/ 25] | d_loss: 0.8383 | g_loss: 2.9957
Epoch [ 21/ 25] | d_loss: 0.4473 | g_loss: 3.1153
Epoch [ 21/ 25] | d_loss: 0.5814 | g_loss: 2.1815
Epoch [ 22/ 25] | d_loss: 0.5337 | g_loss: 2.3766
Epoch [ 22/ 25] | d_loss: 0.5823 | g_loss: 3.3917
Epoch [ 22/ 25] | d_loss: 0.4975 | g_loss: 2.8688
Epoch [ 22/ 25] | d_loss: 0.5667 | g_loss: 2.2332
Epoch [ 22/ 25] | d_loss: 0.5918 | g_loss: 2.4437
Epoch [ 22/ 25] | d_loss: 0.5653 | g_loss: 2.6833
Epoch [ 22/ 25] | d_loss: 0.6625 | g_loss: 1.9285
Epoch [ 22/ 25] | d_loss: 0.5485 | g_loss: 2.0270
Epoch [ 22/ 25] | d_loss: 0.5581 | g_loss: 2.9477
Epoch [ 22/ 25] | d_loss: 0.4838 | g_loss: 2.4767
Epoch [ 22/ 25] | d_loss: 0.4925 | g_loss: 2.2530
Epoch [ 22/ 25] | d_loss: 0.5545 | g_loss: 3.1299
Epoch [ 22/ 25] | d_loss: 0.4981 | g_loss: 2.9623
Epoch [ 22/ 25] | d_loss: 0.5283 | g_loss: 2.8258
Epoch [ 22/ 25] | d_loss: 0.7448 | g_loss: 1.4504
Epoch [ 23/ 25] | d_loss: 0.5868 | g_loss: 2.1154
Epoch [ 23/ 25] | d_loss: 0.5045 | g_loss: 2.5184
Epoch [ 23/ 25] | d_loss: 0.6185 | g_loss: 3.7213
Epoch [ 23/ 25] | d_loss: 0.6696 | g_loss: 3.2826
Epoch [ 23/ 25] | d_loss: 0.5257 | g_loss: 2.7834
Epoch [ 23/ 25] | d_loss: 0.5583 | g_loss: 2.1177
Epoch [ 23/ 25] | d_loss: 0.5166 | g_loss: 2.2706
Epoch [ 23/ 25] | d_loss: 0.5597 | g_loss: 2.8573
Epoch [ 23/ 25] | d_loss: 0.5176 | g_loss: 2.4491
Epoch [ 23/ 25] | d_loss: 0.6853 | g_loss: 2.7030
Epoch [ 23/ 25] | d_loss: 0.6780 | g_loss: 2.9797
Epoch [ 23/ 25] | d_loss: 0.5221 | g_loss: 2.4515
Epoch [ 23/ 25] | d_loss: 0.6667 | g_loss: 1.8132
Epoch [ 23/ 25] | d_loss: 0.5824 | g_loss: 2.7083
Epoch [ 23/ 25] | d_loss: 0.4888 | g_loss: 2.6859
Epoch [ 24/ 25] | d_loss: 0.5003 | g_loss: 2.4313
Epoch [ 24/ 25] | d_loss: 0.8690 | g_loss: 1.5153
Epoch [ 24/ 25] | d_loss: 0.6305 | g_loss: 3.7815
Epoch [ 24/ 25] | d_loss: 1.0290 | g_loss: 1.6077
Epoch [ 24/ 25] | d_loss: 0.5436 | g_loss: 2.0114
Epoch [ 24/ 25] | d_loss: 0.5932 | g_loss: 2.3361
Epoch [ 24/ 25] | d_loss: 0.5297 | g_loss: 2.1278
Epoch [ 24/ 25] | d_loss: 0.8083 | g_loss: 2.2301
Epoch [ 24/ 25] | d_loss: 0.5450 | g_loss: 2.3696
Epoch [ 24/ 25] | d_loss: 0.6167 | g_loss: 2.7004
Epoch [ 24/ 25] | d_loss: 0.5494 | g_loss: 2.7079
Epoch [ 24/ 25] | d_loss: 0.6322 | g_loss: 2.2285
Epoch [ 24/ 25] | d_loss: 0.5941 | g_loss: 2.4369
Epoch [ 24/ 25] | d_loss: 0.7240 | g_loss: 1.7226
Epoch [ 24/ 25] | d_loss: 0.4805 | g_loss: 3.3683
Epoch [ 25/ 25] | d_loss: 0.7356 | g_loss: 1.7634
Epoch [ 25/ 25] | d_loss: 1.2255 | g_loss: 1.9966
Epoch [ 25/ 25] | d_loss: 0.6178 | g_loss: 2.6739
Epoch [ 25/ 25] | d_loss: 0.5577 | g_loss: 2.7433
Epoch [ 25/ 25] | d_loss: 1.0802 | g_loss: 4.0644
Epoch [ 25/ 25] | d_loss: 0.6232 | g_loss: 2.9068
Epoch [ 25/ 25] | d_loss: 0.5621 | g_loss: 2.2857
Epoch [ 25/ 25] | d_loss: 0.5267 | g_loss: 3.0929
Epoch [ 25/ 25] | d_loss: 0.5868 | g_loss: 2.3893
Epoch [ 25/ 25] | d_loss: 0.7716 | g_loss: 1.1240
Epoch [ 25/ 25] | d_loss: 0.5740 | g_loss: 2.9427
Epoch [ 25/ 25] | d_loss: 0.5845 | g_loss: 2.2388
Epoch [ 25/ 25] | d_loss: 0.6037 | g_loss: 2.3880
Epoch [ 25/ 25] | d_loss: 0.6393 | g_loss: 2.7048
Epoch [ 25/ 25] | d_loss: 0.5516 | g_loss: 2.3711
###Markdown
Training lossPlot the training losses for the generator and discriminator, recorded after each epoch.
###Code
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
###Output
_____no_output_____
###Markdown
Generator samples from trainingView samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.
###Code
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach().cpu().numpy()
img = np.transpose(img, (1, 2, 0))
img = ((img + 1)*255 / (2)).astype(np.uint8)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((32,32,3)))
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
###Output
_____no_output_____
###Markdown
Face GenerationIn this project, you'll use generative adversarial networks to generate new images of faces. Get the DataYou'll be using two datasets in this project:- MNIST- CelebASince the celebA dataset is complex and you're doing GANs in a project for the first time, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will allow you to see how well your model trains sooner.If you're using [FloydHub](https://www.floydhub.com/), set `data_dir` to "/input" and use the [FloydHub data ID](http://docs.floydhub.com/home/using_datasets/) "R5KrjnANiKVhLWAkpXhNBe".
###Code
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
###Output
_____no_output_____
###Markdown
Explore the Data MNISTAs you're aware, the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset contains images of handwritten digits. You can view the first number of examples by changing `show_n_images`.
###Code
show_n_images = 25
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
###Output
_____no_output_____
###Markdown
CelebAThe [CelebFaces Attributes Dataset (CelebA)](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing `show_n_images`.
###Code
show_n_images = 25
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
###Output
_____no_output_____
###Markdown
Preprocess the DataSince the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.The MNIST images are black and white images with a single [color channel](https://en.wikipedia.org/wiki/Channel_(digital_image%29) while the CelebA images have [3 color channels (RGB color channel)](https://en.wikipedia.org/wiki/Channel_(digital_image%29RGB_Images). Build the Neural NetworkYou'll build the components necessary to build a GANs by implementing the following functions below:- `model_inputs`- `discriminator`- `generator`- `model_loss`- `model_opt`- `train` Check the Version of TensorFlow and Access to GPUThis will check to make sure you have the correct version of TensorFlow and access to a GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
###Output
_____no_output_____
###Markdown
InputImplement the `model_inputs` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Real input images placeholder with rank 4 using `image_width`, `image_height`, and `image_channels`.- Z input placeholder with rank 2 using `z_dim`.- Learning rate placeholder with rank 0.Return the placeholders in the following the tuple (tensor of real input images, tensor of z data)
###Code
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
"""
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
"""
# TODO: Implement Function
return None, None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
###Output
_____no_output_____
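###Markdown
The cell above is left as an exercise. Purely as a hedged illustration, one possible way to fill it in with TensorFlow 1.x placeholders is sketched below; the `_sketch` suffix and the placeholder names are assumptions of this sketch, not the graded solution.
###Code
# Hedged sketch only, not the graded solution; assumes TensorFlow 1.x.
import tensorflow as tf

def model_inputs_sketch(image_width, image_height, image_channels, z_dim):
    # rank-4 placeholder for batches of real images
    inputs_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
    # rank-2 placeholder for batches of latent vectors z
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
    # rank-0 (scalar) placeholder for the learning rate
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    return inputs_real, inputs_z, learning_rate
###Output
_____no_output_____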
###Markdown
DiscriminatorImplement `discriminator` to create a discriminator neural network that discriminates on `images`. This function should be able to reuse the variables in the neural network. Use [`tf.variable_scope`](https://www.tensorflow.org/api_docs/python/tf/variable_scope) with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
###Code
def discriminator(images, reuse=False):
"""
Create the discriminator network
:param image: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
###Output
_____no_output_____
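###Markdown
Again as a hedged illustration only, a small DCGAN-style discriminator for the 28x28 images used in this notebook might look like the sketch below; the filter counts (64/128/256), kernel size 5, and leak value 0.2 are assumptions, not the official solution.
###Code
# Hedged sketch only; layer widths and alpha are assumptions. Assumes 28x28 inputs, as produced by the helper.
import tensorflow as tf

def discriminator_sketch(images, reuse=False, alpha=0.2):
    with tf.variable_scope('discriminator', reuse=reuse):
        # 28x28xC -> 14x14x64 (no batch norm on the first layer)
        x1 = tf.layers.conv2d(images, 64, 5, strides=2, padding='same')
        x1 = tf.maximum(alpha * x1, x1)  # leaky ReLU
        # 14x14x64 -> 7x7x128
        x2 = tf.layers.conv2d(x1, 128, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=True)
        x2 = tf.maximum(alpha * x2, x2)
        # 7x7x128 -> 4x4x256
        x3 = tf.layers.conv2d(x2, 256, 5, strides=2, padding='same')
        x3 = tf.layers.batch_normalization(x3, training=True)
        x3 = tf.maximum(alpha * x3, x3)
        # flatten and map to a single real/fake logit
        flat = tf.reshape(x3, (-1, 4 * 4 * 256))
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)
    return out, logits
###Output
_____no_output_____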
###Markdown
GeneratorImplement `generator` to generate an image using `z`. This function should be able to reuse the variables in the neural network. Use [`tf.variable_scope`](https://www.tensorflow.org/api_docs/python/tf/variable_scope) with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x `out_channel_dim` images.
###Code
def generator(z, out_channel_dim, is_train=True):
"""
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
###Output
_____no_output_____
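###Markdown
A hedged sketch of one possible generator follows; the 7x7x256 starting volume, the filter counts, and the `reuse=not is_train` convention are assumptions of this sketch rather than the graded solution.
###Code
# Hedged sketch only; the 7x7x256 projection and filter counts are assumptions. Outputs 28x28xout_channel_dim.
import tensorflow as tf

def generator_sketch(z, out_channel_dim, is_train=True, alpha=0.2):
    # reuse the variables when sampling (is_train=False), create them when training
    with tf.variable_scope('generator', reuse=not is_train):
        # project z and reshape to a 7x7x256 volume
        x1 = tf.layers.dense(z, 7 * 7 * 256)
        x1 = tf.reshape(x1, (-1, 7, 7, 256))
        x1 = tf.layers.batch_normalization(x1, training=is_train)
        x1 = tf.maximum(alpha * x1, x1)
        # 7x7x256 -> 14x14x128
        x2 = tf.layers.conv2d_transpose(x1, 128, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=is_train)
        x2 = tf.maximum(alpha * x2, x2)
        # 14x14x128 -> 28x28xout_channel_dim, tanh to match images scaled to (-1, 1)
        logits = tf.layers.conv2d_transpose(x2, out_channel_dim, 5, strides=2, padding='same')
        out = tf.tanh(logits)
    return out
###Output
_____no_output_____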
###Markdown
LossImplement `model_loss` to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:- `discriminator(images, reuse=False)`- `generator(z, out_channel_dim, is_train=True)`
###Code
def model_loss(input_real, input_z, out_channel_dim):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
###Output
_____no_output_____
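###Markdown
As a hedged illustration, the two losses could be wired together as below, assuming the `discriminator_sketch` and `generator_sketch` helpers defined above; the 0.9 label smoothing is an optional assumption.
###Code
# Hedged sketch only; relies on the *_sketch helpers above. Label smoothing (0.9) is an optional assumption.
import tensorflow as tf

def model_loss_sketch(input_real, input_z, out_channel_dim):
    g_model = generator_sketch(input_z, out_channel_dim, is_train=True)
    d_model_real, d_logits_real = discriminator_sketch(input_real, reuse=False)
    d_model_fake, d_logits_fake = discriminator_sketch(g_model, reuse=True)
    # discriminator: real images should score ~1 (smoothed), generated images 0
    d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_real, labels=tf.ones_like(d_logits_real) * 0.9))
    d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)))
    d_loss = d_loss_real + d_loss_fake
    # generator: wants its fakes to be scored as real
    g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
    return d_loss, g_loss
###Output
_____no_output_____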
###Markdown
OptimizationImplement `model_opt` to create the optimization operations for the GANs. Use [`tf.trainable_variables`](https://www.tensorflow.org/api_docs/python/tf/trainable_variables) to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
###Code
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
###Output
_____no_output_____
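###Markdown
A hedged sketch of the optimization ops follows; filtering variables by scope name and wrapping the optimizers in an `UPDATE_OPS` dependency (needed because the sketches above use batch normalization) is the usual TF 1.x pattern, but this is not the graded solution.
###Code
# Hedged sketch only; the UPDATE_OPS dependency is needed because the sketches above use batch normalization.
import tensorflow as tf

def model_opt_sketch(d_loss, g_loss, learning_rate, beta1):
    # split the trainable variables by the scope names used in the sketches above
    t_vars = tf.trainable_variables()
    d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
    g_vars = [var for var in t_vars if var.name.startswith('generator')]
    # run the batch-norm update ops before each optimizer step
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
        g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
    return d_train_opt, g_train_opt
###Output
_____no_output_____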
###Markdown
Neural Network Training Show OutputUse this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
"""
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
"""
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
###Output
_____no_output_____
###Markdown
TrainImplement `train` to build and train the GANs. Use the following functions you implemented:- `model_inputs(image_width, image_height, image_channels, z_dim)`- `model_loss(input_real, input_z, out_channel_dim)`- `model_opt(d_loss, g_loss, learning_rate, beta1)`Use the `show_generator_output` to show `generator` output while you train. Running `show_generator_output` for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the `generator` output every 100 batches.
###Code
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
"""
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
"""
# TODO: Build Model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
# TODO: Train Model
pass  # placeholder so this stub cell stays syntactically valid until it is implemented
###Output
_____no_output_____
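###Markdown
Finally, a hedged sketch of a training loop built from the `_sketch` helpers above; rescaling the helper's roughly (-0.5, 0.5) batches to (-1, 1) and drawing z uniformly from (-1, 1) are assumptions of this sketch, not the graded implementation.
###Code
# Hedged sketch only; builds on the *_sketch helpers above rather than the graded functions.
import numpy as np
import tensorflow as tf

def train_sketch(epoch_count, batch_size, z_dim, learning_rate, beta1,
                 get_batches, data_shape, data_image_mode):
    _, image_width, image_height, image_channels = data_shape
    input_real, input_z, lr = model_inputs_sketch(image_width, image_height, image_channels, z_dim)
    d_loss, g_loss = model_loss_sketch(input_real, input_z, image_channels)
    d_opt, g_opt = model_opt_sketch(d_loss, g_loss, lr, beta1)
    steps = 0
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            for batch_images in get_batches(batch_size):
                steps += 1
                # helper batches are roughly in (-0.5, 0.5); rescale to (-1, 1) for the tanh generator
                batch_images = batch_images * 2
                batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
                feed = {input_real: batch_images, input_z: batch_z, lr: learning_rate}
                # alternate one discriminator step and one generator step
                sess.run(d_opt, feed_dict=feed)
                sess.run(g_opt, feed_dict=feed)
                if steps % 100 == 0:
                    train_d_loss, train_g_loss = sess.run([d_loss, g_loss], feed_dict=feed)
                    print('Epoch {}/{} | d_loss: {:.4f} | g_loss: {:.4f}'.format(
                        epoch_i + 1, epoch_count, train_d_loss, train_g_loss))
                    # this is also where show_generator_output would be called every 100 batches
###Output
_____no_output_____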
###Markdown
MNISTTest your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
###Code
batch_size = None
z_dim = None
learning_rate = None
beta1 = None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
###Output
_____no_output_____
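###Markdown
The hyperparameters in the cell above are intentionally left as `None`. Purely as a hedged starting point (assumed, untuned values rather than graded answers), DCGAN-paper-style settings could look like the following:
###Code
# Assumed, untuned starting values for the blanks above (based on common DCGAN settings).
batch_size = 64
z_dim = 100
learning_rate = 0.0002
beta1 = 0.5
###Output
_____no_output_____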
###Markdown
CelebARun your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
###Code
batch_size = None
z_dim = None
learning_rate = None
beta1 = None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
###Output
_____no_output_____
###Markdown
Face GenerationIn this project, you'll define and train a DCGAN on a dataset of faces. Your goal is to get a generator network to generate *new* images of faces that look as realistic as possible!The project will be broken down into a series of tasks from **loading in data to defining and training adversarial networks**. At the end of the notebook, you'll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise. Get the DataYou'll be using the [CelebFaces Attributes Dataset (CelebA)](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to train your adversarial networks.This dataset is more complex than the number datasets (like MNIST or SVHN) you've been working with, and so, you should prepare to define deeper networks and train them for a longer time to get good results. It is suggested that you utilize a GPU for training. Pre-processed DataSince the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. Some sample data is shown below.> If you are working locally, you can download this data [by clicking here](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5be7eb6f_processed-celeba-small/processed-celeba-small.zip)This is a zip file that you'll need to extract in the home directory of this notebook for further loading and processing. After extracting the data, you should be left with a directory of data `processed_celeba_small/`
###Code
# can comment out after executing
#!unzip processed_celeba_small.zip
data_dir = 'processed_celeba_small/'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
import problem_unittests as tests
#import helper
%matplotlib inline
###Output
_____no_output_____
###Markdown
Visualize the CelebA DataThe [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations, you'll only need the images. Note that these are color images with [3 color channels (RGB)](https://en.wikipedia.org/wiki/Channel_(digital_image)RGB_Images) each. Pre-process and Load the DataSince the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. This *pre-processed* dataset is a smaller subset of the very large CelebA data.> There are a few other steps that you'll need to **transform** this data and create a **DataLoader**. Exercise: Complete the following `get_dataloader` function, such that it satisfies these requirements:* Your images should be square, Tensor images of size `image_size x image_size` in the x and y dimension.* Your function should return a DataLoader that shuffles and batches these Tensor images. ImageFolderTo create a dataset given a directory of images, it's recommended that you use PyTorch's [ImageFolder](https://pytorch.org/docs/stable/torchvision/datasets.htmlimagefolder) wrapper, with a root directory `processed_celeba_small/` and data transformation passed in.
###Code
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms
def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
"""
Batch the neural network data using DataLoader
:param batch_size: The size of each batch; the number of images in a batch
:param img_size: The square size of the image data (x, y)
:param data_dir: Directory where image data is located
:return: DataLoader with batched data
"""
# TODO: Implement function and return a dataloader
transform = transforms.Compose([transforms.Resize(image_size),
transforms.ToTensor()])
image_dataset = datasets.ImageFolder(data_dir, transform)
return torch.utils.data.DataLoader(image_dataset, batch_size = batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Create a DataLoader Exercise: Create a DataLoader `celeba_train_loader` with appropriate hyperparameters.Call the above function and create a dataloader to view images. * You can decide on any reasonable `batch_size` parameter* Your `image_size` **must be** `32`. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!
###Code
# Define function hyperparameters
batch_size = 128
img_size = 32
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)
###Output
_____no_output_____
###Markdown
Next, you can view some images! You should see square images of somewhat-centered faces. Note: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image; suggested `imshow` code is below, but it may not be perfect.
###Code
# helper display function
def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = dataiter.next() # _ for no labels
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(20, 4))
plot_size=20
for idx in np.arange(plot_size):
ax = fig.add_subplot(2, plot_size/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
###Output
_____no_output_____
###Markdown
Exercise: Pre-process your image data and scale it to a pixel range of -1 to 1You need to do a bit of pre-processing; you know that the output of a `tanh` activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
###Code
# TODO: Complete the scale function
def scale(x, feature_range=(-1, 1)):
''' Scale takes in an image x and returns that image, scaled
with a feature_range of pixel values from -1 to 1.
This function assumes that the input x is already scaled from 0-1.'''
# assume x is scaled to (0, 1)
# scale to feature_range and return scaled x
min, max = feature_range
return x * (max - min) + min
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# check scaled range
# should be close to -1 to 1
img = images[0]
scaled_img = scale(img)
print('Min: ', scaled_img.min())
print('Max: ', scaled_img.max())
###Output
Min: tensor(-0.9922)
Max: tensor(0.2784)
###Markdown
--- Define the ModelA GAN is comprised of two adversarial networks, a discriminator and a generator. DiscriminatorYour first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with **normalization**. You are also allowed to create any helper functions that may be useful. Exercise: Complete the Discriminator class* The inputs to the discriminator are 32x32x3 tensor images* The output should be a single value that will indicate whether a given image is real or fake
###Code
import torch.nn as nn
import torch.nn.functional as F
# helper conv function
def conv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
"""Creates a convolutional layer, with optional batch normalization.
"""
layers = []
conv_layer = nn.Conv2d(in_channels, out_channels,
kernel_size, stride, padding, bias=False)
# append conv layer
layers.append(conv_layer)
if batch_norm:
# append batchnorm layer
layers.append(nn.BatchNorm2d(out_channels))
# using Sequential container
return nn.Sequential(*layers)
class Discriminator(nn.Module):
def __init__(self, conv_dim):
"""
Initialize the Discriminator Module
:param conv_dim: The depth of the first convolutional layer
"""
super(Discriminator, self).__init__()
# complete init function
self.conv_dim = conv_dim
self.conv1 = conv(3, conv_dim, 4, batch_norm=False) # (16, 16, conv_dim)
self.conv2 = conv(conv_dim, conv_dim*2, 4) # (8, 8, conv_dim*2)
self.conv3 = conv(conv_dim*2, conv_dim*4, 4) # (4, 4, conv_dim*4)
self.conv4 = conv(conv_dim*4, conv_dim*8, 4) # (2, 2, conv_dim*8)
self.classifier = nn.Linear(conv_dim*8*2*2, 1)
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: Discriminator logits; the output of the neural network
"""
# define feedforward behavior
out = F.leaky_relu(self.conv1(x), 0.2)
out = F.leaky_relu(self.conv2(out), 0.2)
out = F.leaky_relu(self.conv3(out), 0.2)
out = F.leaky_relu(self.conv4(out), 0.2)
out = out.view(-1, self.conv_dim*8*2*2)
out = self.classifier(out)
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(Discriminator)
###Output
Tests Passed
###Markdown
GeneratorThe generator should upsample an input and generate a *new* image of the same size as our training data `32x32x3`. This should be mostly transpose convolutional layers with normalization applied to the outputs. Exercise: Complete the Generator class* The inputs to the generator are vectors of some length `z_size`* The output should be a image of shape `32x32x3`
###Code
def deconv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
"""Creates a transposed-convolutional layer, with optional batch normalization.
"""
# create a sequence of transpose + optional batch norm layers
layers = []
transpose_conv_layer = nn.ConvTranspose2d(in_channels, out_channels,
kernel_size, stride, padding, bias=False)
# append transpose convolutional layer
layers.append(transpose_conv_layer)
if batch_norm:
# append batchnorm layer
layers.append(nn.BatchNorm2d(out_channels))
return nn.Sequential(*layers)
class Generator(nn.Module):
def __init__(self, z_size, conv_dim):
"""
Initialize the Generator Module
:param z_size: The length of the input latent vector, z
:param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
"""
super(Generator, self).__init__()
# complete init function
self.conv_dim = conv_dim
self.fc = nn.Linear(z_size, conv_dim*8*2*2)
self.t_conv1 = deconv(conv_dim*8, conv_dim*4, 4)
self.t_conv2 = deconv(conv_dim*4, conv_dim*2, 4)
self.t_conv3 = deconv(conv_dim*2, conv_dim, 4)
self.t_conv4 = deconv(conv_dim, 3, 4, batch_norm=False)
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: A 32x32x3 Tensor image as output
"""
# define feedforward behavior
out = self.fc(x)
out = out.view(-1, self.conv_dim*8, 2, 2) # (batch_size, conv_dim*8, 2, 2)
out = F.relu(self.t_conv1(out))
out = F.relu(self.t_conv2(out))
out = F.relu(self.t_conv3(out))
# last layer: tanh activation instead of relu
out = self.t_conv4(out)
out = torch.tanh(out)  # torch.tanh; F.tanh is deprecated
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(Generator)
###Output
Tests Passed
###Markdown
Initialize the weights of your networksTo help your models converge, you should initialize the weights of the convolutional and linear layers in your model. From reading the [original DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf), they say:> All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02.So, your next task will be to define a weight initialization function that does just this!You can refer back to the lesson on weight initialization or even consult existing model code, such as that from [the `networks.py` file in CycleGAN Github repository](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/networks.py) to help you complete this function. Exercise: Complete the weight initialization function* This should initialize only **convolutional** and **linear** layers* Initialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02.* The bias terms, if they exist, may be left alone or set to 0.
###Code
def weights_init_normal(m):
"""
Applies initial weights to certain layers in a model .
The weights are taken from a normal distribution
with mean = 0, std dev = 0.02.
:param m: A module or layer in a network
"""
# classname will be something like:
# `Conv`, `BatchNorm2d`, `Linear`, etc.
classname = m.__class__.__name__
# TODO: Apply initial weights to convolutional and linear layers
if classname.find('Conv') != -1 or classname.find('Linear') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02)
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1.0, 0.02)
nn.init.constant_(m.bias.data, 0)
###Output
_____no_output_____
###Markdown
Build complete networkDefine your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def build_network(d_conv_dim, g_conv_dim, z_size):
# define discriminator and generator
D = Discriminator(d_conv_dim)
G = Generator(z_size=z_size, conv_dim=g_conv_dim)
# initialize model weights
D.apply(weights_init_normal)
G.apply(weights_init_normal)
print(D)
print()
print(G)
return D, G
###Output
_____no_output_____
###Markdown
Exercise: Define model hyperparameters
###Code
# Define model hyperparams
d_conv_dim = 64
g_conv_dim = 64
z_size = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
###Output
Discriminator(
(conv1): Sequential(
(0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
)
(conv2): Sequential(
(0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv3): Sequential(
(0): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv4): Sequential(
(0): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(classifier): Linear(in_features=2048, out_features=1, bias=True)
)
Generator(
(fc): Linear(in_features=100, out_features=2048, bias=True)
(t_conv1): Sequential(
(0): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(t_conv2): Sequential(
(0): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(t_conv3): Sequential(
(0): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(t_conv4): Sequential(
(0): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
)
)
###Markdown
Training on GPUCheck if you can train on GPU. Here, we'll set this as a boolean variable `train_on_gpu`. Later, you'll be responsible for making sure that >* Models,* Model inputs, and* Loss function argumentsAre moved to GPU, where appropriate.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
else:
print('Training on GPU!')
###Output
Training on GPU!
###Markdown
--- Discriminator and Generator LossesNow we need to calculate the losses for both types of adversarial networks. Discriminator Losses> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`. * Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Generator LossThe generator loss will look similar only with flipped labels. The generator's goal is to get the discriminator to *think* its generated images are *real*. Exercise: Complete real and fake loss functions**You may choose to use either cross entropy or a least squares error loss to complete the following `real_loss` and `fake_loss` functions.**
###Code
def real_loss(D_out, smooth=False):
'''Calculates how close discriminator outputs are to being real.
param, D_out: discriminator logits
return: real loss'''
batch_size = D_out.size(0)
# label smoothing
if smooth:
# smooth, real labels = 0.9
labels = torch.ones(batch_size)*0.9
else:
labels = torch.ones(batch_size) # real labels = 1
# move labels to GPU if available
if train_on_gpu:
labels = labels.cuda()
# binary cross entropy with logits loss
criterion = nn.BCEWithLogitsLoss()
# calculate loss
loss = criterion(D_out.squeeze(), labels)
return loss
def fake_loss(D_out):
'''Calculates how close discriminator outputs are to being fake.
param, D_out: discriminator logits
return: fake loss'''
batch_size = D_out.size(0)
labels = torch.zeros(batch_size) # fake labels = 0
if train_on_gpu:
labels = labels.cuda()
criterion = nn.BCEWithLogitsLoss()
# calculate loss
loss = criterion(D_out.squeeze(), labels)
return loss
###Output
_____no_output_____
###Markdown
Optimizers Exercise: Define optimizers for your Discriminator (D) and Generator (G)Define optimizers for your models with appropriate hyperparameters.
###Code
import torch.optim as optim
lr = 0.0002
beta1=0.5
beta2=0.999
# Create optimizers for the discriminator D and generator G
d_optimizer = optim.Adam(D.parameters(), lr, [beta1, beta2])
g_optimizer = optim.Adam(G.parameters(), lr, [beta1, beta2])
###Output
_____no_output_____
###Markdown
--- TrainingTraining will involve alternating between training the discriminator and the generator. You'll use your functions `real_loss` and `fake_loss` to help you calculate the discriminator losses.* You should train the discriminator by alternating on real and fake images* Then the generator, which tries to trick the discriminator and should have an opposing loss function Saving SamplesYou've been given some code to print out some loss statistics and save some generated "fake" samples. Exercise: Complete the training functionKeep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.
###Code
def train(D, G, n_epochs, print_every=50):
'''Trains adversarial networks for some number of epochs
param, D: the discriminator network
param, G: the generator network
param, n_epochs: number of epochs to train for
param, print_every: when to print and record the models' losses
return: D and G losses'''
# move models to GPU
if train_on_gpu:
D.cuda()
G.cuda()
# keep track of loss and generated, "fake" samples
samples = []
losses = []
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# move z to GPU if available
if train_on_gpu:
fixed_z = fixed_z.cuda()
# epoch training loop
for epoch in range(n_epochs):
# batch training loop
for batch_i, (real_images, _) in enumerate(celeba_train_loader):
batch_size = real_images.size(0)
real_images = scale(real_images)
# ===============================================
# YOUR CODE HERE: TRAIN THE NETWORKS
# ===============================================
# 1. Train the discriminator on real and fake images
# Compute the discriminator losses on real images
d_optimizer.zero_grad()
if train_on_gpu:
real_images = real_images.cuda()
D_real = D(real_images)
d_real_loss = real_loss(D_real)
z = np.random.uniform(-1, 1, size = (batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
D_fake = D(fake_images)
d_fake_loss = fake_loss(D_fake)
d_loss = d_real_loss + d_fake_loss
d_loss.backward()
d_optimizer.step()
# 2. Train the generator with an adversarial loss
g_optimizer.zero_grad()
z = np.random.uniform(-1, 1, size = (batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
D_fake = D(fake_images)
g_loss = real_loss(D_fake)
g_loss.backward()
g_optimizer.step()
# ===============================================
# END OF YOUR CODE
# ===============================================
# Print some loss stats
if batch_i % print_every == 0:
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, n_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# this code assumes your generator is named G, feel free to change the name
# generate and save sample, fake images
G.eval() # for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to training mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
# finally return losses
return losses
###Output
_____no_output_____
###Markdown
Set your number of training epochs and train your GAN!
###Code
# set number of epochs
n_epochs = 7
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# call training function
losses = train(D, G, n_epochs=n_epochs)
###Output
Epoch [ 1/ 7] | d_loss: 1.3437 | g_loss: 1.4623
Epoch [ 1/ 7] | d_loss: 0.1808 | g_loss: 4.6017
Epoch [ 1/ 7] | d_loss: 0.4086 | g_loss: 2.4095
Epoch [ 1/ 7] | d_loss: 0.3046 | g_loss: 2.9941
Epoch [ 1/ 7] | d_loss: 0.7858 | g_loss: 3.8346
Epoch [ 1/ 7] | d_loss: 0.8137 | g_loss: 5.4936
Epoch [ 1/ 7] | d_loss: 0.5189 | g_loss: 3.3199
Epoch [ 1/ 7] | d_loss: 0.5475 | g_loss: 3.6674
Epoch [ 1/ 7] | d_loss: 0.5942 | g_loss: 4.0014
Epoch [ 1/ 7] | d_loss: 0.4961 | g_loss: 3.3523
Epoch [ 1/ 7] | d_loss: 0.8930 | g_loss: 1.9572
Epoch [ 1/ 7] | d_loss: 0.6378 | g_loss: 2.5960
Epoch [ 1/ 7] | d_loss: 0.7803 | g_loss: 2.2604
Epoch [ 1/ 7] | d_loss: 0.5584 | g_loss: 3.4155
Epoch [ 1/ 7] | d_loss: 1.0980 | g_loss: 4.7654
Epoch [ 2/ 7] | d_loss: 0.9456 | g_loss: 1.6985
Epoch [ 2/ 7] | d_loss: 0.6704 | g_loss: 2.2194
Epoch [ 2/ 7] | d_loss: 0.7206 | g_loss: 2.7670
Epoch [ 2/ 7] | d_loss: 0.8976 | g_loss: 2.2704
Epoch [ 2/ 7] | d_loss: 0.8094 | g_loss: 1.9635
Epoch [ 2/ 7] | d_loss: 0.7082 | g_loss: 2.8806
Epoch [ 2/ 7] | d_loss: 0.7393 | g_loss: 2.3599
Epoch [ 2/ 7] | d_loss: 0.7140 | g_loss: 2.2049
Epoch [ 2/ 7] | d_loss: 0.8294 | g_loss: 1.8843
Epoch [ 2/ 7] | d_loss: 0.6283 | g_loss: 2.1773
Epoch [ 2/ 7] | d_loss: 1.1451 | g_loss: 3.5512
Epoch [ 2/ 7] | d_loss: 0.7456 | g_loss: 2.1122
Epoch [ 2/ 7] | d_loss: 0.5539 | g_loss: 2.1701
Epoch [ 2/ 7] | d_loss: 0.6095 | g_loss: 1.7768
Epoch [ 2/ 7] | d_loss: 0.5747 | g_loss: 2.1822
Epoch [ 3/ 7] | d_loss: 0.6626 | g_loss: 3.1909
Epoch [ 3/ 7] | d_loss: 0.8303 | g_loss: 2.0593
Epoch [ 3/ 7] | d_loss: 1.2030 | g_loss: 1.3203
Epoch [ 3/ 7] | d_loss: 0.8736 | g_loss: 3.0381
Epoch [ 3/ 7] | d_loss: 0.9104 | g_loss: 1.4243
Epoch [ 3/ 7] | d_loss: 0.7584 | g_loss: 1.8666
Epoch [ 3/ 7] | d_loss: 0.7212 | g_loss: 2.4640
Epoch [ 3/ 7] | d_loss: 0.9704 | g_loss: 1.0351
Epoch [ 3/ 7] | d_loss: 0.5350 | g_loss: 1.7317
Epoch [ 3/ 7] | d_loss: 0.9625 | g_loss: 1.0945
Epoch [ 3/ 7] | d_loss: 0.7676 | g_loss: 3.4429
Epoch [ 3/ 7] | d_loss: 0.5448 | g_loss: 1.9633
Epoch [ 3/ 7] | d_loss: 0.5370 | g_loss: 2.4156
Epoch [ 3/ 7] | d_loss: 0.6661 | g_loss: 2.9805
Epoch [ 3/ 7] | d_loss: 0.7960 | g_loss: 1.9922
Epoch [ 4/ 7] | d_loss: 0.6964 | g_loss: 2.8330
Epoch [ 4/ 7] | d_loss: 0.6284 | g_loss: 1.7921
Epoch [ 4/ 7] | d_loss: 0.9005 | g_loss: 1.5653
Epoch [ 4/ 7] | d_loss: 0.6507 | g_loss: 1.9150
Epoch [ 4/ 7] | d_loss: 0.6436 | g_loss: 2.9485
Epoch [ 4/ 7] | d_loss: 0.6977 | g_loss: 2.4649
Epoch [ 4/ 7] | d_loss: 0.8703 | g_loss: 2.9338
Epoch [ 4/ 7] | d_loss: 0.6558 | g_loss: 2.2860
Epoch [ 4/ 7] | d_loss: 0.6958 | g_loss: 2.2735
Epoch [ 4/ 7] | d_loss: 0.8142 | g_loss: 1.8117
Epoch [ 4/ 7] | d_loss: 0.6945 | g_loss: 1.6126
Epoch [ 4/ 7] | d_loss: 0.5048 | g_loss: 2.3966
Epoch [ 4/ 7] | d_loss: 0.6856 | g_loss: 2.0644
Epoch [ 4/ 7] | d_loss: 0.7057 | g_loss: 2.5178
Epoch [ 4/ 7] | d_loss: 0.9238 | g_loss: 3.4298
Epoch [ 5/ 7] | d_loss: 0.6955 | g_loss: 2.2265
Epoch [ 5/ 7] | d_loss: 0.5745 | g_loss: 2.2352
Epoch [ 5/ 7] | d_loss: 0.5185 | g_loss: 2.3851
Epoch [ 5/ 7] | d_loss: 0.7439 | g_loss: 1.5938
Epoch [ 5/ 7] | d_loss: 0.7433 | g_loss: 2.6999
Epoch [ 5/ 7] | d_loss: 0.7229 | g_loss: 2.0454
Epoch [ 5/ 7] | d_loss: 0.8959 | g_loss: 3.6700
Epoch [ 5/ 7] | d_loss: 0.5602 | g_loss: 2.5104
Epoch [ 5/ 7] | d_loss: 1.1643 | g_loss: 3.7888
Epoch [ 5/ 7] | d_loss: 1.0615 | g_loss: 3.7661
Epoch [ 5/ 7] | d_loss: 0.4750 | g_loss: 2.7299
Epoch [ 6/ 7] | d_loss: 0.9757 | g_loss: 1.3070
Epoch [ 6/ 7] | d_loss: 0.6079 | g_loss: 2.2403
Epoch [ 6/ 7] | d_loss: 0.7654 | g_loss: 1.4727
Epoch [ 6/ 7] | d_loss: 0.9962 | g_loss: 1.3832
Epoch [ 6/ 7] | d_loss: 0.6680 | g_loss: 1.7448
Epoch [ 6/ 7] | d_loss: 0.8952 | g_loss: 1.4150
Epoch [ 6/ 7] | d_loss: 1.0704 | g_loss: 0.8890
Epoch [ 6/ 7] | d_loss: 0.5539 | g_loss: 2.3630
Epoch [ 6/ 7] | d_loss: 0.8051 | g_loss: 2.2881
Epoch [ 6/ 7] | d_loss: 0.5421 | g_loss: 2.5312
Epoch [ 6/ 7] | d_loss: 0.7046 | g_loss: 2.3963
Epoch [ 6/ 7] | d_loss: 0.7867 | g_loss: 2.6125
Epoch [ 7/ 7] | d_loss: 0.8585 | g_loss: 3.1598
Epoch [ 7/ 7] | d_loss: 0.6355 | g_loss: 1.8086
Epoch [ 7/ 7] | d_loss: 0.5664 | g_loss: 2.2420
Epoch [ 7/ 7] | d_loss: 0.4735 | g_loss: 3.4055
Epoch [ 7/ 7] | d_loss: 0.5287 | g_loss: 1.8008
Epoch [ 7/ 7] | d_loss: 0.5850 | g_loss: 1.8114
Epoch [ 7/ 7] | d_loss: 0.6771 | g_loss: 1.8029
Epoch [ 7/ 7] | d_loss: 0.4845 | g_loss: 1.7537
Epoch [ 7/ 7] | d_loss: 0.6917 | g_loss: 1.5803
Epoch [ 7/ 7] | d_loss: 0.6641 | g_loss: 2.0089
Epoch [ 7/ 7] | d_loss: 0.5704 | g_loss: 2.0654
Epoch [ 7/ 7] | d_loss: 1.0926 | g_loss: 1.7776
Epoch [ 7/ 7] | d_loss: 0.6590 | g_loss: 1.4882
###Markdown
Training loss. Plot the training losses for the generator and discriminator, recorded after each epoch.
###Code
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
###Output
_____no_output_____
###Markdown
Generator samples from training. View samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.
###Code
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach().cpu().numpy()
img = np.transpose(img, (1, 2, 0))
img = ((img + 1)*255 / (2)).astype(np.uint8)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((32,32,3)))
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
###Output
_____no_output_____
###Markdown
Face Generation. In this project, you'll define and train a DCGAN on a dataset of faces. Your goal is to get a generator network to generate *new* images of faces that look as realistic as possible! The project will be broken down into a series of tasks from **loading in data to defining and training adversarial networks**. At the end of the notebook, you'll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise. Get the Data. You'll be using the [CelebFaces Attributes Dataset (CelebA)](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to train your adversarial networks. This dataset is more complex than the number datasets (like MNIST or SVHN) you've been working with, and so, you should prepare to define deeper networks and train them for a longer time to get good results. It is suggested that you utilize a GPU for training. Pre-processed Data. Since the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. Some sample data is shown below.> If you are working locally, you can download this data [by clicking here](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5be7eb6f_processed-celeba-small/processed-celeba-small.zip). This is a zip file that you'll need to extract in the home directory of this notebook for further loading and processing. After extracting the data, you should be left with a directory of data `processed_celeba_small/`
###Code
# can comment out after executing
!unzip processed_celeba_small.zip
data_dir = 'processed_celeba_small/'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
import problem_unittests as tests
#import helper
%matplotlib inline
###Output
_____no_output_____
###Markdown
Visualize the CelebA Data. The [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations, you'll only need the images. Note that these are color images with [3 color channels (RGB)](https://en.wikipedia.org/wiki/Channel_(digital_image)#RGB_Images) each. Pre-process and Load the Data. Since the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. This *pre-processed* dataset is a smaller subset of the very large CelebA data.> There are a few other steps that you'll need to **transform** this data and create a **DataLoader**. Exercise: Complete the following `get_dataloader` function, such that it satisfies these requirements: * Your images should be square, Tensor images of size `image_size x image_size` in the x and y dimension. * Your function should return a DataLoader that shuffles and batches these Tensor images. ImageFolder. To create a dataset given a directory of images, it's recommended that you use PyTorch's [ImageFolder](https://pytorch.org/docs/stable/torchvision/datasets.html#imagefolder) wrapper, with a root directory `processed_celeba_small/` and data transformation passed in.
###Code
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms
def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
"""
Batch the neural network data using DataLoader
:param batch_size: The size of each batch; the number of images in a batch
:param img_size: The square size of the image data (x, y)
:param data_dir: Directory where image data is located
:return: DataLoader with batched data
"""
# TODO: Implement function and return a dataloader
transform = transforms.Compose([transforms.Resize(image_size),
transforms.ToTensor()])
train_data = datasets.ImageFolder(data_dir, transform=transform)
return torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True)  # shuffle, as required above
###Output
_____no_output_____
###Markdown
Create a DataLoader. Exercise: Create a DataLoader `celeba_train_loader` with appropriate hyperparameters. Call the above function and create a dataloader to view images. * You can decide on any reasonable `batch_size` parameter. * Your `image_size` **must be** `32`. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!
###Code
# Define function hyperparameters
batch_size = 128
img_size = 32
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)
###Output
_____no_output_____
###Markdown
Next, you can view some images! You should see square images of somewhat-centered faces. Note: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image; suggested `imshow` code is below, but it may not be perfect.
###Code
# helper display function
def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = dataiter.next() # _ for no labels
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(20, 4))
plot_size=20
for idx in np.arange(plot_size):
ax = fig.add_subplot(2, plot_size/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
###Output
_____no_output_____
###Markdown
Exercise: Pre-process your image data and scale it to a pixel range of -1 to 1. You need to do a bit of pre-processing; you know that the output of a `tanh` activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
###Code
# TODO: Complete the scale function
def scale(x, feature_range=(-1, 1)):
''' Scale takes in an image x and returns that image, scaled
with a feature_range of pixel values from -1 to 1.
This function assumes that the input x is already scaled from 0-1.'''
# assume x is scaled to (0, 1)
# scale to feature_range and return scaled x
low, high = feature_range
x = x*(high-low)+low
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# check scaled range
# should be close to -1 to 1
img = images[0]
scaled_img = scale(img)
print('Min: ', scaled_img.min())
print('Max: ', scaled_img.max())
###Output
Min: tensor(-0.9529)
Max: tensor(0.9451)
###Markdown
--- Define the Model. A GAN is comprised of two adversarial networks, a discriminator and a generator. Discriminator. Your first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with **normalization**. You are also allowed to create any helper functions that may be useful. Exercise: Complete the Discriminator class. * The inputs to the discriminator are 32x32x3 tensor images. * The output should be a single value that will indicate whether a given image is real or fake.
###Code
import torch.nn as nn
import torch.nn.functional as F
class Discriminator(nn.Module):
# I'll be making this architecture:
#https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/b82f18222e46c27138fa146e4f5fa8b1bd046dc6/dcgan-svhn/assets/conv_discriminator.png
def __init__(self, conv_dim):
"""
Initialize the Discriminator Module
:param conv_dim: The depth of the first convolutional layer
"""
super(Discriminator, self).__init__()
self.conv_dim = conv_dim
# complete init function
self.conv1 = nn.Conv2d(3, conv_dim, 4, stride=2, padding=1)
# sees 16x16x..
self.conv2 = nn.Conv2d(conv_dim, conv_dim*2, 4, stride=2, padding=1)
# sees 8x8x..
self.batch1 = nn.BatchNorm2d(conv_dim*2)
self.conv3 = nn.Conv2d(conv_dim*2, conv_dim*4, 4, stride=2, padding=1)
self.batch2 = nn.BatchNorm2d(conv_dim*4)
# sees 4x4x..
self.fc = nn.Linear(4*4*conv_dim*4, 1)
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: Discriminator logits; the output of the neural network
"""
# define feedforward behavior
x = F.leaky_relu(self.conv1(x), 0.2)
x = self.batch1(F.leaky_relu(self.conv2(x), 0.2))
x = self.batch2(F.leaky_relu(self.conv3(x), 0.2))
x = x.view(-1, 4*4*self.conv_dim*4)
x = self.fc(x)
return x
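# Optional sanity check (a sketch, not part of the exercise): with conv_dim=32, a dummy
# batch of 32x32x3 images should map to one logit per image.
_D_check = Discriminator(32)
print(_D_check(torch.randn(4, 3, 32, 32)).shape)  # expect torch.Size([4, 1])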
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(Discriminator)
###Output
Tests Passed
###Markdown
Generator. The generator should upsample an input and generate a *new* image of the same size as our training data `32x32x3`. This should be mostly transpose convolutional layers with normalization applied to the outputs. Exercise: Complete the Generator class. * The inputs to the generator are vectors of some length `z_size`. * The output should be an image of shape `32x32x3`
###Code
class Generator(nn.Module):
def __init__(self, z_size, conv_dim):
# I'll be making this network
#https://github.com/udacity/deep-learning-v2-pytorch/raw/b82f18222e46c27138fa146e4f5fa8b1bd046dc6/dcgan-svhn/assets/conv_generator.png
"""
Initialize the Generator Module
:param z_size: The length of the input latent vector, z
:param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
"""
super(Generator, self).__init__()
# complete init function
self.z_size = z_size
self.conv_dim = conv_dim
self.fc = nn.Linear(z_size, 4*4*conv_dim*4)
self.conv1 = nn.ConvTranspose2d(conv_dim*4, conv_dim*2, 4, stride=2, padding=1, bias=False)
self.batch1 = nn.BatchNorm2d(conv_dim*2)
self.conv2 = nn.ConvTranspose2d(conv_dim*2, conv_dim, 4, stride=2, padding=1, bias=False)
self.batch2 = nn.BatchNorm2d(conv_dim)
self.conv3 = nn.ConvTranspose2d(conv_dim, 3, 4, stride=2, padding=1, bias=False)
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: A 32x32x3 Tensor image as output
"""
# define feedforward behavior
x = self.fc(x)
x = x.view(-1, self.conv_dim*4, 4, 4) # (batch_size, depth, 4, 4)
x = self.batch1(F.relu(self.conv1(x)))
x = self.batch2(F.relu(self.conv2(x)))
x = torch.tanh(self.conv3(x))  # use torch.tanh; F.tanh is deprecated
return x
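# Optional sanity check (a sketch, not part of the exercise): with z_size=100 and
# conv_dim=32, a dummy latent batch should come out as 32x32x3 images.
_G_check = Generator(z_size=100, conv_dim=32)
print(_G_check(torch.randn(4, 100)).shape)  # expect torch.Size([4, 3, 32, 32])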
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(Generator)
###Output
Tests Passed
###Markdown
Initialize the weights of your networks. To help your models converge, you should initialize the weights of the convolutional and linear layers in your model. From reading the [original DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf), they say:> All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02. So, your next task will be to define a weight initialization function that does just this! You can refer back to the lesson on weight initialization or even consult existing model code, such as that from [the `networks.py` file in CycleGAN Github repository](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/networks.py) to help you complete this function. Exercise: Complete the weight initialization function. * This should initialize only **convolutional** and **linear** layers. * Initialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02. * The bias terms, if they exist, may be left alone or set to 0.
###Code
from torch.nn import init
def weights_init_normal(m):
"""
Applies initial weights to certain layers in a model .
The weights are taken from a normal distribution
with mean = 0, std dev = 0.02.
:param m: A module or layer in a network
"""
# classname will be something like:
# `Conv`, `BatchNorm2d`, `Linear`, etc.
classname = m.__class__.__name__
# TODO: Apply initial weights to convolutional and linear layers
if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
init.normal_(m.weight.data, 0.0, 0.02)
if hasattr(m, 'bias') and m.bias is not None:
init.constant_(m.bias.data, 0.0)
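# Quick sanity check (a sketch, not part of the exercise): after applying the initializer
# to a throwaway conv layer, the weight std should be close to 0.02.
_init_check = nn.Conv2d(3, 32, 4)
weights_init_normal(_init_check)
print('weight std after init (expect ~0.02):', _init_check.weight.data.std().item())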
###Output
_____no_output_____
###Markdown
Build complete network. Define your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def build_network(d_conv_dim, g_conv_dim, z_size):
# define discriminator and generator
D = Discriminator(d_conv_dim)
G = Generator(z_size=z_size, conv_dim=g_conv_dim)
# initialize model weights
D.apply(weights_init_normal)
G.apply(weights_init_normal)
print(D)
print()
print(G)
return D, G
###Output
_____no_output_____
###Markdown
Exercise: Define model hyperparameters
###Code
# Define model hyperparams
d_conv_dim = 32
g_conv_dim = 32
z_size = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
###Output
Discriminator(
(conv1): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(conv2): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(batch1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(batch2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(fc): Linear(in_features=2048, out_features=1, bias=True)
)
Generator(
(fc): Linear(in_features=100, out_features=2048, bias=True)
(conv1): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(batch1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(batch2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
)
###Markdown
Training on GPU. Check if you can train on GPU. Here, we'll set this as a boolean variable `train_on_gpu`. Later, you'll be responsible for making sure that models, model inputs, and loss function arguments are moved to GPU, where appropriate.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
else:
print('Training on GPU!')
###Output
Training on GPU!
###Markdown
--- Discriminator and Generator Losses. Now we need to calculate the losses for both types of adversarial networks. Discriminator Losses: * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`. * Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Generator Loss. The generator loss will look similar, only with flipped labels. The generator's goal is to get the discriminator to *think* its generated images are *real*. Exercise: Complete real and fake loss functions. **You may choose to use either cross entropy or a least squares error loss to complete the following `real_loss` and `fake_loss` functions.**
###Code
def real_loss(D_out):
'''Calculates how close discriminator outputs are to being real.
param, D_out: discriminator logits
return: real loss'''
labels = torch.ones_like(D_out.squeeze())
if train_on_gpu:
labels = labels.cuda()
criterion = nn.BCEWithLogitsLoss()
loss = criterion(D_out.squeeze(), labels)
return loss
def fake_loss(D_out):
'''Calculates how close discriminator outputs are to being fake.
param, D_out: discriminator logits
return: fake loss'''
labels = torch.zeros_like(D_out.squeeze())
if train_on_gpu:
labels = labels.cuda()
criterion = nn.BCEWithLogitsLoss()
loss = criterion(D_out.squeeze(), labels)
return loss
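# The exercise above notes that a least squares error loss is also acceptable. A minimal
# sketch of that alternative (LSGAN-style), kept separate so the BCE-based functions above
# remain the ones actually used for training:
def real_loss_lsq(D_out):
    '''Least-squares real loss: penalizes (D(x) - 1)^2.'''
    return torch.mean((D_out.squeeze() - 1)**2)
def fake_loss_lsq(D_out):
    '''Least-squares fake loss: penalizes D(G(z))^2.'''
    return torch.mean(D_out.squeeze()**2)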
###Output
_____no_output_____
###Markdown
Optimizers. Exercise: Define optimizers for your Discriminator (D) and Generator (G). Define optimizers for your models with appropriate hyperparameters.
###Code
import torch.optim as optim
#following https://arxiv.org/pdf/1511.06434.pdf
# params
learning_rate = 0.0002
beta_1 = 0.5
beta_2 = 0.999
# Create optimizers for the discriminator D and generator G
d_optimizer = optim.Adam(D.parameters(), learning_rate, [beta_1, beta_2])
g_optimizer = optim.Adam(G.parameters(), learning_rate, [beta_1, beta_2])
###Output
_____no_output_____
###Markdown
--- Training. Training will involve alternating between training the discriminator and the generator. You'll use your functions `real_loss` and `fake_loss` to help you calculate the discriminator losses. * You should train the discriminator by alternating on real and fake images. * Then the generator, which tries to trick the discriminator and should have an opposing loss function. Saving Samples. You've been given some code to print out some loss statistics and save some generated "fake" samples. Exercise: Complete the training function. Keep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.
###Code
def train(D, G, n_epochs, print_every=50):
'''Trains adversarial networks for some number of epochs
param, D: the discriminator network
param, G: the generator network
param, n_epochs: number of epochs to train for
param, print_every: when to print and record the models' losses
return: D and G losses'''
# move models to GPU
if train_on_gpu:
D.cuda()
G.cuda()
# keep track of loss and generated, "fake" samples
samples = []
losses = []
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# move z to GPU if available
if train_on_gpu:
fixed_z = fixed_z.cuda()
# epoch training loop
for epoch in range(n_epochs):
# batch training loop
for batch_i, (real_images, _) in enumerate(celeba_train_loader):
batch_size = real_images.size(0)
real_images = scale(real_images)
# ===============================================
# YOUR CODE HERE: TRAIN THE NETWORKS
# ===============================================
# 1. Train the discriminator on real and fake images
d_optimizer.zero_grad()
if train_on_gpu:
real_images = real_images.cuda()
D_real = D(real_images)
d_real_loss = real_loss(D_real)
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
D_fake = D(fake_images)
d_fake_loss = fake_loss(D_fake)
d_loss = d_real_loss + d_fake_loss
d_loss.backward()
d_optimizer.step()
# 2. Train the generator with an adversarial loss
g_optimizer.zero_grad()
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
D_fake = D(fake_images)
g_loss = real_loss(D_fake)
g_loss.backward()
g_optimizer.step()
# ===============================================
# END OF YOUR CODE
# ===============================================
# Print some loss stats
if batch_i % print_every == 0:
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, n_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# this code assumes your generator is named G, feel free to change the name
# generate and save sample, fake images
G.eval() # for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to training mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
# finally return losses
return losses
###Output
_____no_output_____
###Markdown
Set your number of training epochs and train your GAN!
###Code
# set number of epochs
n_epochs = 20
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# call training function
losses = train(D, G, n_epochs=n_epochs)
###Output
Epoch [ 1/ 20] | d_loss: 1.7427 | g_loss: 0.5970
Epoch [ 1/ 20] | d_loss: 0.0740 | g_loss: 3.6904
Epoch [ 1/ 20] | d_loss: 0.0449 | g_loss: 4.2517
Epoch [ 1/ 20] | d_loss: 0.1251 | g_loss: 3.5460
Epoch [ 1/ 20] | d_loss: 0.5474 | g_loss: 2.5696
Epoch [ 1/ 20] | d_loss: 0.7517 | g_loss: 1.5036
Epoch [ 1/ 20] | d_loss: 0.8189 | g_loss: 1.2194
Epoch [ 1/ 20] | d_loss: 0.7837 | g_loss: 1.9027
Epoch [ 1/ 20] | d_loss: 0.9164 | g_loss: 1.1374
Epoch [ 1/ 20] | d_loss: 0.8254 | g_loss: 1.7522
Epoch [ 1/ 20] | d_loss: 1.0073 | g_loss: 1.3173
Epoch [ 1/ 20] | d_loss: 1.1840 | g_loss: 2.1046
Epoch [ 1/ 20] | d_loss: 1.0700 | g_loss: 0.7278
Epoch [ 1/ 20] | d_loss: 0.9186 | g_loss: 1.5632
Epoch [ 1/ 20] | d_loss: 0.9053 | g_loss: 1.1757
Epoch [ 2/ 20] | d_loss: 1.0057 | g_loss: 0.8216
Epoch [ 2/ 20] | d_loss: 0.9260 | g_loss: 1.2068
Epoch [ 2/ 20] | d_loss: 1.0479 | g_loss: 1.5992
Epoch [ 2/ 20] | d_loss: 1.1101 | g_loss: 0.9789
Epoch [ 2/ 20] | d_loss: 1.1327 | g_loss: 1.0723
Epoch [ 2/ 20] | d_loss: 1.2674 | g_loss: 0.4947
Epoch [ 2/ 20] | d_loss: 1.3174 | g_loss: 1.0939
Epoch [ 2/ 20] | d_loss: 0.7901 | g_loss: 1.2068
Epoch [ 2/ 20] | d_loss: 1.0003 | g_loss: 1.1316
Epoch [ 2/ 20] | d_loss: 1.1252 | g_loss: 2.1196
Epoch [ 2/ 20] | d_loss: 1.2768 | g_loss: 2.0847
Epoch [ 2/ 20] | d_loss: 1.2234 | g_loss: 0.9241
Epoch [ 2/ 20] | d_loss: 1.2168 | g_loss: 0.9574
Epoch [ 2/ 20] | d_loss: 0.9800 | g_loss: 1.1470
Epoch [ 2/ 20] | d_loss: 0.9704 | g_loss: 1.0632
Epoch [ 3/ 20] | d_loss: 1.2513 | g_loss: 1.8542
Epoch [ 3/ 20] | d_loss: 0.8739 | g_loss: 1.4763
Epoch [ 3/ 20] | d_loss: 0.9837 | g_loss: 2.2136
Epoch [ 3/ 20] | d_loss: 1.2459 | g_loss: 1.4557
Epoch [ 3/ 20] | d_loss: 1.1252 | g_loss: 1.6854
Epoch [ 3/ 20] | d_loss: 1.0313 | g_loss: 0.8557
Epoch [ 3/ 20] | d_loss: 1.0825 | g_loss: 1.0836
Epoch [ 3/ 20] | d_loss: 1.1981 | g_loss: 1.8777
Epoch [ 3/ 20] | d_loss: 1.0294 | g_loss: 1.4330
Epoch [ 3/ 20] | d_loss: 1.1590 | g_loss: 1.5991
Epoch [ 3/ 20] | d_loss: 1.0552 | g_loss: 1.2931
Epoch [ 3/ 20] | d_loss: 0.9492 | g_loss: 1.1740
Epoch [ 3/ 20] | d_loss: 0.9858 | g_loss: 1.0084
Epoch [ 3/ 20] | d_loss: 1.0645 | g_loss: 1.4179
Epoch [ 3/ 20] | d_loss: 0.9441 | g_loss: 1.0812
Epoch [ 4/ 20] | d_loss: 1.2621 | g_loss: 1.5526
Epoch [ 4/ 20] | d_loss: 1.0309 | g_loss: 1.6425
Epoch [ 4/ 20] | d_loss: 1.1003 | g_loss: 0.8838
Epoch [ 4/ 20] | d_loss: 1.0158 | g_loss: 1.3945
Epoch [ 4/ 20] | d_loss: 1.1909 | g_loss: 0.6678
Epoch [ 4/ 20] | d_loss: 0.9112 | g_loss: 0.9705
Epoch [ 4/ 20] | d_loss: 1.1023 | g_loss: 1.3225
Epoch [ 4/ 20] | d_loss: 0.9295 | g_loss: 1.3371
Epoch [ 4/ 20] | d_loss: 1.3349 | g_loss: 1.2650
Epoch [ 4/ 20] | d_loss: 1.0429 | g_loss: 0.9364
Epoch [ 4/ 20] | d_loss: 1.5581 | g_loss: 2.6967
Epoch [ 4/ 20] | d_loss: 0.8518 | g_loss: 1.4420
Epoch [ 4/ 20] | d_loss: 1.1658 | g_loss: 0.9626
Epoch [ 4/ 20] | d_loss: 1.0161 | g_loss: 1.9128
Epoch [ 4/ 20] | d_loss: 1.2549 | g_loss: 1.6670
Epoch [ 5/ 20] | d_loss: 1.0224 | g_loss: 0.9585
Epoch [ 5/ 20] | d_loss: 0.9697 | g_loss: 1.3222
Epoch [ 5/ 20] | d_loss: 0.9127 | g_loss: 1.6396
Epoch [ 5/ 20] | d_loss: 0.9980 | g_loss: 0.9969
Epoch [ 5/ 20] | d_loss: 1.3951 | g_loss: 0.9512
Epoch [ 5/ 20] | d_loss: 1.1119 | g_loss: 1.2140
Epoch [ 5/ 20] | d_loss: 1.1610 | g_loss: 0.7973
Epoch [ 5/ 20] | d_loss: 0.9650 | g_loss: 1.4245
Epoch [ 5/ 20] | d_loss: 1.3614 | g_loss: 1.5490
Epoch [ 5/ 20] | d_loss: 0.9901 | g_loss: 1.0379
Epoch [ 5/ 20] | d_loss: 1.0795 | g_loss: 1.4740
Epoch [ 5/ 20] | d_loss: 0.9378 | g_loss: 0.9393
Epoch [ 5/ 20] | d_loss: 1.0541 | g_loss: 0.9625
Epoch [ 5/ 20] | d_loss: 1.1215 | g_loss: 1.7283
Epoch [ 5/ 20] | d_loss: 0.9887 | g_loss: 1.4018
Epoch [ 6/ 20] | d_loss: 0.8670 | g_loss: 1.1408
Epoch [ 6/ 20] | d_loss: 1.0011 | g_loss: 1.3362
Epoch [ 6/ 20] | d_loss: 1.1036 | g_loss: 1.3586
Epoch [ 6/ 20] | d_loss: 0.8444 | g_loss: 0.6748
Epoch [ 6/ 20] | d_loss: 1.1064 | g_loss: 0.5956
Epoch [ 6/ 20] | d_loss: 0.9141 | g_loss: 0.6877
Epoch [ 6/ 20] | d_loss: 1.1119 | g_loss: 0.9553
Epoch [ 6/ 20] | d_loss: 1.0663 | g_loss: 1.1732
Epoch [ 6/ 20] | d_loss: 1.3755 | g_loss: 2.0979
Epoch [ 6/ 20] | d_loss: 0.8307 | g_loss: 1.6909
Epoch [ 6/ 20] | d_loss: 0.9496 | g_loss: 1.3523
Epoch [ 6/ 20] | d_loss: 0.8106 | g_loss: 1.3695
Epoch [ 6/ 20] | d_loss: 1.0070 | g_loss: 0.8168
Epoch [ 6/ 20] | d_loss: 0.7614 | g_loss: 1.4107
Epoch [ 6/ 20] | d_loss: 0.9773 | g_loss: 1.3105
Epoch [ 7/ 20] | d_loss: 0.8796 | g_loss: 1.7589
Epoch [ 7/ 20] | d_loss: 0.8442 | g_loss: 1.7696
Epoch [ 7/ 20] | d_loss: 0.9290 | g_loss: 1.5166
Epoch [ 7/ 20] | d_loss: 0.7971 | g_loss: 1.2538
Epoch [ 7/ 20] | d_loss: 1.3050 | g_loss: 0.6180
Epoch [ 7/ 20] | d_loss: 0.8229 | g_loss: 1.0684
Epoch [ 7/ 20] | d_loss: 0.8912 | g_loss: 0.9604
Epoch [ 7/ 20] | d_loss: 0.4972 | g_loss: 1.9471
Epoch [ 7/ 20] | d_loss: 0.7046 | g_loss: 1.2223
Epoch [ 7/ 20] | d_loss: 1.0304 | g_loss: 0.9111
Epoch [ 7/ 20] | d_loss: 0.8530 | g_loss: 1.4095
Epoch [ 7/ 20] | d_loss: 0.6777 | g_loss: 1.9372
Epoch [ 7/ 20] | d_loss: 0.8228 | g_loss: 1.6562
Epoch [ 7/ 20] | d_loss: 0.8129 | g_loss: 2.1938
Epoch [ 7/ 20] | d_loss: 0.7455 | g_loss: 1.7060
Epoch [ 8/ 20] | d_loss: 0.8210 | g_loss: 1.1450
Epoch [ 8/ 20] | d_loss: 0.8405 | g_loss: 1.7259
Epoch [ 8/ 20] | d_loss: 0.8548 | g_loss: 1.6497
Epoch [ 8/ 20] | d_loss: 0.6861 | g_loss: 1.2953
Epoch [ 8/ 20] | d_loss: 1.0859 | g_loss: 0.5681
Epoch [ 8/ 20] | d_loss: 0.9105 | g_loss: 0.6603
Epoch [ 8/ 20] | d_loss: 0.7669 | g_loss: 1.0739
Epoch [ 8/ 20] | d_loss: 0.6335 | g_loss: 1.2467
Epoch [ 8/ 20] | d_loss: 1.0648 | g_loss: 2.6402
Epoch [ 8/ 20] | d_loss: 0.6987 | g_loss: 1.5866
Epoch [ 8/ 20] | d_loss: 0.9201 | g_loss: 1.4020
Epoch [ 8/ 20] | d_loss: 0.7432 | g_loss: 1.0046
Epoch [ 8/ 20] | d_loss: 0.5945 | g_loss: 1.4572
Epoch [ 8/ 20] | d_loss: 0.7987 | g_loss: 2.1379
Epoch [ 8/ 20] | d_loss: 0.9306 | g_loss: 2.4405
Epoch [ 9/ 20] | d_loss: 0.8540 | g_loss: 1.4815
Epoch [ 9/ 20] | d_loss: 0.8943 | g_loss: 2.1157
Epoch [ 9/ 20] | d_loss: 0.8691 | g_loss: 1.3261
Epoch [ 9/ 20] | d_loss: 0.7507 | g_loss: 1.1041
Epoch [ 9/ 20] | d_loss: 1.8466 | g_loss: 0.4438
Epoch [ 9/ 20] | d_loss: 0.7156 | g_loss: 1.5514
Epoch [ 9/ 20] | d_loss: 0.6838 | g_loss: 1.9572
Epoch [ 9/ 20] | d_loss: 0.4504 | g_loss: 1.9651
Epoch [ 9/ 20] | d_loss: 0.6725 | g_loss: 0.9480
Epoch [ 9/ 20] | d_loss: 0.8477 | g_loss: 2.2960
Epoch [ 9/ 20] | d_loss: 0.7440 | g_loss: 1.1142
Epoch [ 9/ 20] | d_loss: 0.7324 | g_loss: 2.6842
Epoch [ 9/ 20] | d_loss: 0.9042 | g_loss: 1.9172
Epoch [ 9/ 20] | d_loss: 0.8817 | g_loss: 2.0246
Epoch [ 9/ 20] | d_loss: 0.7863 | g_loss: 1.8130
Epoch [ 10/ 20] | d_loss: 0.8587 | g_loss: 1.7674
Epoch [ 10/ 20] | d_loss: 0.9957 | g_loss: 1.8036
Epoch [ 10/ 20] | d_loss: 0.6021 | g_loss: 1.5647
Epoch [ 10/ 20] | d_loss: 0.7990 | g_loss: 2.5954
Epoch [ 10/ 20] | d_loss: 0.9840 | g_loss: 1.2172
Epoch [ 10/ 20] | d_loss: 0.5301 | g_loss: 1.4668
Epoch [ 10/ 20] | d_loss: 0.6544 | g_loss: 1.2098
Epoch [ 10/ 20] | d_loss: 0.3954 | g_loss: 2.7319
Epoch [ 10/ 20] | d_loss: 0.6500 | g_loss: 1.3433
Epoch [ 10/ 20] | d_loss: 0.6584 | g_loss: 2.6642
Epoch [ 10/ 20] | d_loss: 0.8671 | g_loss: 1.7520
Epoch [ 10/ 20] | d_loss: 0.5751 | g_loss: 2.2429
Epoch [ 10/ 20] | d_loss: 0.5664 | g_loss: 2.7798
Epoch [ 10/ 20] | d_loss: 0.6543 | g_loss: 1.5736
Epoch [ 10/ 20] | d_loss: 0.7166 | g_loss: 0.9169
Epoch [ 11/ 20] | d_loss: 1.1040 | g_loss: 2.8111
Epoch [ 11/ 20] | d_loss: 0.5921 | g_loss: 1.5074
Epoch [ 11/ 20] | d_loss: 0.5983 | g_loss: 1.9776
Epoch [ 11/ 20] | d_loss: 0.6688 | g_loss: 1.1112
Epoch [ 11/ 20] | d_loss: 0.9944 | g_loss: 1.3872
Epoch [ 11/ 20] | d_loss: 0.5391 | g_loss: 2.3324
Epoch [ 11/ 20] | d_loss: 0.6676 | g_loss: 2.0671
Epoch [ 11/ 20] | d_loss: 0.6297 | g_loss: 1.7209
Epoch [ 11/ 20] | d_loss: 1.2555 | g_loss: 3.3177
Epoch [ 11/ 20] | d_loss: 0.8165 | g_loss: 3.5583
Epoch [ 11/ 20] | d_loss: 0.5738 | g_loss: 1.3441
Epoch [ 11/ 20] | d_loss: 0.5576 | g_loss: 2.4064
Epoch [ 11/ 20] | d_loss: 0.5270 | g_loss: 1.4145
Epoch [ 11/ 20] | d_loss: 0.5709 | g_loss: 1.8934
Epoch [ 11/ 20] | d_loss: 0.9731 | g_loss: 2.9229
Epoch [ 12/ 20] | d_loss: 0.9454 | g_loss: 1.1146
Epoch [ 12/ 20] | d_loss: 0.5293 | g_loss: 2.3767
Epoch [ 12/ 20] | d_loss: 0.4577 | g_loss: 1.6173
Epoch [ 12/ 20] | d_loss: 0.5427 | g_loss: 1.4374
Epoch [ 12/ 20] | d_loss: 0.7758 | g_loss: 0.6743
Epoch [ 12/ 20] | d_loss: 1.1891 | g_loss: 2.6057
Epoch [ 12/ 20] | d_loss: 0.4601 | g_loss: 1.6892
Epoch [ 12/ 20] | d_loss: 0.5554 | g_loss: 1.8517
Epoch [ 12/ 20] | d_loss: 0.5846 | g_loss: 1.7792
Epoch [ 12/ 20] | d_loss: 0.6275 | g_loss: 2.0653
Epoch [ 12/ 20] | d_loss: 0.7726 | g_loss: 1.8965
Epoch [ 12/ 20] | d_loss: 0.5542 | g_loss: 1.9118
Epoch [ 12/ 20] | d_loss: 0.5655 | g_loss: 1.7891
Epoch [ 12/ 20] | d_loss: 0.4890 | g_loss: 1.3625
Epoch [ 12/ 20] | d_loss: 0.6606 | g_loss: 2.1471
Epoch [ 13/ 20] | d_loss: 0.5435 | g_loss: 1.7101
Epoch [ 13/ 20] | d_loss: 0.6530 | g_loss: 2.1914
Epoch [ 13/ 20] | d_loss: 0.6449 | g_loss: 1.5639
Epoch [ 13/ 20] | d_loss: 0.6654 | g_loss: 2.6487
Epoch [ 13/ 20] | d_loss: 1.0104 | g_loss: 1.8117
Epoch [ 13/ 20] | d_loss: 0.4323 | g_loss: 1.5480
Epoch [ 13/ 20] | d_loss: 0.4679 | g_loss: 2.0748
Epoch [ 13/ 20] | d_loss: 0.3135 | g_loss: 1.3723
Epoch [ 13/ 20] | d_loss: 0.4980 | g_loss: 1.5192
Epoch [ 13/ 20] | d_loss: 1.1467 | g_loss: 3.8900
Epoch [ 13/ 20] | d_loss: 0.6293 | g_loss: 1.4828
Epoch [ 13/ 20] | d_loss: 0.9025 | g_loss: 1.0082
Epoch [ 13/ 20] | d_loss: 0.4315 | g_loss: 1.7448
Epoch [ 13/ 20] | d_loss: 0.5079 | g_loss: 1.8569
Epoch [ 13/ 20] | d_loss: 0.5872 | g_loss: 2.3551
Epoch [ 14/ 20] | d_loss: 0.7500 | g_loss: 1.7908
Epoch [ 14/ 20] | d_loss: 0.6992 | g_loss: 2.4870
Epoch [ 14/ 20] | d_loss: 0.4824 | g_loss: 3.0691
Epoch [ 14/ 20] | d_loss: 0.6089 | g_loss: 2.8987
Epoch [ 14/ 20] | d_loss: 0.8675 | g_loss: 0.7803
Epoch [ 14/ 20] | d_loss: 0.5113 | g_loss: 0.8475
Epoch [ 14/ 20] | d_loss: 0.4796 | g_loss: 2.1828
Epoch [ 14/ 20] | d_loss: 0.4331 | g_loss: 2.8302
Epoch [ 14/ 20] | d_loss: 0.4958 | g_loss: 1.2750
Epoch [ 14/ 20] | d_loss: 0.4831 | g_loss: 1.6834
Epoch [ 14/ 20] | d_loss: 0.6406 | g_loss: 1.3663
Epoch [ 14/ 20] | d_loss: 0.4392 | g_loss: 2.4227
Epoch [ 14/ 20] | d_loss: 0.4749 | g_loss: 2.5893
Epoch [ 14/ 20] | d_loss: 0.5727 | g_loss: 2.0081
Epoch [ 14/ 20] | d_loss: 0.6909 | g_loss: 3.3945
Epoch [ 15/ 20] | d_loss: 0.4974 | g_loss: 2.5697
Epoch [ 15/ 20] | d_loss: 0.3827 | g_loss: 2.4618
Epoch [ 15/ 20] | d_loss: 0.8037 | g_loss: 1.9015
Epoch [ 15/ 20] | d_loss: 0.3503 | g_loss: 0.9107
Epoch [ 15/ 20] | d_loss: 0.8069 | g_loss: 1.1518
Epoch [ 15/ 20] | d_loss: 0.4530 | g_loss: 2.5920
Epoch [ 15/ 20] | d_loss: 0.4420 | g_loss: 2.5245
Epoch [ 15/ 20] | d_loss: 0.3417 | g_loss: 3.1737
Epoch [ 15/ 20] | d_loss: 0.4439 | g_loss: 1.4100
Epoch [ 15/ 20] | d_loss: 0.3594 | g_loss: 2.0192
Epoch [ 15/ 20] | d_loss: 0.7240 | g_loss: 1.6177
Epoch [ 15/ 20] | d_loss: 0.3615 | g_loss: 2.2966
Epoch [ 15/ 20] | d_loss: 0.2967 | g_loss: 2.4527
Epoch [ 15/ 20] | d_loss: 0.3108 | g_loss: 2.2035
Epoch [ 15/ 20] | d_loss: 0.4708 | g_loss: 2.0773
Epoch [ 16/ 20] | d_loss: 0.4967 | g_loss: 1.6103
Epoch [ 16/ 20] | d_loss: 0.4905 | g_loss: 2.4851
Epoch [ 16/ 20] | d_loss: 0.3964 | g_loss: 2.1665
Epoch [ 16/ 20] | d_loss: 0.2904 | g_loss: 3.0102
Epoch [ 16/ 20] | d_loss: 1.1440 | g_loss: 0.9409
Epoch [ 16/ 20] | d_loss: 0.3308 | g_loss: 2.0421
Epoch [ 16/ 20] | d_loss: 0.3962 | g_loss: 1.9286
Epoch [ 16/ 20] | d_loss: 0.3437 | g_loss: 2.8996
Epoch [ 16/ 20] | d_loss: 0.8635 | g_loss: 3.1621
Epoch [ 16/ 20] | d_loss: 0.3686 | g_loss: 2.2134
Epoch [ 16/ 20] | d_loss: 0.4420 | g_loss: 2.1367
Epoch [ 16/ 20] | d_loss: 0.3622 | g_loss: 2.9702
Epoch [ 16/ 20] | d_loss: 0.4797 | g_loss: 2.1969
Epoch [ 16/ 20] | d_loss: 0.2184 | g_loss: 2.0611
Epoch [ 16/ 20] | d_loss: 0.4389 | g_loss: 3.1045
Epoch [ 17/ 20] | d_loss: 0.4218 | g_loss: 1.6837
Epoch [ 17/ 20] | d_loss: 0.4157 | g_loss: 2.5578
Epoch [ 17/ 20] | d_loss: 0.4740 | g_loss: 2.1637
Epoch [ 17/ 20] | d_loss: 0.2930 | g_loss: 2.5534
Epoch [ 17/ 20] | d_loss: 0.9577 | g_loss: 1.4192
Epoch [ 17/ 20] | d_loss: 0.4440 | g_loss: 2.6027
Epoch [ 17/ 20] | d_loss: 0.5257 | g_loss: 2.0481
Epoch [ 17/ 20] | d_loss: 0.2598 | g_loss: 2.7104
Epoch [ 17/ 20] | d_loss: 0.4821 | g_loss: 1.2132
Epoch [ 17/ 20] | d_loss: 0.2953 | g_loss: 2.2692
Epoch [ 17/ 20] | d_loss: 0.4920 | g_loss: 0.8810
Epoch [ 17/ 20] | d_loss: 0.6228 | g_loss: 2.1119
Epoch [ 17/ 20] | d_loss: 0.3434 | g_loss: 2.0269
Epoch [ 17/ 20] | d_loss: 0.2329 | g_loss: 2.5180
Epoch [ 17/ 20] | d_loss: 0.4720 | g_loss: 2.2222
Epoch [ 18/ 20] | d_loss: 0.4034 | g_loss: 1.6099
Epoch [ 18/ 20] | d_loss: 0.2509 | g_loss: 3.0507
Epoch [ 18/ 20] | d_loss: 0.2563 | g_loss: 3.2339
Epoch [ 18/ 20] | d_loss: 0.3215 | g_loss: 3.6712
Epoch [ 18/ 20] | d_loss: 0.6989 | g_loss: 1.2042
Epoch [ 18/ 20] | d_loss: 0.3903 | g_loss: 3.0697
Epoch [ 18/ 20] | d_loss: 0.4079 | g_loss: 2.5796
Epoch [ 18/ 20] | d_loss: 0.3742 | g_loss: 3.0958
Epoch [ 18/ 20] | d_loss: 0.4286 | g_loss: 1.8928
Epoch [ 18/ 20] | d_loss: 0.2898 | g_loss: 2.9517
Epoch [ 18/ 20] | d_loss: 0.3272 | g_loss: 1.4707
Epoch [ 18/ 20] | d_loss: 0.2091 | g_loss: 2.9885
Epoch [ 18/ 20] | d_loss: 0.4379 | g_loss: 2.4095
Epoch [ 18/ 20] | d_loss: 0.3471 | g_loss: 2.1990
Epoch [ 18/ 20] | d_loss: 0.4438 | g_loss: 2.4786
Epoch [ 19/ 20] | d_loss: 0.6878 | g_loss: 2.8637
Epoch [ 19/ 20] | d_loss: 0.4942 | g_loss: 3.5926
Epoch [ 19/ 20] | d_loss: 0.2152 | g_loss: 2.0616
Epoch [ 19/ 20] | d_loss: 0.3006 | g_loss: 2.7155
Epoch [ 19/ 20] | d_loss: 0.8666 | g_loss: 0.9230
Epoch [ 19/ 20] | d_loss: 0.3419 | g_loss: 3.3136
Epoch [ 19/ 20] | d_loss: 0.5593 | g_loss: 2.6553
Epoch [ 19/ 20] | d_loss: 0.2126 | g_loss: 2.8030
Epoch [ 19/ 20] | d_loss: 0.3593 | g_loss: 1.2921
Epoch [ 19/ 20] | d_loss: 0.5482 | g_loss: 1.9387
Epoch [ 19/ 20] | d_loss: 0.3728 | g_loss: 2.1262
Epoch [ 19/ 20] | d_loss: 0.1574 | g_loss: 2.4795
Epoch [ 19/ 20] | d_loss: 0.2342 | g_loss: 2.7301
Epoch [ 19/ 20] | d_loss: 0.2543 | g_loss: 2.4874
Epoch [ 19/ 20] | d_loss: 0.2847 | g_loss: 1.9708
Epoch [ 20/ 20] | d_loss: 0.3749 | g_loss: 1.6547
Epoch [ 20/ 20] | d_loss: 0.5409 | g_loss: 3.0168
Epoch [ 20/ 20] | d_loss: 0.1401 | g_loss: 3.0096
Epoch [ 20/ 20] | d_loss: 0.3265 | g_loss: 3.4995
Epoch [ 20/ 20] | d_loss: 1.1819 | g_loss: 0.3874
Epoch [ 20/ 20] | d_loss: 0.4109 | g_loss: 3.2515
Epoch [ 20/ 20] | d_loss: 0.1746 | g_loss: 3.4196
Epoch [ 20/ 20] | d_loss: 0.2000 | g_loss: 2.6013
Epoch [ 20/ 20] | d_loss: 0.3369 | g_loss: 1.6194
Epoch [ 20/ 20] | d_loss: 0.7840 | g_loss: 2.9401
Epoch [ 20/ 20] | d_loss: 0.3343 | g_loss: 1.8324
Epoch [ 20/ 20] | d_loss: 0.1618 | g_loss: 2.7871
Epoch [ 20/ 20] | d_loss: 0.2252 | g_loss: 2.5882
Epoch [ 20/ 20] | d_loss: 0.4330 | g_loss: 4.1871
Epoch [ 20/ 20] | d_loss: 0.4243 | g_loss: 4.0376
###Markdown
Training loss. Plot the training losses for the generator and discriminator, recorded after each epoch.
###Code
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
###Output
_____no_output_____
###Markdown
Generator samples from training. View samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.
###Code
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach().cpu().numpy()
img = np.transpose(img, (1, 2, 0))
img = ((img + 1)*255 / (2)).astype(np.uint8)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((32,32,3)))
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
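# Optional extra (a sketch, not required by the project): draw a fresh batch of latent
# vectors and sample brand-new faces from the trained generator, rather than only
# re-viewing the fixed samples saved during training.
fresh_z = torch.from_numpy(np.random.uniform(-1, 1, size=(16, z_size))).float()
if train_on_gpu:
    fresh_z = fresh_z.cuda()
G.eval()
fresh_samples = G(fresh_z)
_ = view_samples(0, [fresh_samples])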
###Output
_____no_output_____ |
.ipynb_checkpoints/1_1_dataset_more_features-checkpoint.ipynb | ###Markdown
With the variables we found so far, we achieved a maximum performance of 75% (ROC AUC), so let's try to extract some more features in order to increase the model performance. Let's find the number of acquisitions made by each company.
###Code
#I'm considering only Acquisitions made in USA, with USD (dollars)
acquisitions = pd.read_csv('data/acquisitions.csv')
acquisitions = acquisitions[acquisitions['acquirer_country_code'] == 'USA']
acquisitions[:3]
#acquirer_permalink
#rounds_agg = df_rounds.groupby(['company_permalink', 'funding_round_type'])['raised_amount_usd'].agg({'amount': [ pd.Series.sum, pd.Series.count]})
number_of_acquisitions = acquisitions.groupby(['acquirer_permalink'])['acquirer_permalink'].agg({'amount': [ pd.Series.count]}).reset_index()
number_of_acquisitions.columns = number_of_acquisitions.columns.droplevel()
number_of_acquisitions.columns = ['permalink', 'number_of_acquisitions']
number_of_acquisitions = number_of_acquisitions.set_index('permalink')
number_of_acquisitions[:3]
###Output
_____no_output_____
###Markdown
Let's find the number of investments made by each company
###Code
investments = pd.read_csv('data/investments.csv')
investments = investments[investments['investor_country_code'] == 'USA']
investments[:3]
#acquirer_permalink
#rounds_agg = df_rounds.groupby(['company_permalink', 'funding_round_type'])['raised_amount_usd'].agg({'amount': [ pd.Series.sum, pd.Series.count]})
number_of_investments = investments.groupby(['investor_permalink'])['investor_permalink'].agg({'amount': [ pd.Series.count]}).reset_index()
number_of_investments.columns = number_of_investments.columns.droplevel()
number_of_investments.columns = ['permalink', 'number_of_investments']
number_of_investments = number_of_investments.set_index('permalink')
number_of_investments[:3]
#Number of different companies in which each company have invested in
number_of_unique_investments = investments.groupby(['investor_permalink'])['company_permalink'].agg({'amount': [ pd.Series.nunique]}).reset_index()
number_of_unique_investments.columns = number_of_unique_investments.columns.droplevel()
number_of_unique_investments.columns = ['permalink', 'number_of_unique_investments']
number_of_unique_investments = number_of_unique_investments.set_index('permalink')
number_of_unique_investments[:3]
number_of_investors_per_round = investments.groupby(['company_permalink', 'funding_round_permalink'])['investor_permalink'].agg({'investor_permalink': [ pd.Series.count]}).reset_index()
number_of_investors_per_round.columns = number_of_investors_per_round.columns.droplevel(0)
number_of_investors_per_round.columns = ['company_permalink', 'funding_round_permalink', 'count']
number_of_investors_per_round = number_of_investors_per_round.groupby(['company_permalink']).agg({'count': [ pd.Series.mean]}).reset_index()
number_of_investors_per_round.columns = number_of_investors_per_round.columns.droplevel(0)
number_of_investors_per_round.columns = ['company_permalink', 'number_of_investors_per_round']
number_of_investors_per_round = number_of_investors_per_round.set_index('company_permalink')
number_of_investors_per_round[:3]
from numpy import nanmean
#investments['raised_amount_usd'].dtype()
investments['raised_amount_usd'] = investments['raised_amount_usd'].astype(float)
avg_amount_invested_per_round = investments.groupby(['company_permalink', 'funding_round_permalink'])['raised_amount_usd'].agg({'raised_amount_usd': [ pd.Series.mean]}).reset_index()
avg_amount_invested_per_round.columns = avg_amount_invested_per_round.columns.droplevel(0)
avg_amount_invested_per_round.columns = ['company_permalink', 'funding_round_permalink', 'mean']
avg_amount_invested_per_round = avg_amount_invested_per_round.groupby(['company_permalink']).agg({'mean': [ pd.Series.mean]}).reset_index()
avg_amount_invested_per_round.columns = avg_amount_invested_per_round.columns.droplevel(0)
avg_amount_invested_per_round.columns = ['company_permalink', 'avg_amount_invested_per_round']
avg_amount_invested_per_round = avg_amount_invested_per_round.set_index('company_permalink')
avg_amount_invested_per_round = avg_amount_invested_per_round.fillna(0)
avg_amount_invested_per_round[:3]
startups = startups.join(number_of_acquisitions).join(number_of_investments).join(number_of_unique_investments).join(number_of_investors_per_round).join(avg_amount_invested_per_round)
startups[['number_of_acquisitions', 'number_of_investments', 'number_of_unique_investments','number_of_investors_per_round', 'avg_amount_invested_per_round']] = startups[['number_of_acquisitions', 'number_of_investments', 'number_of_unique_investments','number_of_investors_per_round', 'avg_amount_invested_per_round']].fillna(value=0)
startups[:3]
startups.to_csv('data/startups_1_1.csv')
###Output
_____no_output_____ |
_notebooks/2020_06_10_Digits_recognition.ipynb | ###Markdown
Digits recognition. > Identifying handwritten digits. MNIST ("Modified National Institute of Standards and Technology") is the de facto “hello world” dataset of computer vision. Since its release in 1999, this classic dataset of handwritten images has served as the basis for benchmarking classification algorithms. As new machine learning techniques emerge, MNIST remains a reliable resource for researchers and learners alike. In this competition, your goal is to correctly identify digits from a dataset of tens of thousands of handwritten images. We’ve curated a set of tutorial-style kernels which cover everything from regression to neural networks. We encourage you to experiment with different algorithms to learn first-hand what works well and how techniques compare. Dataset exploration. First, let's load and explore the training dataset.
###Code
import pandas as pd
import numpy as np
import os
for dirname, _, filenames in os.walk('./digits/'):
for filename in filenames:
print(os.path.join(dirname, filename))
TRAIN_CSV = './digits/train.csv'
df = pd.read_csv(TRAIN_CSV)
df.describe()
###Output
_____no_output_____
###Markdown
Our dataset has 42000 rows and 785 columns. The first column, `label`, is the digit, while the remaining 784 columns, `pixel{i}`, represent the value of the i-th pixel.
###Code
import seaborn as sn
sn.set()
sn.countplot(x='label', data=df)
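# To make the pixel columns concrete (an illustrative sketch): reshape the 784 pixel
# values of one row into a 28x28 image and display it.
import matplotlib.pyplot as plt
sample_digit = df.drop(columns=['label']).iloc[0].values.reshape(28, 28)
plt.imshow(sample_digit, cmap='gray')
plt.title(f"label: {df['label'].iloc[0]}")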
###Output
_____no_output_____
###Markdown
We have around 4000 examples of every digit. Let's split our dataset into two parts for training and testing.
###Code
from sklearn.model_selection import train_test_split
train_features, test_features, train_labels, test_labels = train_test_split(df.drop(columns=['label']), df['label'], test_size=0.2, random_state=42)
train_labels
sn.countplot(x='label', data=pd.DataFrame(train_labels))
###Output
_____no_output_____
###Markdown
Model training. Now that we have split our dataset between train and test, let's choose a classification model.
###Code
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier()
tree.fit(train_features, train_labels)
tree.predict(df.drop(columns=['label']).head())
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=10)
forest.fit(train_features, train_labels)
forest.predict(df.drop(columns=['label']).head())
from sklearn.neural_network import MLPClassifier
nn = MLPClassifier(hidden_layer_sizes=(60, 30, 10), random_state=1)
nn.fit(train_features, train_labels)
nn.predict(df.drop(columns=['label']).head())
###Output
_____no_output_____
###Markdown
Model evaluation. Let's evaluate our models using accuracy and recall.
###Code
from sklearn.metrics import precision_score, recall_score, accuracy_score
eval_df = pd.DataFrame(columns=['model', 'accuracy', 'recall'])
for name, model in {'decision tree': tree, 'random forest': forest, 'neural network': nn}.items():
eval_df = eval_df.append({'model':name, 'accuracy': accuracy_score(test_labels, model.predict(test_features)), 'recall':recall_score(test_labels, model.predict(test_features), average='micro')}, ignore_index=True)
eval_df
eval_df.plot(x='model', y='accuracy', kind='bar')
eval_df.plot(x='model', y='recall', kind='bar')
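# Extra diagnostic (a sketch beyond the original notebook): a confusion matrix for the
# neural network shows which digits get confused with which.
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(test_labels, nn.predict(test_features))
sn.heatmap(cm, annot=True, fmt='d')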
###Output
_____no_output_____ |
Notebooks/ParkinsonPrediction.ipynb | ###Markdown
Parkinson's Disease Prediction
###Code
#importing libraries
import pandas as pd
import numpy as np
from sklearn.metrics import accuracy_score
# Read data
df = pd.read_csv("../Datasets/parkinsons.data")
df.head()
# Getting the dependent and independent variables from dataset
X = df.loc[:,df.columns!='status'].values[:,1:]
y = df.loc[:,'status'].values
print(X)
print(y)
# Heatmap visualisation of the correlation coefficients between attributes.
import seaborn as sb
corr_map=df.corr()
sb.heatmap(corr_map,square=True)
# Counting the zeros and ones in status
print(y[y==1].shape[0])
print(y[y==0].shape[0])
# Splitting the dataset into Training and Testing sets
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2,random_state=1)
# Feature Scaling using MinMaxScaler
from sklearn.preprocessing import MinMaxScaler
mn = MinMaxScaler()
X_train = mn.fit_transform(X_train)
X_test = mn.transform(X_test)
# Using XGBoost Classifier to train the model
from xgboost import XGBClassifier
classifier = XGBClassifier()
classifier.fit(X_train,y_train)
# Making Confusion Matrix
from sklearn.metrics import confusion_matrix , accuracy_score
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test,y_pred)
print(cm)
print(accuracy_score(y_test,y_pred)*100)
# Creating a pickle file
import pickle
with open('parkinson_model.pkl','wb') as f:
pickle.dump(classifier,f)
###Output
_____no_output_____
###Markdown
Parkinson's Disease Prediction
###Code
#importing libraries
import pandas as pd
import numpy as np
from sklearn.metrics import accuracy_score
# Read data
df = pd.read_csv("/content/parkinsons.data")
df.head()
# Getting the dependent and independent variables from dataset
X = df.loc[:,df.columns!='status'].values[:,1:]
y = df.loc[:,'status'].values
print(X)
print(y)
# Heatmap visualisation of the correlation coefficients between attributes.
import seaborn as sb
corr_map=df.corr()
sb.heatmap(corr_map,square=True)
# Counting the zeros and ones in status
print(y[y==1].shape[0])
print(y[y==0].shape[0])
# Splitting the dataset into Training and Testing sets
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2,random_state=0)
# feature scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
#applying PCA
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
variance = pca.explained_variance_ratio_
print(variance)
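# Illustrative check (a sketch, not in the original notebook): total variance retained
# by the 2 principal components kept above.
print("cumulative explained variance:", variance.sum())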
# Using XGBoost Classifier to train the model
from xgboost import XGBClassifier
classifier = XGBClassifier()
classifier.fit(X_train,y_train)
#fitting the data in random forest classifier
from sklearn.ensemble import RandomForestClassifier
classifi3 = RandomForestClassifier(n_estimators=16,criterion = "entropy",random_state=0)
classifi3.fit(X_train,y_train)
# predicting results
y2_pred = classifi3.predict(X_test)
# confusion matrix for random forest classifier
from sklearn.metrics import confusion_matrix , accuracy_score
cm = confusion_matrix(y_test,y2_pred)
print(cm)
# Making Confusion Matrix
from sklearn.metrics import confusion_matrix , accuracy_score
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test,y_pred)
print(cm)
print(accuracy_score(y_test,y2_pred)*100)
print(accuracy_score(y_test,y_pred)*100)
# Creating a pickle file
import pickle
with open('parkinson_model.pkl','wb') as f:
pickle.dump(classifier,f)
###Output
_____no_output_____ |
Lectures/03b_ApplicationFourierTransform.ipynb | ###Markdown
Module 3 Application of Fourier Transform. Really, this is an application of the first two modules... Model development of thermocouple response. We now have all the tools to reanalyze the velocimetry combined with thermocouple data to try and develop a model of the thermocouple dynamic response. From that model, we will be able to infer what the real temperature profile would have been. The velocity data are acquired with a non-intrusive laser based technique called Molecular Tagging Velocimetry (MTV). In addition to being non-intrusive, this technique is minimally perturbative as the tracers are molecules. The latter have no inertia at the speed considered here and the technique can be considered an ideal $0^{th}$ order dynamic system. Because of this instantaneous response, the velocity will be considered the forcing function.\begin{align*}V(t) = K_V F(t)\end{align*}here $F(t)$ is the forcing function. The velocity is directly proportional to the forcing function through the static sensitivity constant $K_V$. Thermocouple dynamic response can be described analytically with a first order dynamic system. \begin{align*}\tau \frac{d y}{dt} + y = K F(t)\end{align*}Neglecting the conduction through the thermocouple wires, the time constant can be shown to be:\begin{align*}\tau = \frac{mC}{h_{sf}A_s}\end{align*}with $m$: mass of the thermocouple, $C$: specific heat of the thermocouple, $h_{sf}$: convection heat transfer coefficient from fluid to the sensor, and $A_s$: sensing or wetted surface area of the thermocouple. In the data here, the time constant has been estimated to be $\tau \approx 90$ s with the model above. In particular, it was considered that the heat transfer coefficient was constant since the flow around the thermocouple is laminar. However, the model is very conservative: it neglects conduction through the wires and sheath, and the heat transfer coefficient might not be independent of Reynolds number in the present configuration. As a result, we do not know the actual time constant of the thermocouple and we would like to estimate it. To do so, we will use the velocity time history as a forcing function. This can be accomplished by numerically integrating the velocity history in the ode or by using Fourier transforms. Let's use this second approach. Harmonic response of first order dynamic system to sinusoidal input. This was seen in the previous module and has been slightly updated here. Because each harmonic of the forcing will have a dedicated phase $\Phi$, it is easier to do this using complex notation: $F, \,y$ are complex. So the $k^{th}$ harmonic of the force is $F_{\omega_k}(t) = A_k \mathrm{e}^{i \omega_k t}$, with $\omega_k=2\pi f_k$ the radian frequency of each harmonic; $A_k$ is complex and includes the phase information in it, $\Phi = \tan^{-1} \frac{\Im{A_k}}{\Re{A_k}}$. After some math, the solution of the ode for the $k^{th}$ harmonic is:\begin{align*}y_k(t) = \left( y_0 - \frac{KA_k}{1+\omega_k^2 \tau^2} \left( 1 - i \omega_k \tau \right) \right) e^{-t/\tau} + \frac{KA_k}{1+\omega_k^2\tau^2} (1-i \omega_k \tau) \mathrm{e}^{i \omega_k t} \end{align*}with the steady state phase with respect to the forcing:\begin{align*}\phi_k = \tan^{-1}\left( \frac{\Im{(1-i\omega_k \tau)}}{\Re{(1-i\omega_k \tau)}} \right) = \tan^{-1}(-\omega_k \tau) = -\tan^{-1}(\omega_k \tau)\end{align*}
###Code
import numpy
from matplotlib import pyplot
%matplotlib inline
import csv
import math
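# A small illustrative sketch (not part of the original lecture code): the steady-state
# amplitude ratio and phase of a first order system, M = 1/sqrt(1 + (omega*tau)^2) and
# phi = -arctan(omega*tau), evaluated with the tau ~ 90 s estimate quoted above.
tau_est = 90.0                        # s, conservative estimate from the heat-transfer model
f_demo = numpy.logspace(-4, 0, 200)   # Hz
omega_demo = 2*numpy.pi*f_demo
M_demo = 1/numpy.sqrt(1 + (omega_demo*tau_est)**2)
phi_demo = -numpy.arctan(omega_demo*tau_est)
fig_demo, ax_demo = pyplot.subplots(2, 1, figsize=(6, 6))
ax_demo[0].semilogx(f_demo, M_demo)
ax_demo[0].set_ylabel('amplitude ratio M')
ax_demo[1].semilogx(f_demo, phi_demo*180/numpy.pi)
ax_demo[1].set_ylabel('phase (deg)')
ax_demo[1].set_xlabel('f (Hz)')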
dummy_t_V = []
dummy_V = []
with open('data/V_mod.csv') as csvfile_V:
readCSV = csv.reader(csvfile_V, delimiter=',')
for row in readCSV:
dummy_t_V.append(row[0])
dummy_V.append(row[1])
# time is reported in seconds. Data were recorded at 10 Hz.
t_V = [float(i) for i in dummy_t_V]
V = [float(i) for i in dummy_V]
f_s_V = 10 # sampling frequency for velocity (Hz)
dummy_t_TK = []
dummy_TK = []
with open('data/TK_mod.csv') as csvfile_T:
readCSV = csv.reader(csvfile_T, delimiter=',')
for row in readCSV:
dummy_t_TK.append(row[0])
dummy_TK.append(row[1])
# time is reported in seconds. Data were recorded at 1 Hz.
t_TK = [float(i) for i in dummy_t_TK] # time vector
TK = [float(i) for i in dummy_TK] # temperature vector
f_s_TK = 1 # sampling frequency for temperature (Hz)
fig = pyplot.figure(figsize=(6,6))
pyplot.plot(t_V, V,'r')
pyplot.plot(t_TK, TK,'b');
# Normalize values to display on same graph
# select number of points to treat. Will take 256 s for temperature
N_TK = 256
TK = (TK-numpy.min(TK[0:N_TK]))/(numpy.max(TK[0:N_TK])-numpy.min(TK[0:N_TK]))
N_V = int(N_TK * f_s_V/f_s_TK)
V = (V)/(numpy.max(V[0:N_V]))
fig = pyplot.figure(figsize=(12,6))
pyplot.plot(t_V[0:N_V], V[0:N_V],'r')
pyplot.plot(t_TK[0:N_TK], TK[0:N_TK],'b');
###Output
_____no_output_____
###Markdown
To help with the analysis, we are going to make the data periodic with 0 mean by extending and inverting the data.
###Code
V2 = numpy.zeros(2*N_V)
V2[0:N_V-1] = V[0:N_V-1]
t_V2 = numpy.linspace(0,2*N_V-1, num=2*N_V)/10
for i in range(N_V):
V2[N_V+i] = -V[N_V-1-i]
TK2 = numpy.zeros(2*N_TK)
TK2[0:N_TK-1] = TK[0:N_TK-1]
t_TK2 = numpy.linspace(0,2*N_TK-1, num=2*N_TK)
for i in range(N_TK):
TK2[N_TK+i] = -TK[N_TK-1-i]
fig = pyplot.figure(figsize=(12,6))
pyplot.plot(t_V2, V2,'r')
pyplot.plot(t_TK2, TK2,'b')
pyplot.ylabel('amplitudes');
pyplot.xlabel('time (s)');
sp_TK2 = numpy.fft.fft(TK2) # compute FFT of TK
sp_V2 = numpy.fft.fft(V2) # compute FFT of V
k_TK2 = numpy.arange(2*N_TK)
frq_TK2 = k_TK2/(2*N_TK/f_s_TK) # two sides frequency range
frq_TK2 = frq_TK2[range(int(N_TK))] # one side frequency range
sp1_TK2 = sp_TK2[range(int(N_TK))] # one side spectrum
k_V2 = numpy.arange(2*N_V)
frq_V2 = k_V2/(2*N_V/f_s_V) # two sides frequency range
frq_V2 = frq_V2[range(int(N_V))] # one side frequency range
sp1_V2 = sp_V2[range(int(N_V))] # one side spectrum
fig, ax = pyplot.subplots(2, 1)
ax[0].plot(frq_V2[0:20],abs(sp1_V2[0:20])*2/(2*N_V),'r.'); # plotting the spectrum
ax[0].set_ylabel('|sp(V)|');
ax[1].plot(frq_TK2[0:20],abs(sp1_TK2[0:20])*2/(2*N_TK),'b.') # plotting the spectrum
ax[1].set_xlabel('Freq (Hz)')
ax[1].set_ylabel('|sp(TK)|');
###Output
_____no_output_____
###Markdown
Let's see how many harmonics it takes to reproduce the signals. We are going to define a filter which has value 1 for the harmonics we keep and 0 for the harmonics we reject, and we will multiply the filter with the FFT. In essence, this is an ideal low-pass filter. When filtering the data, we have to be careful to keep both the positive and negative frequencies. Could you justify why?
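As a quick numerical illustration of this point (a sketch that is not part of the original notebook): the FFT of a real signal satisfies conjugate symmetry, $\mathrm{sp}[N-k] = \overline{\mathrm{sp}[k]}$, so a filter that keeps a positive-frequency bin but drops its negative-frequency mirror breaks that symmetry and the inverse FFT is no longer real.
```python
import numpy

x = numpy.random.rand(64)                           # any real-valued signal
sp = numpy.fft.fft(x)
print(numpy.allclose(sp[1], numpy.conj(sp[-1])))    # True: conjugate symmetry

keep_both = sp.copy()
keep_both[5:-4] = 0    # keep bins 0-4 and their negative-frequency mirrors (bins 60-63)
keep_pos = sp.copy()
keep_pos[5:] = 0       # keep only the positive-frequency bins 0-4

print(numpy.abs(numpy.fft.ifft(keep_both).imag).max())  # ~1e-17: reconstruction is still real
print(numpy.abs(numpy.fft.ifft(keep_pos).imag).max())   # clearly nonzero: reconstruction is complex
```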
###Code
# filter spectrum to only keep first n harmonics
n_filt = 200
filt = numpy.zeros(2*N_V)
filt[0] = 1
for i in range(n_filt):
filt[i+1] = 1
filt[2*N_V-1-i] = 1
sp_V2_filt = numpy.multiply(filt,sp_V2)
#print(isp_filt)
isp_V2 = numpy.fft.ifft(sp_V2_filt) # Compute inverse FFT; here we have to take ALL the frequencies
fig = pyplot.figure(figsize=(12,6))
pyplot.plot(t_V2,V2,'r.')
pyplot.plot(t_V2, isp_V2.real,'b.') # plotting the IFFT (keeping only real part, imaginary should be 0 anyway).
pyplot.xlabel('time (s)')
pyplot.ylabel('V');
# filter spectrum to only keep first n harmonics
n_filt = 30
filt = numpy.zeros(2*N_TK)
filt[0] = 1
for i in range(n_filt):
filt[i+1] = 1
filt[2*N_TK-1-i] = 1
sp_TK2_filt = numpy.multiply(filt,sp_TK2)
isp_TK2 = numpy.fft.ifft(sp_TK2_filt) # Compute inverse FFT; here we have to take ALL the frequencies
fig = pyplot.figure(figsize=(12,6))
pyplot.plot(t_TK2,TK2,'r.')
pyplot.plot(t_TK2, isp_TK2.real,'b.') # plotting the IFFT (keeping only real part, imaginary should be 0 anyway).
pyplot.ylabel('TK')
pyplot.xlabel('time (s)');
import cmath
def y_sin(t,T,tau,y_0,K,A):
''' Calculate the output of a first order ODE when the forcing function is
a harmonic (complex exponential) of period T; the phase is carried by the complex amplitude A
Arguments
---------
t: time (in second)
T : period of the forcing sine wave (s)
tau: time constant of the system (s)
K: static sensitivity (dimensionless)
A: amplitude of forcing (unit of F), complex number
Returns
-------
y_sin : Output of 1st order ode, see eqn above.
'''
omega = 2*numpy.pi/T # convert the period T to radian frequency
phi = -numpy.arctan(omega*tau) # steady-state phase lag (informational; the (1-1j*omega*tau) factor below already carries it)
y_sin = (y_0 - K*A/(1+omega**2*tau**2)*(1-1j*omega*tau))*numpy.exp(-t/tau) \
+ K*A/(1+omega**2*tau**2) * (1-1j*omega*tau) * numpy.exp(1j*omega*t)
return y_sin
tau = 15 # estimated time constant (s)
y_0 = 0
K = 1
t=numpy.linspace(0.0,511.9,num=5120) # (s)
# plot the response to the first few harmonics (i = 1 to 4)
for i in range(1,5): #n_filt):
T_s2 = 2*N_V/(i*f_s_V)
A_k = sp_V2_filt[i]*2/(2*N_V)
y_out = y_sin(t,T_s2,tau,y_0,K,A_k)
pyplot.plot(t, y_out.real); # output
tau = 20 # estimated time constant (s)
N_harmonics = 400
y_reconstructed = numpy.zeros(2*N_V)
for i in range(1, N_harmonics): #n_filt):
T_s2 = 2*N_V/(i*f_s_V)
A_k = sp_V2_filt[i]*2/(2*N_V)
y_reconstructed = y_reconstructed + y_sin(t,T_s2,tau,y_0,K,A_k)
y_reconstructed = y_reconstructed
fig = pyplot.figure(figsize=(10,6))
pyplot.plot(t_TK2,TK2.real,'r.')
# normalize the reconstructed temperature
pyplot.plot(t,y_reconstructed.real/numpy.max(y_reconstructed.real),'b.');
pyplot.xlabel('time (s)')
pyplot.ylabel('amplitude');
###Output
_____no_output_____
###Markdown
Let's comment on the overall shape, similarities, and differences. How could you explain the discrepancies? We can now redo the analysis without making the original function periodic.
###Code
sp_V = numpy.fft.fft(V) # compute FFT of V
sp_TK = numpy.fft.fft(TK) # compute FFT of TK
k_TK = numpy.arange(N_TK)
frq_TK = k_TK/(N_TK/f_s_TK) # two sides frequency range
frq_TK = frq_TK[range(int(N_TK/2))] # one side frequency range
sp1_TK = sp_TK[range(int(N_TK/2))]
k_V = numpy.arange(N_V)
frq_V = k_V/(N_V/f_s_V) # two sides frequency range
frq_V = frq_V[range(int(N_V/2))] # one side frequency range
sp1_V = sp_V[range(int(N_V/2))]
fig, ax = pyplot.subplots(2, 1)
ax[0].plot(frq_V[1:20],abs(sp1_V[1:20])*2/(N_V),'r.') # plotting the spectrum
ax[0].set_ylabel('|sp(V)|');
ax[1].plot(frq_V2[1:20],abs(sp1_V2[1:20])*2/(2*N_V),'r.'); # plotting the spectrum
ax[1].set_xlabel('Freq (Hz)')
ax[1].set_ylabel('|sp(TK)|');
# filter velocity spectrum to only keep first n harmonics
n_filt = 200
filt = numpy.zeros(N_V)
filt[0] = 1
for i in range(n_filt):
filt[i+1] = 1
filt[N_V-1-i] = 1
sp_V_filt = numpy.multiply(filt,sp_V)
isp_V = numpy.fft.ifft(sp_V_filt) # Compute inverse FFT; here we have to take ALL the frequencies
fig = pyplot.figure(figsize=(12,6))
pyplot.plot(t_V,V,'r.')
pyplot.plot(t_V, isp_V.real,'b.') # plotting the IFFT (keeping only real part, imaginary should be 0 anyway).
pyplot.xlabel('time (s)')
pyplot.ylabel('Amplitude');
# filter spectrum to only keep first n harmonics
n_filt = 30
filt = numpy.zeros(N_TK)
filt[0] = 1
for i in range(n_filt):
filt[i+1] = 1
filt[N_TK-1-i] = 1
sp_TK_filt = numpy.multiply(filt,sp_TK)
isp_TK = numpy.fft.ifft(sp_TK_filt) # Compute inverse FFT; here we have to take ALL the frequencies
fig = pyplot.figure(figsize=(12,6))
pyplot.plot(t_TK,TK,'r.')
pyplot.plot(t_TK, isp_TK.real,'b.') # plotting the IFFT (keeping only real part, imaginary should be 0 anyway).
pyplot.xlabel('time (s)')
pyplot.ylabel('Amplitude');
tau = 15 # estimated time constant (s)
y_0 = 0
K = 1
t=numpy.linspace(0.0,255.9,num=2560) # (s)
for i in range(1,5): #n_filt):
T_s = N_V/(i*f_s_V)
A_k = sp_V_filt[i]*2/(N_V)
# inverting sine, because of the way I am defining the response to harmonic
y_out = y_sin(t,T_s,tau,y_0,K,A_k)
pyplot.plot(t, y_out.real); # output
#pyplot.plot(t, numpy.sin(2*i*numpy.pi/T_s2*t+Phi)); # output
tau = 15 # estimated time constant (s)
y_reconstructed = numpy.zeros(N_V)
for i in range(1, 200): #n_filt):
T_s = N_V/(i*f_s_V)
A_k = sp_V_filt[i]*2/(N_V)
y_reconstructed = y_reconstructed + y_sin(t,T_s,tau,y_0,K,A_k)
y_reconstructed = y_reconstructed
fig = pyplot.figure(figsize=(12,6))
pyplot.plot(t_TK,TK.real,'r.')
pyplot.plot(t,y_reconstructed.real/numpy.max(y_reconstructed.real),'.b');
pyplot.xlabel('time (s)')
pyplot.ylabel('amplitude');
###Output
_____no_output_____ |
Simple_Feedforward.ipynb | ###Markdown
Simple Feedforward Example
###Code
# Imports to prevent me from messing the code up for Python 3 people
from __future__ import division
from __future__ import print_function
# Regular Python Imports
import numpy as np
import matplotlib.pyplot as plt
import time
from sklearn import datasets # Some nice datasets to use
%matplotlib inline
# Torch imports
import torch
from torch.autograd import Variable, Function
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as pytorch_utils
###Output
_____no_output_____
###Markdown
Define the Neural Network Architecture
###Code
class FeedForwardNN(nn.Module):
"""
Simple fully connected, feedforward network.
This should be ready to go for regression,
but for classification an additional softmax will need to be added on top.
Parameters
----------
layer_sizes : iterable of ints
the size of each layer
nonlinearity : class
Type of nonlinearity to use, e.g. "nn.ReLU" or "nn.Tanh".
This is NOT an initialized object, i.e. this
argument is NOT an input of the form "nn.ReLU()".
"""
def __init__(self, layer_sizes, nonlinearity=None):
super(FeedForwardNN, self).__init__() # Python class stuff
if nonlinearity is None: # Set the default nonlinearity.
nonlinearity = nn.ReLU
# Build the layers
nlayers = len(layer_sizes)
self.layers = []
for i in range(nlayers - 1):
linear = nn.Linear(layer_sizes[i], layer_sizes[i+1])
self.layers.append(linear)
self.layers.append(nonlinearity())
# Put it together to make a neural network using nn.Sequential
self.feed_forward = nn.Sequential(*self.layers)
def forward(self, x):
# Pass the input through the feed-forward layer stack.
x = self.feed_forward(x)
return x
###Output
_____no_output_____
###Markdown
Set Parameters
###Code
# Neural Network Parameters
hidden_layer_sizes = [5] # Sizes of the hidden layers (here one hidden layer of 5 units). More hidden layers can be added by appending to this list.
nonlinearity = nn.LeakyReLU # Nonlinear function to use.
# Training parameters
train_frac = 0.7 # Fraction of training points to use
epochs = 2000 # Number of passes to make over the dataset.
b_size = 32 # Number of datapoints to use in each optimization step (batch size).
learning_rate = 0.01 # Learning rate (step-size for optimizer.)
print_frequency = 100 # How often to print the loss.
###Output
_____no_output_____
###Markdown
Prep the Data Load the iris dataset
###Code
iris_dataset = datasets.load_iris()
X = iris_dataset.data
y = iris_dataset.target
N, n_input = X.shape # Number of points, number of input dimensions
n_classes = np.max(y) + 1 # Number of classes
###Output
_____no_output_____
###Markdown
Do a test-train split.
###Code
N_train = int(train_frac * N)
indices = np.arange(N)
np.random.shuffle(indices)
train_ndxs = indices[:N_train]
test_ndxs = indices[N_train:]
X_train = X[train_ndxs]
y_train = y[train_ndxs]
X_test = X[test_ndxs]
y_test = y[test_ndxs]
###Output
_____no_output_____
###Markdown
Move everything into pytorch data loaders
###Code
# Typecast to pytorch tensors.
X_train_var = torch.from_numpy(X_train).float()
y_train_var = torch.from_numpy(y_train).long()
X_test_var = torch.from_numpy(X_test).float()
y_test_var = torch.from_numpy(y_test).long()
# Construct Tensor Datasets. This bundles a collection of tensors together for easy indexing.
# The i'th element gives the i'th datapoint.
data_train = pytorch_utils.TensorDataset(X_train_var, y_train_var)
data_test = pytorch_utils.TensorDataset(X_test_var, y_test_var)
# This is an iterable we can iterate over to get a batches of data.
train_loader = pytorch_utils.DataLoader(data_train, batch_size=b_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Train the network Initialize the Neural Network
###Code
layer_sizes = [n_input] + hidden_layer_sizes + [n_classes]
model = FeedForwardNN(layer_sizes, nonlinearity)
normalizing_layer = nn.LogSoftmax() # We need a softmax on top of FeedForwardNN because we are doing classification.
###Output
_____no_output_____
###Markdown
Initialize Loss
###Code
loss_fxn = nn.NLLLoss()
###Output
_____no_output_____
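A side note, not part of the original notebook: `nn.NLLLoss` expects log-probabilities, which is why a `LogSoftmax` layer is applied to the network output before the loss. That combination is numerically equivalent to applying `nn.CrossEntropyLoss` directly to the raw logits, as this minimal sketch (with illustrative values) shows:
```python
import torch
import torch.nn as nn

logits = torch.randn(4, 3)              # a batch of 4 samples, 3 classes (illustrative values)
targets = torch.tensor([0, 2, 1, 1])

loss_a = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)
loss_b = nn.CrossEntropyLoss()(logits, targets)
print(torch.allclose(loss_a, loss_b))   # True
```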
###Markdown
Initialize the Optimizer
###Code
optimizer = optim.Adam(model.parameters(),lr=learning_rate)
loss_train = []
for epoch in range(epochs):
epoch_loss = []
for i, data in enumerate(train_loader):
optimizer.zero_grad() # Cleans out any old gradient info.
# Typecast to Variable, the datatype used for backprop
X_i = Variable(data[0], requires_grad=True)
y_i = Variable(data[1], requires_grad=False)
# Run the model forward
Y_i = model(X_i)
Y_i = normalizing_layer(Y_i) # Not all the computation needs to happen inside the net object!
# Evaluate loss and do backprop.
loss = loss_fxn(Y_i, y_i)
loss.backward()
# Run the optimizer.
optimizer.step()
loss_train.append(loss)
epoch_loss.append(loss)
avg_epoch_loss = sum(epoch_loss) / len(epoch_loss)
if epoch % print_frequency == 0:
print('Epoch: %d, Train Loss: %.6f' % (epoch, avg_epoch_loss))
###Output
/home/erik/anaconda/lib/python2.7/site-packages/ipykernel_launcher.py:12: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
if sys.path[0] == '':
###Markdown
Plot the results Get results for all datapoints in numpy
###Code
# Run all data through the network.
all_train_data = Variable(X_train_var)
Y_train_var = normalizing_layer(model(all_train_data))
all_test_data = Variable(X_test_var)
Y_test_var = normalizing_layer(model(all_test_data))
# Typecast back to numpy for plotting
Y_train = np.exp(Y_train_var.data.numpy())
Y_test = np.exp(Y_test_var.data.numpy())
nn_labels_train = np.argmax(Y_train, axis=1) # Y_train/Y_test already hold probabilities (exp of the log-probs above)
nn_labels_test = np.argmax(Y_test, axis=1)
###Output
/home/erik/anaconda/lib/python2.7/site-packages/ipykernel_launcher.py:3: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
This is separate from the ipykernel package so we can avoid doing imports until
/home/erik/anaconda/lib/python2.7/site-packages/ipykernel_launcher.py:5: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
"""
###Markdown
Plot the results
###Code
fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10, 8))
axes[0, 0].scatter(X_train[:,1], X_train[:,3], c=nn_labels_train, label='NN Train')
axes[0, 1].scatter(X_train[:,1], X_train[:,3], c=y_train, label='True Train')
axes[1 ,0].scatter(X_test[:,1], X_test[:,3], c=nn_labels_test, label='NN Test')
axes[1, 1].scatter(X_test[:,1], X_test[:,3], c=y_test, label='True Test')
axf = np.array(axes).ravel()
for ax in axf:
ax.legend(loc='upper right')
axes[1, 0].set_xlabel(iris_dataset.feature_names[1])
axes[1, 0].set_ylabel(iris_dataset.feature_names[3])
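# A possible follow-up (not in the original notebook): quantify the accuracy numerically.
# print("train accuracy:", np.mean(nn_labels_train == y_train))
# print("test accuracy: ", np.mean(nn_labels_test == y_test))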
###Output
_____no_output_____ |
notebooks/ThinLens_ZernikeWFE_and_ParameterizedWFE.ipynb | ###Markdown
Using `poppy`'s ThinLens, ZernikeWFE, and ParameterizedWFE classesThis notebook will show you three different ways to introduce defocus in your model optical system, as well as some of the additional flexibility afforded by the `ZernikeWFE` and `ParameterizedWFE` classes.First off, we import `poppy` and define some useful constants. We're going to use 460 nm light through a 1 meter circular aperture.
###Code
import numpy as np # used below (e.g., np.sqrt in the ZernikeWFE normalization)
import matplotlib.pyplot as plt # used below for figure creation and layout
import poppy
poppy.__version__
RADIUS = 1.0 # meters
WAVELENGTH = 460e-9 # meters
PIXSCALE = 0.01 # arcsec / pix
FOV = 1 # arcsec
NWAVES = 1.0
###Output
_____no_output_____
###Markdown
Visualizing the PSF without any defocusThis is just about the simplest optical system we can make. Light illuminates a circular pupil, and is imaged onto a detector.
###Code
osys = poppy.OpticalSystem()
circular_aperture = poppy.CircularAperture(radius=RADIUS)
osys.add_pupil(circular_aperture)
osys.add_detector(pixelscale=PIXSCALE, fov_arcsec=FOV)
plt.figure(figsize=(8, 8))
psf = osys.calc_psf(wavelength=WAVELENGTH, display_intermediates=True)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Ahh, a nice Airy function. This is a monochromatic PSF at `WAVELENGTH` (460 nm).The `ThinLens` optic lets us introduce defocus specified as number of waves. One wave of defocus means that the maximum of the Airy disk becomes a minimum, with a lot of intensity pushed out into a "donut" around the center of the PSF. Adding a Thin LensLet's add a `ThinLens` in the code to create our optical system. We're going to use 1 wave of defocus, and the same reference wavelength as we're using to calculate our monochromatic psf.
###Code
osys = poppy.OpticalSystem()
circular_aperture = poppy.CircularAperture(radius=RADIUS)
osys.add_pupil(circular_aperture)
thinlens = poppy.ThinLens(nwaves=NWAVES, reference_wavelength=WAVELENGTH, radius=RADIUS)
osys.add_pupil(thinlens)
osys.add_detector(pixelscale=PIXSCALE, fov_arcsec=FOV)
plt.figure(figsize=(8, 8))
psf = osys.calc_psf(wavelength=WAVELENGTH, display_intermediates=True)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Introducing defocus is just one type of aberration you might want to model. `ThinLens` is a separate class because it allows you to specify defocus in waves relative to a reference wavelength rather than RMS wavefront error. Both techniques are useful, but the specifications for JWST NIRCam are delivered in such a way that it makes sense to implement `ThinLens` in this way. (Just one artifact of POPPY's connection to JWST!)Let's get familiar with `ThinLens`'s big brother, `ZernikeWFE`. Reproducing the ThinLens behavior with the ZernikeWFEZernikeWFE lets us specify a sequence of scaling coefficients for the Zernike basis functions, which are then summed to make a model optical element in our `OpticalSystem` with that behavior. The sequence corresponds to the [Noll indexing convention](https://en.wikipedia.org/wiki/Zernike_polynomialsZernike_polynomials) for 1-D Zernike polynomial indices. The first (or "zeroth") element of the sequence is the coefficient for $Z_{j=1}$, the second for $Z_{j=2}$, and so on.The Noll index for the defocus term, $Z_2^0$, is $Z_{j=4}$.Whereas `ThinLens` uses a number of waves, the scaling coefficients for `ZernikeWFE` are with respect to the normalized RMS wavefront error of 1.0 meters. That would be a huge optical path difference, so coefficients will typically be on the order of the wavelength (expressed in meters).The normalization of ZernikeWFE introduces a factor of $2 \sqrt{3}$, so we calculate our coefficient as:$$k = \frac{\mathrm{N_{waves}} \lambda}{2\sqrt{3}} .$$
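As a quick numeric check with the values used in this notebook (one wave of defocus at 460 nm): $k = \frac{1 \times 460\,\mathrm{nm}}{2\sqrt{3}} \approx 132.8\,\mathrm{nm} \approx 1.33 \times 10^{-7}\,\mathrm{m}$, which is exactly the `defocus_coefficient` computed in the next cell.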
###Code
defocus_coefficient = NWAVES * WAVELENGTH / (2 * np.sqrt(3))
coefficients_sequence = [0, 0, 0, defocus_coefficient]
osys = poppy.OpticalSystem()
circular_aperture = poppy.CircularAperture(radius=RADIUS)
osys.add_pupil(circular_aperture)
thinlens = poppy.ZernikeWFE(radius=RADIUS, coefficients=coefficients_sequence)
osys.add_pupil(thinlens)
osys.add_detector(pixelscale=PIXSCALE, fov_arcsec=FOV)
plt.figure(figsize=(8,8))
psf_with_zernikewfe = osys.calc_psf(wavelength=WAVELENGTH, display_intermediates=True)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Compare the two PSFsTo ensure we've got agreement between the two methods, `poppy.display_psf_difference` will show any discrepancies.
###Code
poppy.display_psf_difference(psf, psf_with_zernikewfe, title='ThinLens vs. ZernikeWFE')
###Output
_____no_output_____
###Markdown
Adding some tilt and astigmatism
###Code
coefficients_sequence = [0, 0, 2e-7, defocus_coefficient, 0, 3e-8]
osys = poppy.OpticalSystem("Testing Thin Lens w/ Zernike Module")
circular_aperture = poppy.CircularAperture(radius=RADIUS)
osys.add_pupil(circular_aperture)
thinlens = poppy.ZernikeWFE(radius=RADIUS, coefficients=coefficients_sequence)
osys.add_pupil(thinlens)
osys.add_detector(pixelscale=PIXSCALE, fov_arcsec=FOV)
plt.figure(figsize=(8,8))
psf_with_astigmatism = osys.calc_psf(wavelength=WAVELENGTH, display_intermediates=True)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Can we accomplish the same thing with `ParameterizedWFE`?`ParameterizedWFE` lets us specify optical aberrations in terms of a linear combination of basis functions evaluated over the pupil. This is more general than the `ZernikeWFE`, which specifies that you must use the Zernike basis functions to represent the distortion, but we can use `ParameterizedWFE` in an equivalent way if we wish.To specify which basis we want, we supply a `basis_factory` argument. This is a callable (e.g. a function) that gets keyword arguments `nterms` and `npix`, and returns an `nterms` by `npix` by `npix` array containing the first `nterms` terms evaluated over a pupil circumscribed by a circle of diameter `npix`.Two useful basis functions are provided in `poppy.zernike`: `zernike_basis` and `hexike_basis`. The `zernike_basis` allows us to provide equivalent functionality to `ZernikeWFE`, if we wish. Here's what that would look like:
###Code
osys = poppy.OpticalSystem()
circular_aperture = poppy.CircularAperture(radius=RADIUS)
osys.add_pupil(circular_aperture)
thinlens = poppy.ParameterizedWFE(
coefficients=coefficients_sequence,
basis_factory=poppy.zernike.zernike_basis,
radius=RADIUS
)
osys.add_pupil(thinlens)
osys.add_detector(pixelscale=PIXSCALE, fov_arcsec=FOV)
plt.figure(figsize=(8,8))
psf_with_parameterizedwfe = osys.calc_psf(wavelength=WAVELENGTH, display_intermediates=True)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
What else is `ParameterizedWFE` good for?The ability to specify `basis_factory` means that we're not limited to Zernike polynomials. Suppose we have a telescope with a hexagonal pupil? The correct way to specify Zernike-like aberrations in an orthonormal basis on the unit hexagon is with "hexikes", a modified Zernike basis.Hexikes are computed by `poppy.zernike.hexike_basis`, which we pass in (along with the same coefficients as before) to get an idea of how the hexagon aperture changes things:
###Code
osys = poppy.OpticalSystem()
hex_aperture = poppy.HexagonAperture(side=RADIUS)
osys.add_pupil(hex_aperture)
thinlens = poppy.ParameterizedWFE(
coefficients=coefficients_sequence,
basis_factory=poppy.zernike.hexike_basis,
radius=RADIUS
)
osys.add_pupil(thinlens)
osys.add_detector(pixelscale=PIXSCALE, fov_arcsec=FOV)
plt.figure(figsize=(8,8))
psf_with_hexikes = osys.calc_psf(wavelength=WAVELENGTH, display_intermediates=True)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Scriptability of `ZernikeWFE`The API for `ZernikeWFE` also lends itself well to generating coefficients programmatically and passing it in. Say we have an error budget where we know the following about the RMS wavefront error in the Zernike components: * **Piston**, *j=1* — disregarded for a telescope * **Tilt X**, *j=2* — $\pm$ 100 nm * **Tilt Y**, *j=3* — $\pm$ 100 nm * **Focus**, *j=4* — $\pm$ 50 nm * **Astigmatism 45**, *j=5* — $\pm$ 36 nm * **Astigmatism 0**, *j=6* — $\pm$ 36 nmWe can use `ZernikeWFE` to generate a library of sample PSFs satisfying this error budget. First, we write a short function that can generate coefficients from our specifications.
###Code
wfe_budget = [0, 100, 100, 50, 36, 36]
def generate_coefficients(wfe_budget):
coefficients = []
for term in wfe_budget:
coefficients.append(
np.random.uniform(low=-1e-9 * term, high=1e-9 * term) # convert nm to meters, get value in range
)
return coefficients
###Output
_____no_output_____
###Markdown
Now we use this to generate a few sets of coefficients.
###Code
possible_coefficients = [generate_coefficients(wfe_budget) for i in range(5)]
plt.figure(figsize=(18,2))
for idx, coefficient_set in enumerate(possible_coefficients, start=1):
plt.subplot(1, 5, idx)
osys = poppy.OpticalSystem()
hex_aperture = poppy.CircularAperture(radius=RADIUS)
osys.add_pupil(hex_aperture)
thinlens = poppy.ZernikeWFE(
coefficients=coefficient_set,
radius=RADIUS
)
osys.add_pupil(thinlens)
osys.add_detector(pixelscale=PIXSCALE, fov_arcsec=FOV)
psf = osys.calc_psf(wavelength=WAVELENGTH, display=False)
poppy.display_psf(psf, title="PSF #{}".format(idx))
###Output
_____no_output_____ |
nbs/02_cli.ipynb | ###Markdown
Game> This module contains all of the functions for defining the game loop and logic in `wizardry`.
###Code
# export
from wizardry.game import init_player, game_loop
from fastcore.script import call_parse, Param
# export
@call_parse
def play(n: Param("The number of enemies you want to encounter.", int) = 10):
player = init_player()
game_loop(player, n)
# hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_characters.ipynb.
Converted 01_game.ipynb.
Converted 02_cli.ipynb.
Converted index.ipynb.
###Markdown
CLI tools> Command line tools
###Code
#export
from pathlib import Path
from fastscript import call_parse, Param
import warnings
from time import sleep
from geoget.download import *
#hide
from nbdev.showdoc import *
from nbdev.export import notebook2script
from IPython.core.debugger import set_trace
#export
_repChoices=['ALBERS', 'GEO', 'LAMAZ', 'MERCAT', 'PS', 'ROBIN', 'SNSOID', 'TM', 'UTM']
@call_parse
def geoget_ladsweb(
product:Param("Name of the product", str),
collection:Param("Collection number", str),
tstart:Param("Start of search window yyyy-mm-dd HH:MM:SS", str),
tend:Param("End of search window yyyy-mm-dd HH:MM:SS", str),
bbox:Param("Bounding box in format left bottom right top", list),
path_save:Param("Path to save the outputs of the request", str),
bands:Param("List of bands to download", list),
coordsOrTiles:Param("coordsOrTiles parameter", str, choices=["coords", "tiles"])="coords",
daynight:Param("Select images for Day, Night or both", str, choices=['D', 'N', 'DNB'])="DNB",
repName:Param("Reprojection type", str, choices=_repChoices)='GEO',
repPixSize:Param("Pixel size in units depending on the reprojection type", float)=0.01,
repResample:Param("Resampling method", str, choices=['bilinear', 'nearest'])='bilinear',
doMosaic:Param("",str)='False'):
bbox = [int(s) for s in ''.join(bbox).split(' ')]
bands = ''.join(bands).split(' ')
kwargs = {key: value for key, value in locals().items()}
kwargs['bands'] = bands
kwargs['bbox'] = bbox
lads = Ladsweb(**kwargs)
lads_list = lads.split_times()
print(f'Splitting request into {len(lads_list)} orders.')
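    # NOTE: `email` and `auth` (the LAADS DAAC account e-mail and authentication token)
    # are assumed to be defined elsewhere, e.g. at module level; they are not parameters
    # of this function. See the usage notes below.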
run_parallel(lads_list, path_save, email, auth)
###Output
_____no_output_____
###Markdown
Here is an example of a .bash file to download data using `geoget_ladsweb`. The `email` and authentication token (`auth`) need to be defined in order for the script to work. To create an account and an authentication token visit https://ladsweb.modaps.eosdis.nasa.gov/. ```bash!/bin/bash -l bbox='-10 36 0 44'product="NPP_VMAES_L1"collection="5000"tstart="2017-10-27 00:00:00"tend='2017-10-27 23:59:59'path_save="/srv/geoget/data"bands="Reflectance_M5 Reflectance_M7 Reflectance_M10 Radiance_M12 Radiance_M15 SolarZenithAngle SatelliteZenithAngle"geoget_ladsweb $product $collection "$tstart" "$tend" "$bbox" $path_save "$bands" --repName "GEO" --repPixSize "0.01" --daynight "D"```
###Code
# export
@call_parse
def geoget_order_manager(path_save:Param("Path where log file is saved.", str)):
return order_manager(path_save)
###Output
_____no_output_____
###Markdown
```bash!/bin/bash -l path_save="/srv/geoget/data"geoget_ladsweb $path_save```
###Code
#hide
notebook2script()
###Output
Converted 00_external.ipynb.
Converted 01_download.ipynb.
Converted 02_cli.ipynb.
Converted index.ipynb.
###Markdown
CLI> API details.
###Code
#hide
from nbdev.showdoc import *
# export
import call_graph
import git
import os
from fastcore.script import *
from tree_sitter import Language
CALL_GRAPH_PATH = call_graph.__path__[0]
# export
@call_parse
def build_languages():
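    # Clone the Python, Java and C++ tree-sitter grammars (if not already present)
    # and compile them into build/my-languages.so inside the call_graph package directory.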
path = CALL_GRAPH_PATH
folder_path = os.path.join(path, "vendor")
if not os.path.exists(folder_path):
os.mkdir(folder_path)
repository_path = os.path.join(folder_path, "tree-sitter-python")
if not os.path.exists(repository_path):
git.Repo.clone_from("https://github.com/tree-sitter/tree-sitter-python", repository_path)
repository_path = os.path.join(folder_path,"tree-sitter-java")
if not os.path.exists(repository_path):
git.Repo.clone_from("https://github.com/tree-sitter/tree-sitter-java", repository_path)
repository_path = os.path.join(folder_path,"tree-sitter-cpp")
if not os.path.exists(repository_path):
git.Repo.clone_from("https://github.com/tree-sitter/tree-sitter-cpp", repository_path)
Language.build_library(
os.path.join(path, 'build/my-languages.so'),
[
os.path.join(path, 'vendor/tree-sitter-python'),
os.path.join(path, 'vendor/tree-sitter-java'),
os.path.join(path, 'vendor/tree-sitter-cpp')
]
)
# hide
from nbdev.export import notebook2script; notebook2script()
###Output
Converted 00_parsers.ipynb.
Converted 01_graph_generator.ipynb.
Converted 02_cli.ipynb.
Converted index.ipynb.
|
jupyter/accounts.ipynb | ###Markdown
Accounts[index](./index.ipynb) |[balances](./balances.ipynb) |[instruments](./instrumentlookup.ipynb) |[trading](./trading.ipynb)
###Code
from saxo_openapi import API
import saxo_openapi.endpoints.portfolio as pf
from pprint import pprint
import juputil
token = juputil.read_token()
client = API(access_token=token)
###Output
_____no_output_____
###Markdown
Get some account details
###Code
r = pf.accounts.AccountsMe()
rv = client.request(r)
pprint(rv)
juputil.request_info(r)
# process all accounts in Data[]
for acct in rv['Data']:
for k in ['AccountId', 'AccountKey', 'AccountGroupKey', 'ClientId', 'ClientKey']:
print("{:<20s} : {:s}".format(k, acct[k]))
###Output
API-endpoint : openapi/port/v1/accounts/me
METHOD : GET
Response status: 200
AccountId : 9300675
AccountKey : fOA0tvOyQqW2aHpWi9P5bw==
AccountGroupKey : fOA0tvOyQqW2aHpWi9P5bw==
ClientId : 9300675
ClientKey : fOA0tvOyQqW2aHpWi9P5bw==
###Markdown
Get details by the **AccountId**
###Code
# Save the AccountKey from prior request
AccountKey = rv['Data'][0]['AccountKey']
# ... and initiate another request
r = pf.accounts.AccountDetails(AccountKey=AccountKey)
rv = client.request(r)
pprint(rv)
###Output
{'AccountGroupKey': 'fOA0tvOyQqW2aHpWi9P5bw==',
'AccountId': '9300675',
'AccountKey': 'fOA0tvOyQqW2aHpWi9P5bw==',
'AccountType': 'Normal',
'Active': True,
'CanUseCashPositionsAsMarginCollateral': True,
'CfdBorrowingCostsActive': False,
'ClientId': '9300675',
'ClientKey': 'fOA0tvOyQqW2aHpWi9P5bw==',
'CreationDate': '2019-03-11T11:39:00.000000Z',
'Currency': 'EUR',
'CurrencyDecimals': 2,
'DirectMarketAccess': False,
'IndividualMargining': False,
'IsCurrencyConversionAtSettlementTime': True,
'IsMarginTradingAllowed': True,
'IsShareable': False,
'IsTrialAccount': True,
'LegalAssetTypes': ['FxSpot',
'FxForwards',
'FxVanillaOption',
'ContractFutures',
'FuturesOption',
'Stock',
'StockOption',
'CfdOnStock',
'Bond',
'MutualFund',
'CfdOnFutures',
'FxKnockInOption',
'FxKnockOutOption',
'FxOneTouchOption',
'FxNoTouchOption',
'StockIndexOption',
'FuturesStrategy',
'CfdOnIndex',
'StockIndex'],
'Sharing': ['NoSharing'],
'SupportsAccountValueProtectionLimit': False,
'UseCashPositionsAsMarginCollateral': True}
###Markdown
Account informationA lot of endpoints require the *AccountKey* or *ClientKey*. The *saxo_openapi.contrib.session* module providesa function *account_info()* to fetch some key properties as a *named tuple*.
###Code
from saxo_openapi.contrib.session import account_info
ai = account_info(client)
print(ai)
# or by property
print("\nAccountKey: ", ai.AccountKey)
###Output
AcctInfo(ClientId='9300675', ClientKey='fOA0tvOyQqW2aHpWi9P5bw==', AccountId='9300675', AccountKey='fOA0tvOyQqW2aHpWi9P5bw==')
AccountKey: fOA0tvOyQqW2aHpWi9P5bw==
###Markdown
[index](./index.ipynb) | [accounts](./accounts.ipynb) | [orders](./orders.ipynb) | [trades](./trades.ipynb) | [positions](./positions.ipynb) | [historical](./historical.ipynb) | [streams](./streams.ipynb) | [errors](./exceptions.ipynb) Account endpoints Example: fetch account informationA simple example to retrieve the accounts belonging to a *token*:
###Code
import json
import API.oandapyV20
from API.oandapyV20.endpoints import accounts
from API.authenticate import Authenticate as auth
accountID, access_token = auth('Demo', 'API Simulator')
client = API.oandapyV20.API(access_token=access_token)
r = accounts.AccountList()
response = client.request(r)
print(json.dumps(response, indent=2))
print(accountID, access_token)
###Output
{
"accounts": [
{
"id": "101-001-17385496-001",
"tags": []
},
{
"id": "101-001-17385496-002",
"tags": []
}
]
}
101-001-17385496-002 e82204b2ade9ebde07e3e3a558d19cf6-366da980384779ccac74c6a5b1975e05
###Markdown
Request detailsLets get some details from the request itself after the client performed the request:
###Code
print("API-path: " , r)
print("METHOD: ", r.method)
print("Response status: ", r.status_code)
print("The account id's: ", [acc.get('id') for acc in r.response.get('accounts')])
###Output
API-path: v3/accounts
METHOD: GET
Response status: 200
The account id's: ['101-001-17385496-001', '101-001-17385496-002']
###Markdown
[index](./index.ipynb) | [accounts](./accounts.ipynb) | [orders](./orders.ipynb) | [trades](./trades.ipynb) | [positions](./positions.ipynb) | [historical](./historical.ipynb) | [streams](./streams.ipynb) | [errors](./exceptions.ipynb) Account endpoints Example: fetch account informationA simple example to retrieve the accounts belonging to a *token*:
###Code
import json
import oandapyV20
import oandapyV20.endpoints.accounts as accounts
from exampleauth import exampleauth
accountID, access_token = exampleauth.exampleAuth()
client = oandapyV20.API(access_token=access_token)
r = accounts.AccountList()
response = client.request(r)
print(json.dumps(response, indent=2))
###Output
{
"accounts": [
{
"tags": [],
"id": "101-004-1435156-002"
},
{
"tags": [],
"id": "101-004-1435156-001"
}
]
}
###Markdown
Request detailsLets get some details from the request itself after the client performed the request:
###Code
print("API-path: " , r)
print("METHOD: ", r.method)
print("Response status: ", r.status_code)
print("The account id's: ", [acc.get('id') for acc in r.response.get('accounts')])
###Output
API-path: v3/accounts
METHOD: GET
Response status: 200
The account id's: ['101-004-1435156-002', '101-004-1435156-001']
|
assignments/assignment_yourname_class9.ipynb | ###Markdown
T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 9 Assignment: Transfer Learning****Student Name: Your Name** Assignment InstructionsComing soon, this assignment is being recreated for Fall 2020. Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to ```/content/drive```.
###Code
try:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
COLAB = True
print("Note: using Google CoLab")
%tensorflow_version 2.x
except:
print("Note: not using Google CoLab")
COLAB = False
###Output
_____no_output_____
###Markdown
Assignment Submit Function You will submit the ten programming assignments electronically. The following **submit** function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any underlying problems. **It is unlikely that you should need to modify this function.**
###Code
import base64
import os
import numpy as np
import pandas as pd
import requests
import PIL
import PIL.Image
import io
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The paramaters are as follows:
# data - List of pandas dataframes or images.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
# . The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext))
payload = []
for item in data:
if type(item) is PIL.Image.Image:
buffered = io.BytesIO()
item.save(buffered, format="PNG")
payload.append({'PNG':base64.b64encode(buffered.getvalue()).decode('ascii')})
elif type(item) is pd.core.frame.DataFrame:
payload.append({'CSV':base64.b64encode(item.to_csv(index=False).encode('ascii')).decode("ascii")})
r= requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={ 'payload': payload,'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code==200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
###Output
_____no_output_____
###Markdown
Assignment 9 Sample CodeThe following code provides a starting point for this assignment.
###Code
import os
import pandas as pd
from scipy.stats import zscore
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.models import load_model
import pandas as pd
import io
import requests
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
import sklearn
from sklearn.linear_model import Lasso
# This is your student key that I emailed to you at the beginnning of the semester.
key = "PPboscDL2M94HCbkbvfOLakXXNy3dh5x2VV1Mlpm" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
# file='/content/drive/My Drive/Colab Notebooks/assignment_yourname_class9.ipynb' # Google CoLab
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class9.ipynb' # Windows
file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class9.ipynb' # Mac/Linux
# Begin assignment
model = load_model("/Users/jheaton/Downloads/transfer_9.h5") # modify to where you stored it
df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/datasets/transfer_data.csv")
submit(source_file=file,data=df_submit,key=key,no=9)
###Output
_____no_output_____
###Markdown
T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 9 Assignment: Kaggle Submission****Student Name: Your Name** Assignment InstructionsFor this assignment, you will begin by loading a pre-trained neural network that I provide here: [transfer_9.h5](https://data.heatonresearch.com/data/t81-558/networks/transfer_9.h5). You will demonstrate your ability to transfer several layers from this neural network to create a new neural network to be used for feature engineering.The **transfer_9.h5** neural network is composed of the following four layers:```Model: "sequential_7"_________________________________________________________________Layer (type) Output Shape Param =================================================================dense_11 (Dense) (None, 25) 225 _________________________________________________________________dense_12 (Dense) (None, 10) 260 _________________________________________________________________dense_13 (Dense) (None, 3) 33 _________________________________________________________________dense_14 (Dense) (None, 1) 4 =================================================================Total params: 522Trainable params: 522Non-trainable params: 0```You should only use the first three layers. The final dense layer should be removed, exposing the (None, 3) shaped layer as the new output layer. This neuron layer has three neurons. The output from these three layers will become your three engineered features. Complete the following tasks:* Load the Keras neural network **transfer_9.h5**. Note that you will need to download it to either your hard drive or GDrive (if you're using Google CoLab). Keras does not allow the loading of a neural network across HTTP.* Create a new neural network with only the first three layers, drop the (None, 1) shaped layer.* Load the dataset [transfer_data.csv](https://data.heatonresearch.com/data/t81-558/datasets/transfer_data.csv). * Use all columns as input, but do not use *id* as input. You will need to save the *id* column to build your submission.* Do not z-score or transform the input columns.* Submit the output from the (None, 3) shaped layer, along with the corresponding *id* column. The three output neurons should create columns named *a*, *b*, and *c*.The submit file will look something like:|id|a|b|c||-|-|-|-||1|2.3602087|1.4411213|0||2|0.067718446|1.0037427|0.52129996||3|0.74778837|1.0647631|0.052594826||4|1.0594225|1.1211816|0||...|...|...|...| Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to ```/content/drive```.
###Code
try:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
COLAB = True
print("Note: using Google CoLab")
%tensorflow_version 2.x
except:
print("Note: not using Google CoLab")
COLAB = False
###Output
_____no_output_____
###Markdown
Assignment Submit Function You will submit the ten programming assignments electronically. The following **submit** function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any underlying problems. **It is unlikely that you should need to modify this function.**
###Code
import base64
import os
import numpy as np
import pandas as pd
import requests
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The paramaters are as follows:
# data - Pandas dataframe output.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
# . The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext))
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"),
'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code == 200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
###Output
_____no_output_____
###Markdown
Assignment 9 Sample CodeThe following code provides a starting point for this assignment.
###Code
import os
import pandas as pd
from scipy.stats import zscore
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.models import load_model
import pandas as pd
import io
import requests
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
import sklearn
from sklearn.linear_model import Lasso
# This is your student key that I emailed to you at the beginnning of the semester.
key = "PPboscDL2M94HCbkbvfOLakXXNy3dh5x2VV1Mlpm" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
# file='/content/drive/My Drive/Colab Notebooks/assignment_yourname_class9.ipynb' # Google CoLab
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class9.ipynb' # Windows
file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class9.ipynb' # Mac/Linux
# Begin assignment
model = load_model("/Users/jheaton/Downloads/transfer_9.h5") # modify to where you stored it
df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/datasets/transfer_data.csv")
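# Left for the student: run df (all columns except 'id') through a model built from the
# first three layers of the loaded network, then assemble df_submit with columns id, a, b, c.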
submit(source_file=file,data=df_submit,key=key,no=9)
###Output
_____no_output_____
###Markdown
T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 9 Assignment: Kaggle Submission****Student Name: Your Name** Assignment InstructionsFor this assignment you will begin by loading a pretrained neural network that I provide here: [transfer_9.h5](https://data.heatonresearch.com/data/t81-558/networks/transfer_9.h5). You will demonstrate your ability to transfer several layers from this neural network to create a new neural network to be used for feature engineering.The **transfer_9.h5** neural network is composed of the following four layers:```Model: "sequential_7"_________________________________________________________________Layer (type) Output Shape Param =================================================================dense_11 (Dense) (None, 25) 225 _________________________________________________________________dense_12 (Dense) (None, 10) 260 _________________________________________________________________dense_13 (Dense) (None, 3) 33 _________________________________________________________________dense_14 (Dense) (None, 1) 4 =================================================================Total params: 522Trainable params: 522Non-trainable params: 0```You should only use the first three layers. The final dense layer should be removed, exposing the (None, 3) shaped layer as the new output layer. This is a 3-neuron layer. The output from these 3 layers will become your 3 engineered features. Complete the following tasks:* Load the Keras neural network **transfer_9.h5**. Note that you will need to download it to either your hard drive or GDrive (if you're using Google CoLab). Keras does not allow loading of a neural network across HTTP.* Create a new neural network with only the first 3 layers, drop the (None, 1) shaped layer.* Load the dataset [transfer_data.csv](https://data.heatonresearch.com/data/t81-558/datasets/transfer_data.csv). * Use all columns as input, but do not use *id* as input. You will need to save the *id* column to build your submission.* Do not zscore or transform the input columns.* Submit the output from the (None, 3) shaped layer, along with the corresponding *id* column. The three output neurons should create columns named *a*, *b*, and *c*.The submit file will look something like:|id|a|b|c||-|-|-|-||1|2.3602087|1.4411213|0||2|0.067718446|1.0037427|0.52129996||3|0.74778837|1.0647631|0.052594826||4|1.0594225|1.1211816|0||...|...|...|...| Assignment Submit FunctionYou will submit the 10 programming assignments electronically. The following submit function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any basic problems. **It is unlikely that should need to modify this function.**
###Code
import base64
import os
import numpy as np
import pandas as pd
import requests
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The paramaters are as follows:
# data - Pandas dataframe output.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
# . The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext))
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"),
'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code == 200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
###Output
_____no_output_____
###Markdown
Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to /content/drive.
###Code
from google.colab import drive
drive.mount('/content/drive')
!ls /content/drive/My\ Drive/Colab\ Notebooks
###Output
_____no_output_____
###Markdown
Assignment 9 Sample CodeThe following code provides a starting point for this assignment.
###Code
import os
import pandas as pd
from scipy.stats import zscore
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.models import load_model
import pandas as pd
import io
import requests
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
import sklearn
from sklearn.linear_model import Lasso
# This is your student key that I emailed to you at the beginnning of the semester.
key = "PPboscDL2M94HCbkbvfOLakXXNy3dh5x2VV1Mlpm" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
# file='/content/drive/My Drive/Colab Notebooks/assignment_yourname_class9.ipynb' # Google CoLab
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class9.ipynb' # Windows
file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class9.ipynb' # Mac/Linux
# Begin assignment
model = load_model("/Users/jheaton/Downloads/transfer_9.h5") # modify to where you stored it
df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/datasets/transfer_data.csv")
submit(source_file=file,data=df_submit,key=key,no=9)
###Output
_____no_output_____
###Markdown
T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 9 Assignment: Exploring Regularization****Student Name: Your Name** Assignment InstructionsFor this assignment you will use the **regu-46-spring-2018.csv** dataset. This is a dataset that I generated specifically for this semester. You can find the CSV file in the **data** directory of the class GitHub repository here: [regu-46-spring-2018.csv](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/data/regu-46-spring-2018.csv).You will fit/train a Lasso (L1) linear regression (use Lasso(alpha=0.1)), as described in Class 8. You will submit the coefficients for each of the predictors. The predictors are named x1, x2, x3, etc. The target/y is named *target*. You will submit these coefficients to the **submit** function. See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Some of the predictors are not important and you will see that the L1 regression assigns their coefficients to zero. Complete the following tasks:* No need to normalize all numerics to zscores and all text/categoricals to dummies. Do not normalize the *target*.* fit an L1 regression.* No need to cross validate.* Your submission should contain the input nane (column name *name*), and your coefficient (column name *coef*). * Your submission dataset will be similar in structure to:name | coef-----|-----id | 9.7631254902808e-06x1 | -0.0x2 | 0.3968072235584259x3 | -0.0004428522370290011x4 | 0.7910792827606201x5 | 0.003930636215955019x6 | -0.005123197101056576 Helpful FunctionsYou will see these at the top of every module and assignment. These are simply a set of reusable functions that we will make use of. Each of them will be explained as the semester progresses. They are explained in greater detail as the course progresses. Class 4 contains a complete overview of these functions.
###Code
from sklearn import preprocessing
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import shutil
import os
import requests
import base64
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df, name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = "{}-{}".format(name, x)
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1
# at every location where the original column (name) matches each of the target_values. One column is added for
# each target value.
def encode_text_single_dummy(df, name, target_values):
for tv in target_values:
l = list(df[name].astype(str))
l = [1 if str(x) == str(tv) else 0 for x in l]
name2 = "{}-{}".format(name, tv)
df[name2] = l
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df, name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
# Encode a numeric column as zscores
def encode_numeric_zscore(df, name, mean=None, sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name] - mean) / sd
# Convert all missing values in the specified column to the median
def missing_median(df, name):
med = df[name].median()
df[name] = df[name].fillna(med)
# Convert all missing values in the specified column to the default
def missing_default(df, name, default_value):
df[name] = df[name].fillna(default_value)
# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df, target):
result = []
for x in df.columns:
if x != target:
result.append(x)
# find out the type of the target column. Is it really this hard? :(
target_type = df[target].dtypes
target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type
# Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
if target_type in (np.int64, np.int32):
# Classification
dummies = pd.get_dummies(df[target])
return df.as_matrix(result).astype(np.float32), dummies.as_matrix().astype(np.float32)
else:
# Regression
return df.as_matrix(result).astype(np.float32), df.as_matrix([target]).astype(np.float32)
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
# Regression chart.
def chart_regression(pred,y,sort=True):
t = pd.DataFrame({'pred' : pred, 'y' : y.flatten()})
if sort:
t.sort_values(by=['y'],inplace=True)
a = plt.plot(t['y'].tolist(),label='expected')
b = plt.plot(t['pred'].tolist(),label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
# Remove all rows where the specified column is +/- sd standard deviations
def remove_outliers(df, name, sd):
drop_rows = df.index[(np.abs(df[name] - df[name].mean()) >= (sd * df[name].std()))]
df.drop(drop_rows, axis=0, inplace=True)
# Encode a column to a range between normalized_low and normalized_high.
def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1,
data_low=None, data_high=None):
if data_low is None:
data_low = min(df[name])
data_high = max(df[name])
df[name] = ((df[name] - data_low) / (data_high - data_low)) \
* (normalized_high - normalized_low) + normalized_low
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The paramaters are as follows:
# data - Pandas dataframe output.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
# . The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when running in a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {}; it must be .py or .ipynb".format(ext))
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"),
'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code == 200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
###Output
_____no_output_____
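A minimal usage sketch of the helpers above, on an invented toy DataFrame (the column names and values below are made up purely for illustration):
```
import numpy as np
import pandas as pd

df = pd.DataFrame({'color': ['red', 'green', 'blue', 'green'],
                   'width': [1.0, 2.0, np.nan, 4.0],
                   'target': [0.5, 1.5, 2.5, 3.5]})

missing_median(df, 'width')         # fill the NaN with the column median (2.0)
encode_text_dummy(df, 'color')      # replace 'color' with one dummy column per color
encode_numeric_zscore(df, 'width')  # z-score the numeric column
x, y = to_xy(df, 'target')          # float32 arrays ready for TensorFlow
```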
###Markdown
Assignment 9 Sample CodeThe following code provides a starting point for this assignment.
###Code
# This is your student key that I emailed to you at the beginnning of the semester.
key = "qgABjW9GKV1vvFSQNxZW9akByENTpTAo2T9qOjmh" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
# file='/resources/t81_558_deep_learning/assignment_yourname_class8.ipynb' # IBM Data Science Workbench
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\t81_558_class8_intro_python.ipynb' # Windows
file='/Users/jeff/projects/t81_558_deep_learning/assignment_yourname_class9.ipynb' # Mac/Linux
# Begin assignment
path = "./data/"
filename_read = os.path.join(path,"regu-46-spring-2018.csv")
df = pd.read_csv(filename_read)
submitDF = pd.DataFrame()
submit(source_file=file,data=submitDF,key=key,no=9)
###Output
_____no_output_____
###Markdown
T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 9 Assignment: Transfer Learning****Student Name: Your Name** Assignment InstructionsThis assignment gives you the chance to explore some of the most advanced pretrained networks available. Keras comes with around 20 pretrained neural networks built-in. You can use these networks right out of the box without modification or extend these networks through transfer learning. For this assignment, I will show you how you can explore these networks and examine their structure. This technique can be a great learning aid to see the structure of some of the most advanced neural networks.To create one of the pretrained neural networks in Keras use the **blah** package. For example, you can create the **Xception** neural network with the following command:```net = tf.keras.applications.Xception()```To see the neural network structure issue the **summary** command:```net.summary()```The **dir** command will tell you what methods and properties are available for the neural network. You will use these functions to extract data from this structure. For example, to see the first layer:```net.layers[0]```To see what type the first layer is:```type(net.layers[0])```To see the internals of that layer:```dir(net.layers[0])```Use these sort of commands to build a table that looks similar to this:|name|input|output|layers|max_layer_wgt|wgt_count||---|---|---|---|---|---||Xception|299 x 299 x 3|1000|134|3.0M|21.8M|VGG16|224 x 224 x 3|1000|23|98.0M|131.9M|VGG19|224 x 224 x 3|1000|26|98.0M|137.0M|...|...|...|...|...|...The meanings of these columns are:* **name** - The name of the network.* **input** - The count/structure of input neurons.* **output** - The count/structure of output neurons.* **layers** - The count of layers.* **max_layer_wgt** - The maximum number of weights in any layer. (as a string)* **wgt_count** - The total count of weights. (as a string)Note, that I do request you to output weight counts a string, such as 10M. I provide a helper function for this. Also note, that I do request the input structure, such as 128 x 128 x 3. You should create a helper function of your own to format this output.Report on the following pretrained neural networks:* Xception* VGG16* VGG19* ResNet50* ResNet101* ResNet152* ResNet50V2* ResNet101V2* ResNet152V2* InceptionV3* InceptionResNetV2* MobileNet* MobileNetV2* DenseNet121* DenseNet169* DenseNet201* NASNetMobile* NASNetLarge* EfficientNetB0* EfficientNetB1* EfficientNetB2* EfficientNetB3* EfficientNetB4* EfficientNetB5* EfficientNetB6* EfficientNetB7 Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to ```/content/drive```.
###Code
try:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
COLAB = True
print("Note: using Google CoLab")
%tensorflow_version 2.x
except:
print("Note: not using Google CoLab")
COLAB = False
###Output
_____no_output_____
###Markdown
Assignment Submit FunctionYou will submit the ten programming assignments electronically. The following **submit** function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any underlying problems. **It is unlikely that you should need to modify this function.**
###Code
import base64
import os
import numpy as np
import pandas as pd
import requests
import PIL
import PIL.Image
import io
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The parameters are as follows:
# data - List of pandas dataframes or images.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
# . The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when running in a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {}; it must be .py or .ipynb".format(ext))
payload = []
for item in data:
if type(item) is PIL.Image.Image:
buffered = io.BytesIO()
item.save(buffered, format="PNG")
payload.append({'PNG':base64.b64encode(buffered.getvalue()).decode('ascii')})
elif type(item) is pd.core.frame.DataFrame:
payload.append({'CSV':base64.b64encode(item.to_csv(index=False).encode('ascii')).decode("ascii")})
r= requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={ 'payload': payload,'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code==200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
###Output
_____no_output_____
###Markdown
Assignment 9 Sample CodeThe following code provides a starting point for this assignment.
###Code
import os
import pandas as pd
from scipy.stats import zscore
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.models import load_model
import pandas as pd
import io
import requests
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
import sklearn
from sklearn.linear_model import Lasso
# This is your student key that I emailed to you at the beginnning of the semester.
key = "PPboscDL2djekrbkbvfOLakXXNy3dh5x2VV1Mlpm" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
file='/content/drive/My Drive/Colab Notebooks/new_assignment_yourname_class9.ipynb' # Google CoLab
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class9.ipynb' # Windows
#file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class9.ipynb' # Mac/Linux
import numpy as np
import pandas as pd
import tensorflow as tf
lst_names = []
lst_input_count = []
lst_all_weights = []
lst_max_weights = []
lst_input = []
lst_output = []
lst_layers = []
lst_sort = []
# This function is based on the following:
# https://stackoverflow.com/questions/1094841/reusable-library-to-get-human-readable-version-of-file-size
def sizeof_fmt(num, suffix='B'):
for unit in ['','K','M','G','T','P','E','Z']:
if abs(num) < 1024.0:
return "%3.1f%s" % (num, unit)
num /= 1024.0
return "%.1f%s" % (num, 'Y')
def process_network(name,net):
pass
# Add code here
process_network("Xception", tf.keras.applications.Xception())
process_network("VGG16", tf.keras.applications.VGG16())
process_network("VGG19", tf.keras.applications.VGG19())
# Add code here
df = pd.DataFrame()
df['name'] = lst_names
df['input'] = lst_input
df['output'] = lst_output
df['layers'] = lst_layers
df['max_layer_wgt'] = lst_max_weights
df['wgt_count'] = lst_all_weights
submit(source_file=file,data=[df],key="y75zXVg7BSaB9FrVznQCA3dSLcKmY1Rp8h00I1QS",no=9)
###Output
_____no_output_____
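As a hedged illustration of where the table's fields can come from (not the assignment solution; `describe_network` is an invented name, and it reuses the `sizeof_fmt` helper defined in the cell above):
```
import tensorflow as tf

def describe_network(name, net):
    per_layer = [layer.count_params() for layer in net.layers]
    print(name,
          net.input_shape,              # e.g. (None, 299, 299, 3)
          net.output_shape,             # e.g. (None, 1000)
          len(net.layers),              # layer count
          sizeof_fmt(max(per_layer)),   # largest single-layer weight count
          sizeof_fmt(sum(per_layer)))   # total weight count

describe_network("Xception", tf.keras.applications.Xception())
```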
###Markdown
T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 9 Assignment: Kaggle Submission****Student Name: Your Name** Assignment InstructionsFor this assignment you will use the **regu-46-spring-2019.csv** dataset. This is a dataset that I generated specifically for this semester. You can find the CSV file on my data site, at this location: [regu-46-spring-2019.csv](http://data.heatonresearch.com/data/t81-558/datasets/regu-46-spring-2019.csv).You will fit/train a Lasso (L1) linear regression (use Lasso(alpha=0.1)), as described in Class 8. You will submit the coefficients for each of the predictors. The predictors are named x1, x2, x3, etc. The target/y is named *target*. You will submit these coefficients to the **submit** function. See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Some of the predictors are not important and you will see that the L1 regression assigns their coefficients to zero. Complete the following tasks:* No need to normalize all numerics to zscores and all text/categoricals to dummies. Do not normalize the *target*.* fit an L1 regression.* No need to cross validate.* Your submission should contain the input nane (column name *name*), and your coefficient (column name *coef*). * Your submission dataset will be similar in structure to:name | coef-----|-----id | 9.7631254902808e-06x1 | -0.0x2 | 0.3968072235584259x3 | -0.0004428522370290011x4 | 0.7910792827606201x5 | 0.003930636215955019x6 | -0.005123197101056576 Helpful FunctionsYou will see these at the top of every module and assignment. These are simply a set of reusable functions that we will make use of. Each of them will be explained as the semester progresses. They are explained in greater detail as the course progresses. Class 4 contains a complete overview of these functions.
###Code
import base64
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
from sklearn import preprocessing
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df, name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = f"{name}-{x}"
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1
# at every location where the original column (name) matches each of the target_values. One column is added for
# each target value.
def encode_text_single_dummy(df, name, target_values):
for tv in target_values:
l = list(df[name].astype(str))
l = [1 if str(x) == str(tv) else 0 for x in l]
name2 = f"{name}-{tv}"
df[name2] = l
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df, name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
# Encode a numeric column as zscores
def encode_numeric_zscore(df, name, mean=None, sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name] - mean) / sd
# Convert all missing values in the specified column to the median
def missing_median(df, name):
med = df[name].median()
df[name] = df[name].fillna(med)
# Convert all missing values in the specified column to the default
def missing_default(df, name, default_value):
df[name] = df[name].fillna(default_value)
# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df, target):
result = []
for x in df.columns:
if x != target:
result.append(x)
# find out the type of the target column. Is it really this hard? :(
target_type = df[target].dtypes
target_type = target_type[0] if hasattr(
target_type, '__iter__') else target_type
# Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
if target_type in (np.int64, np.int32):
# Classification
dummies = pd.get_dummies(df[target])
return df[result].values.astype(np.float32), dummies.values.astype(np.float32)
# Regression
return df[result].values.astype(np.float32), df[[target]].values.astype(np.float32)
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return f"{h}:{m:>02}:{s:>05.2f}"
# Regression chart.
def chart_regression(pred, y, sort=True):
t = pd.DataFrame({'pred': pred, 'y': y.flatten()})
if sort:
t.sort_values(by=['y'], inplace=True)
plt.plot(t['y'].tolist(), label='expected')
plt.plot(t['pred'].tolist(), label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
# Remove all rows where the specified column is +/- sd standard deviations
def remove_outliers(df, name, sd):
drop_rows = df.index[(np.abs(df[name] - df[name].mean())
>= (sd * df[name].std()))]
df.drop(drop_rows, axis=0, inplace=True)
# Encode a column to a range between normalized_low and normalized_high.
def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1,
data_low=None, data_high=None):
if data_low is None:
data_low = min(df[name])
data_high = max(df[name])
df[name] = ((df[name] - data_low) / (data_high - data_low)) \
* (normalized_high - normalized_low) + normalized_low
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The parameters are as follows:
# data - Pandas dataframe output.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
# . The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when running in a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {}; it must be .py or .ipynb".format(ext))
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"),
'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code == 200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
###Output
_____no_output_____
###Markdown
Assignment 9 Sample CodeThe following code provides a starting point for this assignment.
###Code
import os
import pandas as pd
from scipy.stats import zscore
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
import pandas as pd
import io
import requests
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
import sklearn
from sklearn.linear_model import Lasso
# This is your student key that I emailed to you at the beginnning of the semester.
key = "ivYj3b2yJY2dvQ9MEQMLe5ECGenGc82p4dywJxtQ" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
# file='/resources/t81_558_deep_learning/assignment_yourname_class1.ipynb' # IBM Data Science Workbench
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\t81_558_class1_intro_python.ipynb' # Windows
file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class9.ipynb' # Mac/Linux
# file = "C:\\Users\\jeffh\\Dropbox\\school\\teaching\\wustl\\classes\\T81_558_deep_learning\\solutions\\assignment_solution_class8.ipynb"
# Begin assignment
path = "./data/"
filename_read = os.path.join(path,"regu-46-spring-2019.csv")
df = pd.read_csv(filename_read)
# Add assignment code here
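# Illustrative sketch only, not the official solution. It assumes 'target' is the
# y column and every other column (including id) is a predictor, as in the example
# table above; Lasso is already imported at the top of this cell.
X = df.drop('target', axis=1)
y = df['target']
model = Lasso(alpha=0.1)
model.fit(X, y)
submitDF = pd.DataFrame({'name': X.columns, 'coef': model.coef_})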
submit(source_file=file,data=submitDF,key=key,no=9)
###Output
_____no_output_____
###Markdown
T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 9 Assignment: Predictive Modeling 2****Student Name: Your Name** Assignment InstructionsComing soon. Helpful FunctionsYou will see these at the top of every module and assignment. These are simply a set of reusable functions that we will make use of. Each of them will be explained as the semester progresses. They are explained in greater detail as the course progresses. Class 4 contains a complete overview of these functions.
###Code
from sklearn import preprocessing
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import shutil
import os
import requests
import base64
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df, name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = "{}-{}".format(name, x)
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1
# at every location where the original column (name) matches each of the target_values. One column is added for
# each target value.
def encode_text_single_dummy(df, name, target_values):
for tv in target_values:
l = list(df[name].astype(str))
l = [1 if str(x) == str(tv) else 0 for x in l]
name2 = "{}-{}".format(name, tv)
df[name2] = l
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df, name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
# Encode a numeric column as zscores
def encode_numeric_zscore(df, name, mean=None, sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name] - mean) / sd
# Convert all missing values in the specified column to the median
def missing_median(df, name):
med = df[name].median()
df[name] = df[name].fillna(med)
# Convert all missing values in the specified column to the default
def missing_default(df, name, default_value):
df[name] = df[name].fillna(default_value)
# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df, target):
result = []
for x in df.columns:
if x != target:
result.append(x)
# find out the type of the target column. Is it really this hard? :(
target_type = df[target].dtypes
target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type
# Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
if target_type in (np.int64, np.int32):
# Classification
dummies = pd.get_dummies(df[target])
return df[result].values.astype(np.float32), dummies.values.astype(np.float32)
else:
# Regression
return df[result].values.astype(np.float32), df[[target]].values.astype(np.float32)
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
# Regression chart.
def chart_regression(pred,y,sort=True):
t = pd.DataFrame({'pred' : pred, 'y' : y.flatten()})
if sort:
t.sort_values(by=['y'],inplace=True)
a = plt.plot(t['y'].tolist(),label='expected')
b = plt.plot(t['pred'].tolist(),label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
# Remove all rows where the specified column is +/- sd standard deviations
def remove_outliers(df, name, sd):
drop_rows = df.index[(np.abs(df[name] - df[name].mean()) >= (sd * df[name].std()))]
df.drop(drop_rows, axis=0, inplace=True)
# Encode a column to a range between normalized_low and normalized_high.
def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1,
data_low=None, data_high=None):
if data_low is None:
data_low = min(df[name])
data_high = max(df[name])
df[name] = ((df[name] - data_low) / (data_high - data_low)) \
* (normalized_high - normalized_low) + normalized_low
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The parameters are as follows:
# data - Pandas dataframe output.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
# . The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when running in a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {}; it must be .py or .ipynb".format(ext))
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"),
'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code == 200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
###Output
_____no_output_____
###Markdown
Assignment 2 Sample CodeThe following code provides a starting point for this assignment.
###Code
# This is your student key that I emailed to you at the beginnning of the semester.
key = "qgABjW9GKV1vvFSQNxZW9akByENTpTAo2T9qOjmh" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
# file='/resources/t81_558_deep_learning/assignment_yourname_class1.ipynb' # IBM Data Science Workbench
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\t81_558_class1_intro_python.ipynb' # Windows
file='/Users/jeff/projects/t81_558_deep_learning/assignment_yourname_class1.ipynb' # Mac/Linux
df = pd.DataFrame({'a' : [0, 0, 1, 1], 'b' : [0, 1, 0, 1], 'c' : [0, 1, 1, 0]})
submit(source_file=file,data=df,key=key,no=1)
###Output
Success: Submitted assignment 1 for jheaton:
You have submitted this assignment 9 times. (this is fine)
No warnings on your data. You will probably do well, but no guarantee. :-)
###Markdown
Checking Your SubmissionYou can always double check to make sure your submission actually happened. The following utility code will help with that.
###Code
import requests
import pandas as pd
import base64
import os
def list_submits(key):
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key': key},
json={})
if r.status_code == 200:
print("Success: \n{}".format(r.text))
else:
print("Failure: {}".format(r.text))
def display_submit(key,no):
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key': key},
json={'assignment':no})
if r.status_code == 200:
print("Success: \n{}".format(r.text))
else:
print("Failure: {}".format(r.text))
# Show a listing of all submitted assignments.
key = "qgABjW9GKV1vvFSQNxZW9akByENTpTAo2T9qOjmh"
list_submits(key)
# Show one assignment, by number.
display_submit(key,1)
###Output
Success:
Assignment #1: Submitted 9 times, last on: 2017-12-27T12:09:32.895Z
*** Check ***
No warnings on your data. You will probably do well, but no guarantee. :-)
###Markdown
T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 9 Assignment: Kaggle Submission****Student Name: Your Name** Assignment InstructionsFor this assignment you will begin by loading a pretrained neural network that I provide here: [transfer_9.h5](https://data.heatonresearch.com/data/t81-558/networks/transfer_9.h5). You will demonstrate your ability to transfer several layers from this neural network to create a new neural network to be used for feature engineering.The **transfer_9.h5** neural network is composed of the following four layers:```Model: "sequential_7"_________________________________________________________________Layer (type) Output Shape Param =================================================================dense_11 (Dense) (None, 25) 225 _________________________________________________________________dense_12 (Dense) (None, 10) 260 _________________________________________________________________dense_13 (Dense) (None, 3) 33 _________________________________________________________________dense_14 (Dense) (None, 1) 4 =================================================================Total params: 522Trainable params: 522Non-trainable params: 0```You should only use the first three layers. The final dense layer should be removed, exposing the (None, 3) shaped layer as the new output layer. This is a 3-neuron layer. The output from these 3 layers will become your 3 engineered features. Complete the following tasks:* Load the Keras neural network **transfer_9.h5**. Note that you will need to download it to either your hard drive or GDrive (if you're using Google CoLab). Keras does not allow loading of a neural network across HTTP.* Create a new neural network with only the first 3 layers, drop the (None, 1) shaped layer.* Load the dataset [transfer_data.csv](https://data.heatonresearch.com/data/t81-558/datasets/transfer_data.csv). * Use all columns as input, but do not use *id* as input. You will need to save the *id* column to build your submission.* Do not zscore or transform the input columns.* Submit the output from the (None, 3) shaped layer, along with the corresponding *id* column. The three output neurons should create columns named *a*, *b*, and *c*.The submit file will look something like:|id|a|b|c||-|-|-|-||1|2.3602087|1.4411213|0||2|0.067718446|1.0037427|0.52129996||3|0.74778837|1.0647631|0.052594826||4|1.0594225|1.1211816|0||...|...|...|...| Assignment Submit FunctionYou will submit the 10 programming assignments electronically. The following submit function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any basic problems. **It is unlikely that should need to modify this function.**
###Code
import base64
import os
import numpy as np
import pandas as pd
import requests
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The parameters are as follows:
# data - Pandas dataframe output.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
# . The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when running in a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {}; it must be .py or .ipynb".format(ext))
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"),
'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code == 200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
###Output
_____no_output_____
###Markdown
Assignment 9 Sample CodeThe following code provides a starting point for this assignment.
###Code
import os
import pandas as pd
from scipy.stats import zscore
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.models import load_model
import pandas as pd
import io
import requests
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
import sklearn
from sklearn.linear_model import Lasso
# This is your student key that I emailed to you at the beginnning of the semester.
key = "PPboscDL2M94HCbkbvfOLakXXNy3dh5x2VV1Mlpm" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
# file='/resources/t81_558_deep_learning/assignment_yourname_class1.ipynb' # IBM Data Science Workbench
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\t81_558_class1_intro_python.ipynb' # Windows
file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class9.ipynb' # Mac/Linux
# file = "C:\\Users\\jeffh\\Dropbox\\school\\teaching\\wustl\\classes\\T81_558_deep_learning\\solutions\\assignment_solution_class8.ipynb"
# Begin assignment
model = load_model("/Users/jheaton/Downloads/transfer_9.h5") # modify to where you stored it
df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/datasets/transfer_data.csv")
submit(source_file=file,data=df_submit,key=key,no=9)
###Output
_____no_output_____
###Markdown
T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 9 Assignment: Kaggle Submission****Student Name: Your Name** Assignment InstructionsFor this assignment you will begin by loading a pretrained neural network that I provide here: [transfer_9.h5](https://data.heatonresearch.com/data/t81-558/networks/transfer_9.h5). You will demonstrate your ability to transfer several layers from this neural network to create a new neural network to be used for feature engineering.The **transfer_9.h5** neural network is composed of the following four layers:```Model: "sequential_7"_________________________________________________________________Layer (type) Output Shape Param =================================================================dense_11 (Dense) (None, 25) 225 _________________________________________________________________dense_12 (Dense) (None, 10) 260 _________________________________________________________________dense_13 (Dense) (None, 3) 33 _________________________________________________________________dense_14 (Dense) (None, 1) 4 =================================================================Total params: 522Trainable params: 522Non-trainable params: 0```You should only use the first three layers. The final dense layer should be removed, exposing the (None, 3) shaped layer as the new output layer. This is a 3-neuron layer. The output from these 3 layers will become your 3 engineered features. Complete the following tasks:* Load the Keras neural network **transfer_9.h5**. Note that you will need to download it to either your hard drive or GDrive (if you're using Google CoLab). Keras does not allow loading of a neural network across HTTP.* Create a new neural network with only the first 3 layers, drop the (None, 1) shaped layer.* Load the dataset [transfer_data.csv](https://data.heatonresearch.com/data/t81-558/datasets/transfer_data.csv). * Use all columns as input, but do not use *id* as input. You will need to save the *id* column to build your submission.* Do not zscore or transform the input columns.* Submit the output from the (None, 3) shaped layer, along with the corresponding *id* column. The three output neurons should create columns named *a*, *b*, and *c*.The submit file will look something like:|id|a|b|c||-|-|-|-||1|2.3602087|1.4411213|0||2|0.067718446|1.0037427|0.52129996||3|0.74778837|1.0647631|0.052594826||4|1.0594225|1.1211816|0||...|...|...|...| Assignment Submit FunctionYou will submit the 10 programming assignments electronically. The following submit function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any basic problems. **It is unlikely that should need to modify this function.**
###Code
import base64
import os
import numpy as np
import pandas as pd
import requests
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The parameters are as follows:
# data - Pandas dataframe output.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
# . The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when running in a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {}; it must be .py or .ipynb".format(ext))
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"),
'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code == 200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
###Output
_____no_output_____
###Markdown
Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to /content/drive.
###Code
from google.colab import drive
drive.mount('/content/drive')
!ls /content/drive/My\ Drive/Colab\ Notebooks
###Output
_____no_output_____
###Markdown
Assignment 9 Sample CodeThe following code provides a starting point for this assignment.
###Code
import os
import pandas as pd
from scipy.stats import zscore
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.models import load_model
import pandas as pd
import io
import requests
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
import sklearn
from sklearn.linear_model import Lasso
# This is your student key that I emailed to you at the beginnning of the semester.
key = "PPboscDL2M94HCbkbvfOLakXXNy3dh5x2VV1Mlpm" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
# file='/content/drive/My Drive/Colab Notebooks/assignment_yourname_class9.ipynb' # Google CoLab
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class9.ipynb' # Windows
file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class9.ipynb' # Mac/Linux
# Begin assignment
model = load_model("/Users/jheaton/Downloads/transfer_9.h5") # modify to where you stored it
df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/datasets/transfer_data.csv")
submit(source_file=file,data=df_submit,key=key,no=9)
###Output
_____no_output_____
###Markdown
T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 9 Assignment: Exploring Regularization****Student Name: Your Name** Assignment InstructionsFor this assignment you will use the **regu-46-spring-2018.csv** dataset. This is a dataset that I generated specifically for this semester. You can find the CSV file in the **data** directory of the class GitHub repository here: [regu-46-spring-2018.csv](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/data/regu-46-spring-2018.csv).You will fit/train a Lasso (L1) linear regression (use Lasso(alpha=0.1)), as described in Class 8. You will submit the coefficients for each of the predictors. The predictors are named x1, x2, x3, etc. The target/y is named *target*. You will submit these coefficients to the **submit** function. See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Some of the predictors are not important and you will see that the L1 regression assigns their coefficients to zero. Complete the following tasks:* No need to normalize all numerics to zscores and all text/categoricals to dummies. Do not normalize the *target*.* fit an L1 regression.* No need to cross validate.* Your submission should contain the input nane (column name *name*), and your coefficient (column name *coef*). * Your submission dataset will be similar in structure to:name | coef-----|-----id | 9.7631254902808e-06x1 | -0.0x2 | 0.3968072235584259x3 | -0.0004428522370290011x4 | 0.7910792827606201x5 | 0.003930636215955019x6 | -0.005123197101056576 Helpful FunctionsYou will see these at the top of every module and assignment. These are simply a set of reusable functions that we will make use of. Each of them will be explained as the semester progresses. They are explained in greater detail as the course progresses. Class 4 contains a complete overview of these functions.
###Code
import base64
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
from sklearn import preprocessing
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df, name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = f"{name}-{x}"
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1
# at every location where the original column (name) matches each of the target_values. One column is added for
# each target value.
def encode_text_single_dummy(df, name, target_values):
for tv in target_values:
l = list(df[name].astype(str))
l = [1 if str(x) == str(tv) else 0 for x in l]
name2 = f"{name}-{tv}"
df[name2] = l
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df, name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
# Encode a numeric column as zscores
def encode_numeric_zscore(df, name, mean=None, sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name] - mean) / sd
# Convert all missing values in the specified column to the median
def missing_median(df, name):
med = df[name].median()
df[name] = df[name].fillna(med)
# Convert all missing values in the specified column to the default
def missing_default(df, name, default_value):
df[name] = df[name].fillna(default_value)
# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df, target):
result = []
for x in df.columns:
if x != target:
result.append(x)
# find out the type of the target column. Is it really this hard? :(
target_type = df[target].dtypes
target_type = target_type[0] if hasattr(
target_type, '__iter__') else target_type
# Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
if target_type in (np.int64, np.int32):
# Classification
dummies = pd.get_dummies(df[target])
return df[result].values.astype(np.float32), dummies.values.astype(np.float32)
# Regression
return df[result].values.astype(np.float32), df[[target]].values.astype(np.float32)
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return f"{h}:{m:>02}:{s:>05.2f}"
# Regression chart.
def chart_regression(pred, y, sort=True):
t = pd.DataFrame({'pred': pred, 'y': y.flatten()})
if sort:
t.sort_values(by=['y'], inplace=True)
plt.plot(t['y'].tolist(), label='expected')
plt.plot(t['pred'].tolist(), label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
# Remove all rows where the specified column is +/- sd standard deviations
def remove_outliers(df, name, sd):
drop_rows = df.index[(np.abs(df[name] - df[name].mean())
>= (sd * df[name].std()))]
df.drop(drop_rows, axis=0, inplace=True)
# Encode a column to a range between normalized_low and normalized_high.
def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1,
data_low=None, data_high=None):
if data_low is None:
data_low = min(df[name])
data_high = max(df[name])
df[name] = ((df[name] - data_low) / (data_high - data_low)) \
* (normalized_high - normalized_low) + normalized_low
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The parameters are as follows:
# data - Pandas dataframe output.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
# . The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when running in a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {}; it must be .py or .ipynb".format(ext))
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"),
'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code == 200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
###Output
_____no_output_____
###Markdown
Assignment 9 Sample CodeThe following code provides a starting point for this assignment.
###Code
# This is your student key that I emailed to you at the beginnning of the semester.
key = "qgABjW9GKV1vvFSQNxZW9akByENTpTAo2T9qOjmh" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
# file='/resources/t81_558_deep_learning/assignment_yourname_class8.ipynb' # IBM Data Science Workbench
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\t81_558_class8_intro_python.ipynb' # Windows
file='/Users/jeff/projects/t81_558_deep_learning/assignment_yourname_class9.ipynb' # Mac/Linux
# Begin assignment
path = "./data/"
filename_read = os.path.join(path,"regu-46-spring-2018.csv")
df = pd.read_csv(filename_read)
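# Illustrative sketch only (assumes 'target' is the y column): fit the Lasso(alpha=0.1)
# model described above and list the predictors whose coefficients the L1 penalty
# drove to zero.
from sklearn.linear_model import Lasso
X = df.drop('target', axis=1)
y = df['target']
lasso = Lasso(alpha=0.1).fit(X, y)
print("Predictors zeroed out by L1:", [n for n, c in zip(X.columns, lasso.coef_) if c == 0.0])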
submitDF = pd.DataFrame()
submit(source_file=file,data=submitDF,key=key,no=9)
###Output
_____no_output_____
###Markdown
T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 9 Assignment: Transfer Learning****Student Name: Your Name** Assignment InstructionsThis assignment gives you the chance to explore some of the most advanced pretrained networks available. Keras comes with around 20 pretrained neural networks built-in. You can use these networks right out of the box without modification or extend these networks through transfer learning. For this assignment, I will show you how you can explore these networks and examine their structure. This technique can be a great learning aid to see the structure of some of the most advanced neural networks.To create one of the pretrained neural networks in Keras use the **blah** package. For example, you can create the **Xception** neural network with the following command:```net = tf.keras.applications.Xception()```To see the neural network structure issue the **summary** command:```net.summary()```The **dir** command will tell you what methods and properties are available for the neural network. You will use these functions to extract data from this structure. For example, to see the first layer:```net.layers[0]```To see what type the first layer is:```type(net.layers[0])```To see the internals of that layer:```dir(net.layers[0])```Use these sort of commands to build a table that looks similar to this:|name|input|output|layers|max_layer_wgt|wgt_count||---|---|---|---|---|---||Xception|299 x 299 x 3|1000|134|3.0M|21.8M|VGG16|224 x 224 x 3|1000|23|98.0M|131.9M|VGG19|224 x 224 x 3|1000|26|98.0M|137.0M|...|...|...|...|...|...The meanings of these columns are:* **name** - The name of the network.* **input** - The count/structure of input neurons.* **output** - The count/structure of output neurons.* **layers** - The count of layers.* **max_layer_wgt** - The maximum number of weights in any layer. (as a string)* **wgt_count** - The total count of weights. (as a string)Note, that I do request you to output weight counts a string, such as 10M. I provide a helper function for this. Also note, that I do request the input structure, such as 128 x 128 x 3. You should create a helper function of your own to format this output.Report on the following pretrained neural networks:* Xception* VGG16* VGG19* ResNet50* ResNet101* ResNet152V2* InceptionV3* InceptionResNetV2* MobileNet* MobileNetV2* DenseNet121* DenseNet169* DenseNet201* NASNetMobile* NASNetLarge* EfficientNetB7 Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to ```/content/drive```.
###Code
try:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
COLAB = True
print("Note: using Google CoLab")
%tensorflow_version 2.x
except:
print("Note: not using Google CoLab")
COLAB = False
###Output
_____no_output_____
###Markdown
Assignment Submit FunctionYou will submit the ten programming assignments electronically. The following **submit** function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any underlying problems. **It is unlikely that you should need to modify this function.**
###Code
import base64
import os
import numpy as np
import pandas as pd
import requests
import PIL
import PIL.Image
import io
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The parameters are as follows:
# data - List of pandas dataframes or images.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
# . The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when running in a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {}; it must be .py or .ipynb".format(ext))
payload = []
for item in data:
if type(item) is PIL.Image.Image:
buffered = io.BytesIO()
item.save(buffered, format="PNG")
payload.append({'PNG':base64.b64encode(buffered.getvalue()).decode('ascii')})
elif type(item) is pd.core.frame.DataFrame:
payload.append({'CSV':base64.b64encode(item.to_csv(index=False).encode('ascii')).decode("ascii")})
r= requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={ 'payload': payload,'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code==200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
###Output
_____no_output_____
###Markdown
Assignment 9 Sample CodeThe following code provides a starting point for this assignment.
###Code
import os
import pandas as pd
from scipy.stats import zscore
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.models import load_model
import pandas as pd
import io
import requests
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
import sklearn
from sklearn.linear_model import Lasso
# This is your student key that I emailed to you at the beginnning of the semester.
key = "H3B554uPhc3f8kirGGBYA7cYuDOamhXM87OY6QH1" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
file='/content/drive/MyDrive/Colab Notebooks/assignment_class9.ipynb' # Google CoLab
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class9.ipynb' # Windows
# file='/Users/jeff/projects/t81_558_deep_learning/assignments/assignment_yourname_class9.ipynb' # Mac/Linux
import numpy as np
import pandas as pd
import tensorflow as tf
lst_names = []
lst_input_count = []
lst_all_weights = []
lst_max_weights = []
lst_input = []
lst_output = []
lst_layers = []
lst_sort = []
# This function is based on the following:
# https://stackoverflow.com/questions/1094841/reusable-library-to-get-human-readable-version-of-file-size
def sizeof_fmt(num, suffix='B'):
for unit in ['','K','M','G','T','P','E','Z']:
if abs(num) < 1024.0:
return "%3.1f%s" % (num, unit)
num /= 1024.0
return "%.1f%s" % (num, 'Y')
def process_network(name,net):
pass
# Add code here
process_network("Xception", tf.keras.applications.Xception())
process_network("VGG16", tf.keras.applications.VGG16())
process_network("VGG19", tf.keras.applications.VGG19())
# Add code here
df = pd.DataFrame()
df['name'] = lst_names
df['input'] = lst_input
df['output'] = lst_output
df['layers'] = lst_layers
df['max_layer_wgt'] = lst_max_weights
df['wgt_count'] = lst_all_weights
submit(source_file=file,data=[df],key="y75zXVg7BSaB9FrVznQCA3dSLcKmY1Rp8h00I1QS",no=9)
###Output
_____no_output_____
###Markdown
T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 9 Assignment: Transfer Learning****Student Name: Your Name** Assignment InstructionsThis assignment gives you the chance to explore some of the most advanced pretrained networks available. Keras comes with around 20 pretrained neural networks built-in. You can use these networks right out of the box without modification or extend these networks through transfer learning. For this assignment, I will show you how you can explore these networks and examine their structure. This technique can be a great learning aid to see the structure of some of the most advanced neural networks.To create one of the pretrained neural networks in Keras use the **application** package. For example, you can create the **Xception** neural network with the following command:```net = tf.keras.applications.Xception()```To see the neural network structure issue the **summary** command:```net.summary()```The **dir** command will tell you what methods and properties are available for the neural network. You will use these functions to extract data from this structure. For example, to see the first layer:```net.layers[0]```To see what type the first layer is:```type(net.layers[0])```To see the internals of that layer:```dir(net.layers[0])```Use these sort of commands to build a table that looks similar to this:|name|input|output|layers|max_layer_wgt|wgt_count||---|---|---|---|---|---||Xception|299 x 299 x 3|1000|134|3.0M|21.8M|VGG16|224 x 224 x 3|1000|23|98.0M|131.9M|VGG19|224 x 224 x 3|1000|26|98.0M|137.0M|...|...|...|...|...|...The meanings of these columns are:* **name** - The name of the network.* **input** - The count/structure of input neurons.* **output** - The count/structure of output neurons.* **layers** - The count of layers.* **max_layer_wgt** - The maximum number of weights in any layer. (as a string)* **wgt_count** - The total count of weights. (as a string)Note, that I do request you to output weight counts a string, such as 10M. I provide a helper function for this. Also note, that I do request the input structure, such as 128 x 128 x 3. You should create a helper function of your own to format this output.Report on the following pretrained neural networks:* Xception* VGG16* VGG19* ResNet50* ResNet101* ResNet152V2* InceptionV3* InceptionResNetV2* MobileNet* MobileNetV2* DenseNet121* DenseNet169* DenseNet201* NASNetMobile* NASNetLarge* EfficientNetB7 Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to ```/content/drive```.
###Code
try:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
COLAB = True
print("Note: using Google CoLab")
%tensorflow_version 2.x
except:
print("Note: not using Google CoLab")
COLAB = False
###Output
_____no_output_____
###Markdown
Assignment Submit FunctionYou will submit the ten programming assignments electronically. The following **submit** function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any underlying problems. **It is unlikely that should need to modify this function.**
###Code
import base64
import os
import numpy as np
import pandas as pd
import requests
import PIL
import PIL.Image
import io
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The parameters are as follows:
# data - List of pandas dataframes or images.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
# . The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
    if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when running in a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
    if ext not in ['.ipynb','.py']: raise Exception("Source file extension is {}; it must be .py or .ipynb".format(ext))
payload = []
for item in data:
if type(item) is PIL.Image.Image:
            buffered = io.BytesIO()
item.save(buffered, format="PNG")
payload.append({'PNG':base64.b64encode(buffered.getvalue()).decode('ascii')})
elif type(item) is pd.core.frame.DataFrame:
payload.append({'CSV':base64.b64encode(item.to_csv(index=False).encode('ascii')).decode("ascii")})
r= requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={ 'payload': payload,'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code==200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
###Output
_____no_output_____
###Markdown
Assignment 9 Sample Code The following code provides a starting point for this assignment.
###Code
import os
import pandas as pd
from scipy.stats import zscore
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.models import load_model
import pandas as pd
import io
import requests
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
import sklearn
from sklearn.linear_model import Lasso
# This is your student key that I emailed to you at the beginning of the semester.
key = "H3B554uPhc3f8kirGGBYA7cYuDOamhXM87OY6QH1" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
file='/content/drive/MyDrive/Colab Notebooks/assignment_class9.ipynb' # Google CoLab
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class9.ipynb' # Windows
# file='/Users/jeff/projects/t81_558_deep_learning/assignments/assignment_yourname_class9.ipynb' # Mac/Linux
import numpy as np
import pandas as pd
import tensorflow as tf
lst_names = []
lst_input_count = []
lst_all_weights = []
lst_max_weights = []
lst_input = []
lst_output = []
lst_layers = []
lst_sort = []
# This function is based on the following:
# https://stackoverflow.com/questions/1094841/reusable-library-to-get-human-readable-version-of-file-size
def sizeof_fmt(num, suffix='B'):
for unit in ['','K','M','G','T','P','E','Z']:
if abs(num) < 1024.0:
return "%3.1f%s" % (num, unit)
num /= 1024.0
return "%.1f%s" % (num, 'Y')
def process_network(name,net):
pass
# Add code here
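# --- Added sketch (not part of the starter code): one possible way to fill in process_network,
# --- assuming the standard Keras Model attributes input_shape, output_shape, layers and
# --- count_params(). It overrides the placeholder above; format_shape is a helper name
# --- introduced here purely for illustration.
def format_shape(shape):
    # e.g. (None, 299, 299, 3) -> "299 x 299 x 3" (the batch dimension is dropped)
    return " x ".join(str(d) for d in shape[1:])

def process_network(name, net):
    lst_names.append(name)
    lst_input.append(format_shape(net.input_shape))
    lst_output.append(net.output_shape[-1])
    lst_layers.append(len(net.layers))
    # Largest per-layer weight count and total weight count, formatted as strings (e.g. "21.8M")
    lst_max_weights.append(sizeof_fmt(max(layer.count_params() for layer in net.layers)))
    lst_all_weights.append(sizeof_fmt(net.count_params()))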
process_network("Xception", tf.keras.applications.Xception())
process_network("VGG16", tf.keras.applications.VGG16())
process_network("VGG19", tf.keras.applications.VGG19())
# Add code here
df = pd.DataFrame()
df['name'] = lst_names
df['input'] = lst_input
df['output'] = lst_output
df['layers'] = lst_layers
df['max_layer_wgt'] = lst_max_weights
df['wgt_count'] = lst_all_weights
submit(source_file=file,data=[df],key="y75zXVg7BSaB9FrVznQCA3dSLcKmY1Rp8h00I1QS",no=9)
###Output
_____no_output_____ |
Data_Value/Data_optimization/Data_Optimization_on_CIFAR10_DB2_full.ipynb | ###Markdown
Double backward
###Code
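# Thin dataset wrapper that also returns each sample's index, so per-sample data weights
# can be looked up and updated during training.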
class MyDataset():
def __init__(self, dataset):
self.dataset = dataset
def __getitem__(self, index):
data, target = self.dataset[index]
return data, target, index
def __len__(self):
return len(self.dataset)
net.reset()
# restart split
cifar10_trainset = torchvision.datasets.CIFAR10('/home/fmejia/fmejia/Cypercat/cyphercat/datasets//', train=True, transform=transform, download=True)
trainset = MyDataset(cifar10_trainset)
cifar10_trainloader = torch.utils.data.DataLoader(trainset, batch_size = batch_size, shuffle=True, num_workers=2)
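# weighted_cross_entropy below computes a per-sample cross-entropy (log-softmax + NLL with
# reduction='none') and scales each sample's loss by the supplied weight, averaging over the
# batch size rather than over the weight sum.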
def weighted_cross_entropy(logits, label, weight=None):
reduction = 'none'
ignore_index = -100
l = F.nll_loss(F.log_softmax(logits, 1), label, None, None, ignore_index, None, reduction)
return (l*weight).sum()/weight.size()[0]
# if weight.sum() == 0:
# print('weights are zero')
# return (l*weight).sum()
# return (l*weight).sum()/weight.sum()
criterion = nn.CrossEntropyLoss()
lr_synthetic = 1e3
lr = 0.01
data_weights = torch.ones(len(cifar10_trainloader.dataset), requires_grad=True, device = device)
optimizer = optim.Adam([data_weights], lr = lr_synthetic, betas=(0.5, 0.999))
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=40,
gamma=0.5)
# train(net, cifar10_trainloader, cifar10_testloader, optimizer_model, criterion, n_epochs = 10, classes=classes, verbose=False)
cifar10_trainset = torchvision.datasets.CIFAR10('/home/fmejia/fmejia/Cypercat/cyphercat/datasets//', train=True, transform=transform, download=True)
trainset = MyDataset(cifar10_trainset)
cifar10_trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=2)
# cifar10_testloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=2)
def seed_everything(seed=1234):
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
random.seed(seed)
torch.cuda.manual_seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
np.random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
# losses = []
# losses_syn = []
# losses_miss = []
# losses_miss2 = []
# accuracy = []
# criterion_miss2 = nn.BCEWithLogitsLoss()
# criterion_miss = nn.L1Loss()
# lr_synthetic = 1e-2
# grad_val = []
# # for i in range(1000):
# n_epoch = 30
# n_restarts = 100
# data_weights = torch.zeros(len(cifar10_trainloader.dataset), requires_grad=True, device = device)
# data_mom = torch.zeros(len(cifar10_trainloader.dataset), requires_grad=True, device = device)
# for ii in range(n_restarts):
# print('n_restart = ' + str(ii))
# net = VGG()
# net.reset()
# net.train()
# w = torch.tensor(net.get_param().cpu().detach().numpy(), requires_grad = True).to(device)
# for jj in range(n_epoch):
# t=time.time()
# for batch, batch_eval in zip(cifar10_trainloader, cifar10_testloader5):
# imgs, labels, ind = batch
# imgs, labels = imgs.to(device), labels.to(device)
# imgs_eval, labels_eval = batch_eval
# imgs_eval, labels_eval = imgs_eval.to(device), labels_eval.to(device)
# ind = ind.to(device)
# ww = (torch.tensor(data_weights[ind], requires_grad=True, device = device))
# ## train with weighted data
# with torch.enable_grad():
# output = net.forward_with_param(imgs, w)
# loss = weighted_cross_entropy(output, labels, torch.sigmoid(ww))
# gw, = torch.autograd.grad(loss, w, grad_outputs = torch.tensor(lr).to(device),create_graph=True)
# net.zero_grad()
# # losses.append(loss.item())
# losses.append(loss.item() * imgs.size(0) / torch.sigmoid(ww).sum())
# # get eval performance
# with torch.enable_grad():
# output = net.forward_with_param(imgs_eval, w)
# l0 = criterion(output, labels_eval)
# dw, = torch.autograd.grad(l0, (w,))
# dgw = dw.neg()
# hvp_grad = torch.autograd.grad(
# outputs=(gw,),
# inputs=[ww],
# grad_outputs=(dgw,)
# )
# # data_weights.data[ind] = data_weights.data[ind] - lr_synthetic * hvp_grad[0]
# data_mom.data[ind] = 0.9 * data_mom.data[ind] + lr_synthetic * hvp_grad[0]
# data_weights.data[ind] = data_weights.data[ind] - data_mom.data[ind]
# net.zero_grad()
# with torch.no_grad():
# w = w.sub(gw).requires_grad_()
# print('epoch ' + str(jj))
# print(time.time()-t)
# ## normalize per class
# class_weight = torch.zeros(10)
# for i, weights in enumerate(data_weights):
# class_weight[trainset.dataset.targets[i]] += weights
# class_norm = class_weight/len(trainset)*10.
# for i in range(len(trainset)):
# data_weights[i] += -class_norm[trainset.dataset.targets[i]]
# # output = net.forward_with_param(imgs_eval, w)
# # loss = criterion(output, labels_eval)
# loss_sum = 0
# for batch in cifar10_testloader:
# imgs, labels = batch
# imgs, labels = imgs.to(device), labels.to(device)
# output = net.forward_with_param(imgs, w)
# loss = criterion(output, labels)
# loss_sum += loss.item()
# losses_syn.append(loss_sum)
# print('accuracy plots')
# plt.plot(losses)
# plt.show()
# plt.plot(losses_syn)
# plt.grid(True)
# plt.show()
# # print((F.sigmoid(data_weights).round().eq(torch.ones(50000).to(device)).sum()).float()/50000)
# plt.hist(F.sigmoid(data_weights).squeeze().cpu().detach().numpy())
# plt.show()
# with open('data_weights_DB_class_norm.pickle', 'wb') as f:
# pickle.dump(data_weights, f)
# # plt.hist(F.sigmoid(data_weights[ind00[class_idx]]).squeeze().cpu().detach().numpy())
# # plt.show()
# # plt.hist(F.sigmoid(data_weights[ind00]).squeeze().cpu().detach().numpy())
# # plt.show()
def eval_target_net(net, testloader, w, classes=None):
if classes is not None:
class_correct = np.zeros(10)
class_total = np.zeros(10)
total = 0
correct = 0
with torch.no_grad():
net.eval()
for i, (imgs, lbls) in enumerate(testloader):
imgs, lbls = imgs.to(device), lbls.to(device)
output = net.forward_with_param(imgs, w)
predicted = output.argmax(dim=1)
total += imgs.size(0)
correct += predicted.eq(lbls).sum().item()
if classes is not None:
for prediction, lbl in zip(predicted, lbls):
class_correct[lbl] += prediction == lbl
class_total[lbl] += 1
if classes is not None:
for i in range(len(classes)):
print('Accuracy of %s : %.2f %%' % (classes[i], 100 * class_correct[i] / class_total[i]))
print("\nTotal accuracy = %.2f %%\n\n" % (100*(correct/total)) )
return((100*(correct/total)))
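# get_batch rebuilds a mini-batch of CIFAR-10 images (3 x 32 x 32) from a list of stored sample
# indices, so the reverse pass can revisit exactly the batches seen during the forward pass.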
def get_batch(dataloader, ind_list):
imgs = torch.zeros(len(ind_list), 3, 32,32)
labels = torch.zeros(len(ind_list))
for count, i in enumerate(ind_list):
imgs[count,:,:,:], labels[count], _ = dataloader.dataset.__getitem__(i)
return imgs, labels
losses = []
losses_syn = []
losses_miss = []
losses_miss2 = []
accuracy = []
criterion_miss2 = nn.BCEWithLogitsLoss()
criterion_miss = nn.L1Loss()
lr_synthetic = 1e-2
lr = 0.01
grad_val = []
n_epoch = 5
n_restarts = 1
data_weights = torch.zeros(len(cifar10_trainloader.dataset), requires_grad=True, device = device)
data_mom = torch.zeros(len(cifar10_trainloader.dataset), requires_grad=True, device = device)
ind_list = []
w0 = torch.tensor(net.get_param().cpu().detach().numpy(), requires_grad = True).to(device)
for ii in range(n_restarts):
print('n_restart = ' + str(ii))
net = VGG()
net.reset()
net.train()
w = torch.tensor(net.get_param().cpu().detach().numpy(), requires_grad = True).to(device)
w0 = torch.tensor(net.get_param().cpu().detach().numpy(), requires_grad = True).to(device)
w_mom = torch.zeros(w.size()).to(device)
## Forward through training
for jj in range(n_epoch):
for batch in cifar10_trainloader:
imgs, labels, im_ind = batch
imgs, labels = imgs.to(device), labels.to(device)
ind_list.append(torch.tensor(im_ind))
## train with weighted data
with torch.enable_grad():
output = net.forward_with_param(imgs, w)
# loss = weighted_cross_entropy(output, labels, torch.sigmoid(ww))
loss = criterion(output, labels)
gw, = torch.autograd.grad(loss, w, grad_outputs = torch.tensor(lr).to(device),create_graph=True)
with torch.no_grad():
w_mom = 0.9 * w_mom + gw
w = w.sub(w_mom).requires_grad_()
net.zero_grad()
losses.append(loss.item())
print('epoch ' + str(jj))
print('accuracy plots')
plt.plot(losses)
plt.show()
eval_target_net(net, cifar10_testloader, w, classes=None)
# # Evaluation loss
# dw = torch.zeros(w.size()).to(device)
# for batch in cifar10_testloader:
# imgs, labels = batch
# imgs, labels = imgs.to(device), labels.to(device)
# outputs = net.forward_with_param(imgs, w)
# loss_test = criterion(outputs, labels)
# dummy, = torch.autograd.grad(loss_test, (w,))
# dw += dummy.detach()
## Backward through training
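# The loop below undoes the forward training updates one batch at a time, replaying the stored
# batch indices in reverse. The forward step was w_mom <- 0.9*w_mom + gw followed by
# w <- w - w_mom, so each reverse step first restores w <- w + w_mom, recomputes gw on the same
# batch, and then recovers the previous momentum via w_mom <- (w_mom - gw) / 0.9.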
ind_list.reverse()
count = 0
for ind in ind_list:
imgs, labels = get_batch(cifar10_trainloader, ind)
imgs, labels = imgs.to(device), labels.long().to(device)
ww = (torch.tensor(data_weights[ind], requires_grad=True, device = device))
with torch.no_grad():
w = w.add(w_mom).requires_grad_()
## train with weighted data
with torch.enable_grad():
output = net.forward_with_param(imgs, w)
# loss = weighted_cross_entropy(output, labels, torch.sigmoid(ww))
loss = criterion(output, labels)
# loss = weighted_cross_entropy(output, labels, torch.ones(ww.size()).to(device))
gw, = torch.autograd.grad(loss, w, grad_outputs = torch.tensor(lr).to(device),create_graph=True)
with torch.no_grad():
w_mom = (w_mom - gw) / 0.9
net.zero_grad()
losses.append(loss.item())
count +=1
print(count)
print(loss.item())
if loss.item() > 10:
break
if count%391 == 0:
print('epoch ' + str(count/391))
print('accuracy plots')
plt.plot(losses)
plt.show()
imgs, labels = get_batch(cifar10_trainloader, ind_list[0])
imgs, labels = imgs.to(device), labels.long().to(device)
with torch.enable_grad():
output = net.forward_with_param(imgs, w)
# loss = weighted_cross_entropy(output, labels, torch.sigmoid(ww))
loss = criterion(output, labels)
print(loss)
print(ww)
print(w)
print(w_mom)
print(loss)
print(gw)
## Backward through training
ind_list.reverse()
ind = ind_list[0]
imgs0, labels0 = get_batch(cifar10_trainloader, ind)
imgs0, labels0 = imgs0.to(device), labels0.long().to(device)
with torch.no_grad():
w = w.add(w_mom).requires_grad_()
with torch.enable_grad():
output = net.forward_with_param(imgs0, w)
loss0 = criterion(output, labels)
print(loss0)
gw0, = torch.autograd.grad(loss0, w, grad_outputs = torch.tensor(lr).to(device),create_graph=True)
print(gw0)
###Output
_____no_output_____ |
Step4_IRDetectForAllData.ipynb | ###Markdown
For videos that are merged
###Code
# trouble shoot IR traces for problem visits
IR_list = []
for moth in sensor_path:
sensor = io.loadmat(moth[0])
sensor_fname = moth[1]
IR = sensor['IR']
print(moth[1])
print(sensor['IR'])
if moth[1].startswith('L0.1_c-3_m46_3'):
cutframe = 1988-1688
IR = IR[cutframe:]
IR_list.append((IR , sensor_fname))
names = ['L0.1_c-3_m37','L0.1_c-3_m38', 'L0.1_c-3_m8', 'L0.1_c-3_m8']
prob = [40050,27933, 67219, 75799]
df = pd.DataFrame({"pro": prob, "name":names})
df
for name in IR_list:
figure = plt.figure()
ir = name[0]
val = df[df.name == name[1][:-4]].pro.values
print(len(val), val)
plt.plot(ir[0:val[0]+10000])
plt.scatter(val, len(val)*[max(ir)], c = 'b')
plt.title(name[1][:-4])
plt.savefig(path + "//" + name[1][:-4] +".png")
#used for combining sections of videos that have dropped frames
low_45 = np.concatenate((IR_list[0][0], IR_list[1][0]), axis=0)
low_46 = np.concatenate((IR_list[2][0], IR_list[3][0], IR_list[4][0]), axis=0)
high_50 = np.concatenate((IR_list[-2][0], IR_list[-1][0]), axis=0)
filtered, diff, IRdetect = get_diff(low_45)
np.save(outpath + "\\" + 'L0.1_c-3_m45.mat' + '_IRdetect', arr = IRdetect)
filtered, diff, IRdetect = get_diff(low_46)
np.save(outpath + "\\" + 'L0.1_c-3_m46.mat' + '_IRdetect', arr = IRdetect)
filtered, diff, IRdetect = get_diff(low_45)
np.save(outpath + "\\" + 'L50_c-3_m50.mat' + '_IRdetect', arr = IRdetect)
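# NOTE: the get_diff call above reuses low_45 instead of high_50 before saving the L50_c-3_m50
# result; this is redone correctly in the section below.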
###Output
_____no_output_____
###Markdown
_tanvi_: redoing data for L50_c-3_m50 - that seems like it got copied wrong above
###Code
data_path = r"D:\PaperDrafts_DanielLab\Multisensory - Light levels Paper\DataUploadForDryad\MotionAnalysis_Final\IR/"
sensor_path = glob.glob(data_path + 'L50_c-3_m50*.mat')
sensor_path
outpath = r"D:\PaperDrafts_DanielLab\Multisensory - Light levels Paper\DataUploadForDryad\MotionAnalysis_Final\Step4/"
IRlist = []
sensor_fname = 'L50_c-3_m50.mat'
for moth in sensor_path:
sensor = io.loadmat(moth)
IR = sensor['IR']
print(len(IR))
IRlist.append(IR)
high_50 = np.concatenate((IRlist[0], IRlist[1]), axis=0)
filtered, diff, IRdetect = get_diff(high_50)
np.save(outpath + "\\" + sensor_fname + '_IRdetect', arr = IRdetect)
#am not creating figures for this one
len(high_50), 33083+87107
len(IRdetect)
###Output
_____no_output_____ |
code/14.movielens_recommendation-systems.ipynb | ###Markdown
****** Movie Recommendation with GraphLab ******
###Code
import graphlab
graphlab.canvas.set_target("ipynb")
# set canvas to show sframes and sgraphs in ipython notebook
import matplotlib.pyplot as plt
%matplotlib inline
# download data from: http://files.grouplens.org/datasets/movielens/ml-1m.zip
data = graphlab.SFrame.read_csv('/Users/chengjun/bigdata/ml-1m/ratings.dat', delimiter='\n',
header=False)['X1'].apply(lambda x: x.split('::')).unpack()
for col in data.column_names():
data[col] = data[col].astype(int)
data.rename({'X.0': 'user_id', 'X.1': 'movie_id', 'X.2': 'rating', 'X.3': 'timestamp'})
data.save('ratings')
users = graphlab.SFrame.read_csv('/Users/chengjun/bigdata/ml-1m/users.dat', delimiter='\n',
header=False)['X1'].apply(lambda x: x.split('::')).unpack()
users.rename({'X.0': 'user_id', 'X.1': 'gender', 'X.2': 'age', 'X.3': 'occupation', 'X.4': 'zip-code'})
users['user_id'] = users['user_id'].astype(int)
users.save('users')
items = graphlab.SFrame.read_csv('/Users/chengjun/bigdata/ml-1m/movies.dat', delimiter='\n',
header=False)['X1'].apply(lambda x: x.split('::')).unpack()
items.rename({'X.0': 'movie_id', 'X.1': 'title', 'X.2': 'genre'})
items['movie_id'] = items['movie_id'].astype(int)
items.save('items')
data.show()
items.head()
data = data.join(items, on='movie_id')
data
(train_set, test_set) = data.random_split(0.95, seed=1)
m = graphlab.recommender.create(train_set, 'user_id', 'movie_id', 'rating')
m
m2 = graphlab.item_similarity_recommender.create(train_set, 'user_id', 'movie_id', 'rating',
similarity_type='pearson')
m2
result = graphlab.recommender.util.compare_models(test_set, [m, m2],
user_sample=.1, skip_set=train_set)
###Output
compare_models: using 562 users to estimate model performance
PROGRESS: Evaluate model M0
Precision and recall summary statistics by cutoff
+--------+-----------------+------------------+
| cutoff | mean_precision | mean_recall |
+--------+-----------------+------------------+
| 2 | 0.0435943060498 | 0.00956472275563 |
| 4 | 0.0333629893238 | 0.0148154269344 |
| 6 | 0.0308422301305 | 0.0200992907447 |
| 8 | 0.0289145907473 | 0.0259425986711 |
| 10 | 0.0274021352313 | 0.0287214600249 |
| 12 | 0.0260972716489 | 0.0337773113572 |
| 14 | 0.0263091001525 | 0.0394111159869 |
| 16 | 0.0256895017794 | 0.0462196778187 |
| 18 | 0.0250098853302 | 0.050977761984 |
| 20 | 0.0248220640569 | 0.0552180941837 |
+--------+-----------------+------------------+
[10 rows x 3 columns]
Overall RMSE: 0.906418088677
Per User RMSE (best)
+---------+-------+-----------------+
| user_id | count | rmse |
+---------+-------+-----------------+
| 5909 | 1 | 0.0473437604915 |
+---------+-------+-----------------+
[1 rows x 3 columns]
Per User RMSE (worst)
+---------+-------+---------------+
| user_id | count | rmse |
+---------+-------+---------------+
| 2379 | 1 | 3.30603390451 |
+---------+-------+---------------+
[1 rows x 3 columns]
Per Item RMSE (best)
+----------+-------+-------------------+
| movie_id | count | rmse |
+----------+-------+-------------------+
| 3407 | 1 | 0.000624169056996 |
+----------+-------+-------------------+
[1 rows x 3 columns]
Per Item RMSE (worst)
+----------+-------+---------------+
| movie_id | count | rmse |
+----------+-------+---------------+
| 3747 | 1 | 3.91489813071 |
+----------+-------+---------------+
[1 rows x 3 columns]
PROGRESS: Evaluate model M1
Precision and recall summary statistics by cutoff
+--------+-------------------+-------------------+
| cutoff | mean_precision | mean_recall |
+--------+-------------------+-------------------+
| 2 | 0.000889679715302 | 0.000296559905101 |
| 4 | 0.000444839857651 | 0.000296559905101 |
| 6 | 0.000593119810202 | 0.000889679715302 |
| 8 | 0.000667259786477 | 0.00133451957295 |
| 10 | 0.000711743772242 | 0.00169039145907 |
| 12 | 0.000593119810202 | 0.00169039145907 |
| 14 | 0.000762582613116 | 0.00215747330961 |
| 16 | 0.000667259786477 | 0.00215747330961 |
| 18 | 0.000691973111902 | 0.00230575326216 |
| 20 | 0.000800711743772 | 0.00236830044214 |
+--------+-------------------+-------------------+
[10 rows x 3 columns]
PROGRESS: Finished prediction in 0.09301s
Overall RMSE: 0.869846693134
Per User RMSE (best)
+---------+-------+-----------------+
| user_id | count | rmse |
+---------+-------+-----------------+
| 3350 | 1 | 0.0357205929343 |
+---------+-------+-----------------+
[1 rows x 3 columns]
Per User RMSE (worst)
+---------+-------+---------------+
| user_id | count | rmse |
+---------+-------+---------------+
| 200 | 1 | 3.72375859435 |
+---------+-------+---------------+
[1 rows x 3 columns]
Per Item RMSE (best)
+----------+-------+------------------+
| movie_id | count | rmse |
+----------+-------+------------------+
| 2273 | 1 | 0.00162381395374 |
+----------+-------+------------------+
[1 rows x 3 columns]
Per Item RMSE (worst)
+----------+-------+---------------+
| movie_id | count | rmse |
+----------+-------+---------------+
| 627 | 1 | 4.12012186276 |
+----------+-------+---------------+
[1 rows x 3 columns]
###Markdown
Getting similar items
###Code
m.get_similar_items([1287]) # movie_id is Ben-Hur
m.get_similar_items([1287]).join(items, on={'similar': 'movie_id'}).sort('rank')
###Output
PROGRESS: Getting similar items completed in 0.001121
###Markdown
Making recommendations
###Code
recs = m.recommend()
recs
data[data['user_id'] == 4].join(items, on='movie_id')
m.recommend(users=[4], k=20).join(items, on='movie_id')
m.recommend?
###Output
_____no_output_____
###Markdown
Recommendations for new users
###Code
recent_data = graphlab.SFrame()
recent_data['movie_id'] = [1291]
recent_data['user_id'] = 99999
m2.recommend(users=[99999], new_observation_data=recent_data).join(items, on='movie_id').sort('rank')
###Output
_____no_output_____
###Markdown
Saving and loading models
###Code
m.save('my_model')
m_again = graphlab.load_model('my_model')
m_again
###Output
_____no_output_____
###Markdown
****** Movie Recommendation with Turicreate ******
###Code
import turicreate as tc
# set canvas to show sframes and sgraphs in ipython notebook
# import matplotlib.pyplot as plt
# %matplotlib inline
# download data from: http://files.grouplens.org/datasets/movielens/ml-1m.zip
data = tc.SFrame.read_csv('/Users/datalab/bigdata/cjc/ml-1m/ratings.dat', delimiter='\n',
header=False)['X1'].apply(lambda x: x.split('::')).unpack()
for col in data.column_names():
data[col] = data[col].astype(int)
data = data.rename({'X.0': 'user_id', 'X.1': 'movie_id', 'X.2': 'rating', 'X.3': 'timestamp'})
#data.save('ratings')
users = tc.SFrame.read_csv('/Users/datalab/bigdata/cjc/ml-1m/users.dat', delimiter='\n',
header=False)['X1'].apply(lambda x: x.split('::')).unpack()
users = users.rename({'X.0': 'user_id', 'X.1': 'gender', 'X.2': 'age', 'X.3': 'occupation', 'X.4': 'zip-code'})
users['user_id'] = users['user_id'].astype(int)
users.save('users')
#items = tc.SFrame.read_csv('/Users/datalab/bigdata/ml-1m/movies.dat', delimiter='\n', header=False)#['X1'].apply(lambda x: x.split('::')).unpack()
# items = items.rename({'X.0': 'movie_id', 'X.1': 'title', 'X.2': 'genre'})
# items['movie_id'] = items['movie_id'].astype(int)
# items.save('items')
data
#items
users
#data = data.join(items, on='movie_id')
#data
train_set, test_set = data.random_split(0.95, seed=1)
m = tc.recommender.create(train_set, 'user_id', 'movie_id', 'rating')
m
m2 = tc.item_similarity_recommender.create(train_set,
'user_id', 'movie_id', 'rating',
similarity_type='pearson')
m2
result = tc.recommender.util.compare_models(test_set,
[m, m2],
user_sample=.5, skip_set=train_set)
###Output
compare_models: using 2811 users to estimate model performance
PROGRESS: Evaluate model M0
###Markdown
Getting similar items
###Code
m.get_similar_items([1287]) # movie_id is Ben-Hur
help(m.get_similar_items)
###Output
Help on method get_similar_items in module graphlab.toolkits.recommender.util:
get_similar_items(self, items=None, k=10, verbose=False) method of graphlab.toolkits.recommender.ranking_factorization_recommender.RankingFactorizationRecommender instance
Get the k most similar items for each item in items.
Each type of recommender has its own model for the similarity
between items. For example, the item_similarity_recommender will
return the most similar items according to the user-chosen
similarity; the factorization_recommender will return the
nearest items based on the cosine similarity between latent item
factors.
Parameters
----------
items : SArray or list; optional
An :class:`~graphlab.SArray` or list of item ids for which to get
similar items. If 'None', then return the `k` most similar items for
all items in the training set.
k : int, optional
The number of similar items for each item.
verbose : bool, optional
Progress printing is shown.
Returns
-------
out : SFrame
A SFrame with the top ranked similar items for each item. The
columns `item`, 'similar', 'score' and 'rank', where
`item` matches the item column name specified at training time.
The 'rank' is between 1 and `k` and 'score' gives the similarity
score of that item. The value of the score depends on the method
used for computing item similarities.
Examples
--------
>>> sf = graphlab.SFrame({'user_id': ["0", "0", "0", "1", "1", "2", "2", "2"],
'item_id': ["a", "b", "c", "a", "b", "b", "c", "d"]})
>>> m = graphlab.item_similarity_recommender.create(sf)
>>> nn = m.get_similar_items()
###Markdown
'score' gives the similarity score of that item
###Code
# m.get_similar_items([1287]).join(items, on={'similar': 'movie_id'}).sort('rank')
###Output
_____no_output_____
###Markdown
Making recommendations
###Code
recs = m.recommend()
recs
data[data['user_id'] == 4]
# m.recommend(users=[4], k=20).join(items, on='movie_id')
###Output
_____no_output_____
###Markdown
Recommendations for new users
###Code
recent_data = tc.SFrame()
recent_data['movie_id'] = [30, 1000, 900, 883, 251, 200, 199, 180, 120, 991, 1212]
recent_data['user_id'] = 99999
recent_data['rating'] = [2, 1, 3, 4, 0, 0, 1, 1, 1, 2, 3]
recent_data
m2.recommend(users=[99999], new_observation_data=recent_data)#.join(items, on='movie_id').sort('rank')
###Output
_____no_output_____
###Markdown
Saving and loading models
###Code
m.save('my_model')
m_again = tc.load_model('my_model')
m_again
###Output
_____no_output_____ |
master/corrdemo.ipynb | ###Markdown
Demo Template Matching using Normalized Cross Correlation In this lesson we show the template matching method to localize a given pattern in an image. We use the NCC - Normalized Cross-Correlation.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
###Output
_____no_output_____
###Markdown
Image input and template extraction This code reads a grayscale image and extracts a piece of it as a template.
###Code
import numpy as np
f = mpimg.imread('../data/cameraman.tif')
nb = ia.nbshow(2)
nb.nbshow(f, title='f: Original Image')
(r0,c0) = 25,106
N = 17
t = f[r0:r0+N,c0:c0+N]
nb.nbshow(t, title='t: Template')
nb.nbshow()
###Output
_____no_output_____
###Markdown
Direct Image Cross Correlation Direct image correlation is not an efficient procedure, as gray levels and illumination issues contain strong variations. $$ c(u,v) = \sum_{x,y} f(x,y)t(x-u,y-v)$$
###Code
f=f.astype(np.float)
t=t.astype(np.float)
c = ia.pconv(f, t[::-1,::-1])
ia.adshow(ia.normalize(c,[0,255]), title='Cross Correlation')
(row,col) = np.unravel_index(np.argmax(c),c.shape) - np.array([N-1,N-1])
print('found best match at (%3.0f,%3.0f)\n' %(row,col))
###Output
_____no_output_____
###Markdown
NCC - Normalized Cross Correlation It is necessary to subtract the image from its mean in order to make all regions (light or dark) receive the same importance value. The normalization factor can be used to improve the model detection.$$ \gamma(u,v) = \frac{\sum_{x,y} [ f(x,y) - \overline{f}_{u,v}][t(x-u,y-v) - \overline{t}]}{\sqrt{\sum_{x,y}[f(x,y)-\overline{f}_{u,v}]^2 \sum_{x,y}[t(x-u,y-v)-\overline{t}]^2}}$$If our concern is only to find the maximum response of $\gamma$, the above equation can be simplified to:$$ \gamma_1(u,v) = \frac{\sum_{x,y} [f(x,y) t'(x-u,y-v)]}{\sqrt{\sum_{x,y}[f(x,y)-\overline{f}_{u,v}]^2}}$$where $t'$ is the template subtracted from its mean. The denominator can be further simplified:$$ \gamma_1(u,v) = \frac{\sum_{x,y} [f(x,y) t'(x-u,y-v)]}{\sqrt{\sum_{x,y}f^2(x,y)-\frac{1}{n}[\sum_{x,y}f(x,y)]^2}}$$ Using periodic convolution, the above equation results in:$$ \gamma_1 = \frac{f \ast (\breve{t}-\overline{t})} {\sqrt{[f^2 \ast i] - \frac{1}{n}(f \ast i)^2}}$$
###Code
%%time
n = t.size
t1 = t[::-1,::-1] - t.mean()
num = ia.pconv(f,t1)
i = np.ones(t.shape)
fm2 = ia.pconv(f*f, i)
fm = ia.pconv(f,i)
den = np.sqrt(fm2 - fm*fm/n)
gamma1 = num/den
nb.nbshow(ia.normalize(num), title='numerator')
nb.nbshow(ia.normalize(den), title='denominator')
nb.nbshow(ia.normalize(gamma1), title='gamma1')
(row,col) = np.unravel_index(np.argmax(gamma1),gamma1.shape) - np.array([N-1,N-1])
print('found best match at (%3.0f,%3.0f)\n' %(row,col))
nb.nbshow()
###Output
found best match at ( 25,106)
###Markdown
Demo Template Matching using Normalized Cross Correlation In this lesson we show the template matching method to localize a given pattern in an image. We use the NCC - Normalized Cross-Correlation.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import sys,os
ea979path = os.path.abspath('../../')
if ea979path not in sys.path:
sys.path.append(ea979path)
import ea979.src as ia
###Output
_____no_output_____
###Markdown
Image input and template extraction This code reads a grayscale image and extracts a piece of it as a template.
###Code
import numpy as np
f = mpimg.imread('../data/cameraman.tif')
nb = ia.nbshow(2)
nb.nbshow(f, title='f: Original Image')
(r0,c0) = 25,106
N = 17
t = f[r0:r0+N,c0:c0+N]
nb.nbshow(t, title='t: Template')
nb.nbshow()
###Output
_____no_output_____
###Markdown
Direct Image Cross Correlation Direct image correlation is not an efficient procedure, as gray levels and illumination issues contain strong variations. $$ c(u,v) = \sum_{x,y} f(x,y)t(x-u,y-v)$$
###Code
f=f.astype(np.float)
t=t.astype(np.float)
c = ia.pconv(f, t[::-1,::-1])
ia.adshow(ia.normalize(c,[0,255]), title='Cross Correlation')
(row,col) = np.unravel_index(np.argmax(c),c.shape) - np.array([N-1,N-1])
print('found best match at (%3.0f,%3.0f)\n' %(row,col))
###Output
_____no_output_____
###Markdown
NCC - Normalized Cross Correlation It is necessary to subtract the image from its mean in order to make all regions (light or dark) receive the same importance value. The normalization factor can be used to improve the model detection.$$ \gamma(u,v) = \frac{\sum_{x,y} [ f(x,y) - \overline{f}_{u,v}][t(x-u,y-v) - \overline{t}]}{\sqrt{\sum_{x,y}[f(x,y)-\overline{f}_{u,v}]^2 \sum_{x,y}[t(x-u,y-v)-\overline{t}]^2}}$$If our concern is only to find the maximum response of $\gamma$, the above equation can be simplified to:$$ \gamma_1(u,v) = \frac{\sum_{x,y} [f(x,y) t'(x-u,y-v)]}{\sqrt{\sum_{x,y}[f(x,y)-\overline{f}_{u,v}]^2}}$$where $t'$ is the template subtracted from its mean. The denominator can be further simplified:$$ \gamma_1(u,v) = \frac{\sum_{x,y} [f(x,y) t'(x-u,y-v)]}{\sqrt{\sum_{x,y}f^2(x,y)-\frac{1}{n}[\sum_{x,y}f(x,y)]^2}}$$ Using periodic convolution, the above equation results in:$$ \gamma_1 = \frac{f \ast (\breve{t}-\overline{t})} {\sqrt{[f^2 \ast i] - \frac{1}{n}(f \ast i)^2}}$$
###Code
%%time
n = t.size
t1 = t[::-1,::-1] - t.mean()
num = ia.pconv(f,t1)
i = np.ones(t.shape)
fm2 = ia.pconv(f*f, i)
fm = ia.pconv(f,i)
den = np.sqrt(fm2 - fm*fm/n)
gamma1 = num/den
nb.nbshow(ia.normalize(num), title='numerator')
nb.nbshow(ia.normalize(den), title='denominator')
nb.nbshow(ia.normalize(gamma1), title='gamma1')
(row,col) = np.unravel_index(np.argmax(gamma1),gamma1.shape) - np.array([N-1,N-1])
print('found best match at (%3.0f,%3.0f)\n' %(row,col))
nb.nbshow()
###Output
found best match at ( 25,106)
|
Codes/[Prep]데이터_전처리_train_data_생성.ipynb | ###Markdown
1
train_preped_01.csv
8 variables
- date
  - starts from 2019-07-04 (because of missing values in the submission counts)
- 사용자 (users)
- 세션 (sessions)
- 신규방문자 (new visitors)
- 페이지뷰 (page views)
- cnt_signin
- cnt_login
- cnt_sub
- total_participants
###Code
def train_prep(df):
df['DateTime'] = pd.to_datetime(df['DateTime'])
df['date'] = df.DateTime.dt.date
df = df.groupby('date').sum().reset_index()
return df
def info_prep(df, col='count'):
    # extract the date column
df['c_time'] = pd.to_datetime(df['c_time'])
df['date'] = df['c_time'].dt.date
    # drop missing values
    df = df.dropna(how='all') # only rows where every value is missing
df = df.groupby('date')['date'].count().to_frame(name=col).reset_index()
return df
train = train_prep(train_raw)
info_user = info_prep(info_user_raw, 'cnt_signin')
info_login = info_prep(info_login_raw, 'cnt_login')
info_sub = info_prep(info_sub_raw, 'cnt_sub')
sub = sub_raw.copy()
sub['date'] = sub['DateTime'].dt.date
sub = sub[['date', '사용자', '세션', '신규방문자', '페이지뷰']]
train['isTrain'] = 1
sub['isTrain'] = 0
data = pd.concat([train, sub], axis=0).reset_index(drop=True)
data = data.merge(info_user, on='date', how='left')
data = data.merge(info_login, on='date', how='left')
data = data.merge(info_sub, on='date', how='left')
data.isna().sum()
utils.check_date(info_user) # cnt_signin: fill missing values with 0
utils.check_date(info_login) # cnt_login: drop the rows with missing values
# data is complete except for the first 14 days
utils.check_date(info_sub) # cnt_sub: judged to be meaningful only from 2019-07-04 onward
# if this variable is used, fill missing values after 2019-07-04 with 0 and use only that range
# handle missing values
data['cnt_signin'] = data.cnt_signin.fillna(0)
data['cnt_sub'] = data.cnt_sub.fillna(0)
data['date'] = pd.to_datetime(data['date'])
data = data[data['date'] >= '2019-07-04'].reset_index(drop=True)
info_cpt = utils.info_cpt_prep(info_cpt_raw) # df with columns: date | name (competition name) | total_participants
data = data.merge(info_cpt[['date', 'total_participants']], on='date', how='left')
data = data[['date', '사용자', '세션', '신규방문자', '페이지뷰', 'cnt_signin',
'cnt_login', 'cnt_sub', 'total_participants', 'isTrain']]
data.head()
data.shape
data.to_csv('/content/drive/MyDrive/dacon/daconcup/Data/all_preped_01.csv', index=False)
###Output
_____no_output_____
###Markdown
2
train_preped_02.csv
###Code
###Output
_____no_output_____ |
example-notebooks/datasets/plot_volume2D.ipynb | ###Markdown
Plot 2D Volume Data This plots example volume data onto an example subject, S1, onto a flatmap using quickflat. In order for this to run, you have to have a flatmap for this subject in the pycortex filestore. The cortex.Volume2D object is instantiated with two numpy arrays of the same size as the scan for this subject and transform. Here, there are two datasets that have been generated to look like gradients across the brain, but you can replace these with any numpy arrays of the correct dimensionality. The colormap used in the first two flatmaps is the default 2D colormap (shown as an image in the original gallery page, not reproduced here). As with a 1D Volume, you can change vmin and vmax to threshold, but here they can be manipulated individually for the two arrays. You can also change the colormap when creating a new 2D volume. The colormap used in the last flatmap is GreenWhiteBlue_2D (also shown as an image in the original gallery page).
###Code
import cortex
import numpy as np
import matplotlib.pyplot as plt
subject = "S1"
xfm = "fullhead"
# Creating two different test datasets that are both the same shape as this
# transform with one entry for each voxel
# The matrices have just been reordered in different ways so that they make
# gradients across the brain in different directions
test_data1 = np.arange(31 * 100 * 100).reshape((31, 100, 100), order='C')
test_data2 = np.arange(31 * 100 * 100).reshape((31, 100, 100), order='F')
# This creates a 2D Volume object for both of our test datasets for the given
# subject and transform
vol_data = cortex.Volume2D(test_data1, test_data2, subject, xfm)
cortex.quickshow(vol_data, with_colorbar=False)
plt.show()
# You can alter the minimum and maximum values shown on the colorbar and this
# can be done separately for the two different datasets
vol_data = cortex.Volume2D(test_data1, test_data2, subject, xfm,
vmin=np.mean(test_data1), vmax=np.max(test_data1),
vmin2=np.min(test_data2), vmax2=np.mean(test_data2))
cortex.quickshow(vol_data, with_colorbar=False)
plt.show()
# To change the colormap, you have to create a new Volume2D object
vol_color = cortex.Volume2D(test_data1, test_data2, subject, xfm,
cmap="GreenWhiteBlue_2D")
cortex.quickshow(vol_color, with_colorbar=False)
plt.show()
###Output
_____no_output_____ |
tutorials/ActionPotential.ipynb | ###Markdown
Leaky integrate and fire neuron by 3 neural nodes In this example we show how a single node performs temporal summation, a key feature in real neurons. Within a certain period of time, input signals are summed and if the resulting potential reaches above a threshold value, an output signal is generated.
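For reference, a textbook leaky integrate-and-fire membrane equation that captures this behaviour (not necessarily the exact device model implemented in the `physics` module used below) is $C_m \frac{dV}{dt} = -\frac{V - V_{rest}}{R_m} + I(t)$, where an output spike is emitted once $V$ crosses the threshold $V_{thres}$.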
###Code
%matplotlib inline
%load_ext autoreload
%autoreload 2
import matplotlib.pyplot as plt
import time
# load the modules specific to this project
from context import network as nw
from context import physics
from context import timemarching as tm
from context import plotter
from context import logger
plt.rcParams['figure.dpi'] = 100 # 200 e.g. is really fine, but slower
###Output
_____no_output_____
###Markdown
1. Define the broadcasting channels of the network This is done by creating a list of the channel names. The names are arbitrary and can be set by the user, such as 'positive', 'negative' or explicit wavelengths like '870 nm', '700 nm'. Here I chose the colors 'red' and 'blue'.
###Code
channel_list = ['blue','red']
# Automatically generate the object that handles them
channels = {channel_list[v] : v for v in range(len(channel_list))}
###Output
_____no_output_____
###Markdown
2. Define the layers Define the layers of nodes in terms of how they are connected to the channels. Layers and weights are organized in dictionaries. The input and output layers do not need to be changed, but for the hidden layer we need to specify the number of nodes N and assign the correct channels to the input/output of the node.
###Code
# Create layers ordered from 0 to P organized in a dictionary
layers = {}
# An input layer automatically creates on node for each channel that we define
layers[0] = nw.InputLayer(input_channels=channels)
# Forward signal layer
layers[1] = nw.HiddenLayer(N=1, output_channel='blue',excitation_channel='blue',inhibition_channel='red')
# Inhibiting memory layer
layers[2] = nw.HiddenLayer(N=1, output_channel='red' ,excitation_channel='blue',inhibition_channel='red')
layers[3] = nw.HiddenLayer(N=1, output_channel='blue',excitation_channel='blue',inhibition_channel='red')
layers[4] = nw.OutputLayer(output_channels=channels) # similar to input layer
###Output
_____no_output_____
###Markdown
3. Define existing connections between layers The weights are set in two steps. First, the connections between layers are defined. This should be done using the keys defined for each layer above, i.e. 0, 1, 2 ... for input, hidden and output layers, respectively. The `connect_layers` function returns a weight matrix object that we store under a chosen key, for example `'inp->hid'`. Second, the specific connections on the node-to-node level are specified using the node index in each layer.
###Code
# Define the overall connectivity
weights = {}
# The syntax is connect_layers(from_layer, to_layer, layers, channels)
weights['inp->hd0'] = nw.connect_layers(0, 1, layers, channels)
weights['hd0->hd1'] = nw.connect_layers(1, 2, layers, channels)
weights['hd0->out'] = nw.connect_layers(1, 4, layers, channels)
# Backwards connection from the memory
weights['hd1->hd0'] = nw.connect_layers(2, 1, layers, channels)
# Loop connection with the third hidden layer
weights['hd0->hd2'] = nw.connect_layers(1, 3, layers, channels)
# Backwards connection from third layer
weights['hd2->hd0'] = nw.connect_layers(3, 1, layers, channels)
# Define the specific node-to-node connections in the weight matrices
self_inhib = 0.65
self_excite = 2.0
# The syntax is connect_nodes(from_node, to_node, channel=label, weight=value in weight matrix)
# Input to first ring layer node
weights['inp->hd0'].connect_nodes(channels['blue'] ,0, channel='blue', weight=1.0) # channels['blue']=1
#weights['inp->hd0'].connect_nodes(channels['red'] ,0, channel='red', weight=1.0) # channels['blue']=1
# Hidden layer connections
weights['hd0->hd1'].connect_nodes(0 ,0 , channel='blue', weight=self_inhib)
# Loop connections
weights['hd0->hd2'].connect_nodes(0 ,0 , channel='blue', weight=self_excite)
weights['hd2->hd0'].connect_nodes(0 ,0 , channel='blue', weight=self_excite)
# Add damping connection
weights['hd1->hd0'].connect_nodes(0 ,0 , channel='red', weight=self_inhib)
# Connect to output
weights['hd0->out'].connect_nodes(0, channels['blue'], channel='blue', weight=0.9)
###Output
_____no_output_____
###Markdown
4. Visualize the network The `plotter` module supplies functions to visualize the network structure. The nodes are named by the layer type (Input, Hidden or Output) and the index. To suppress the printing of weight values on each connection, please supply `show_edge_labels=False`. Available layouts:**multipartite**: Standard neural network appearance. Hard to see recurrent couplings within layers. **circular**: Nodes drawn as a circle **shell**: Layers drawn as concentric circles **kamada_kawai**: Optimization to minimize weighted internode distance in graph **spring**: Spring layout which is standard in `networkx` Shell layout This is my current favorite. It is configured to plot the input and output nodes on the outside of the hidden layer circle, in a combined outer concentric circle.
###Code
plotter.visualize_network(layers, weights, layout='shell', show_edge_labels=False,shell_order=[1,[2,3],[0,4]],exclude_nodes={0: ['I1'], 4: ['O1']},savefig=True)
###Output
_____no_output_____
###Markdown
5. Specify the physics of the nodes Before running any simulations, we need to specify the input currents and the physics of the hidden layer nodes. Parameters can either be specified directly or coupled from the `physics` module.
###Code
# Specify different types of devices for the hidden layers
PtGate = physics.Device('../parameters/device_parameters_PtGate.txt')
AuGate = physics.Device('../parameters/device_parameters.txt')
# Tune the Rstore of the main node
PtGate.set_parameter('Rstore',5e6)
print('Rstore for PtGate device:')
PtGate.print_parameter('Rstore')
print('Rstore for AuGate device:')
AuGate.print_parameter('Rstore')
# 2. Memory (modify the parameters)
memory = physics.Device('../parameters/device_parameters_PtGate.txt')
memory.set_parameter('Rstore',2e7)
print('Rstore for memory device:')
memory.print_parameter('Rstore')
# Plot the two different transistors
plotter.visualize_transistor([AuGate.transistorIV_example(),PtGate.transistorIV_example()],labels=['AuGate-','PtGate-'])
# Specify the internal dynamics of each layer by assigning a device
layers[1].assign_device(PtGate)
layers[2].assign_device(memory)
layers[3].assign_device(AuGate)
# Tweak the threshold voltage
layers[1].Vthres=1.2 # main node
layers[2].Vthres=0.9 # memory, default value is 1.2 V
layers[3].Vthres=0.35 # loop excitation node
# Memory layer Vthres
print('Main node Vthres=',layers[1].Vthres)
print('Memory layer Vthres=', layers[2].Vthres)
print('Loop node Vthres=', layers[3].Vthres)
# Calculate the unity_coeff to scale the weights accordingly
unity_coeff, Imax = AuGate.inverse_gain_coefficient(PtGate.eta_ABC, layers[3].Vthres)
print(f'Unity coupling coefficient calculated as unity_coeff={unity_coeff:.4f}')
print(f'Imax is found to be {Imax} nA')
# Specify an exciting arbitrary pulse train mixing 0.5 and 1 ns pulses
t_blue = [(5.0,6.0), (8.0,8.5), (9.0,9,5), (10.0,11.0), (23.0,24.0), (30.0,31.0)] #
#t_blue = [(5.0,6.0), (8.0,8.5), (9.0,9,5), (10.0,11.0)] #
# Use the square pulse function and specify which node in the input layer gets which pulse
layers[0].set_input_func(channel='blue',func_handle=physics.square_pulse, func_args=(t_blue, 3.0*Imax))
# Use the constant function to specify the inhibition from I0 to H0
#layers[0].set_input_func(channel='red', func_handle=physics.constant, func_args=(I_red,))
###Output
_____no_output_____
###Markdown
6. Evolve in time
###Code
# Start time t, end time T
t = 0.0
T = 40.0 # ns
# To sample result over a fixed time-step, use savetime
savestep = 0.1
savetime = savestep
# These parameters are used to determine an appropriate time step each update
dtmax = 0.1 # ns
dVmax = 0.005 # V
nw.reset(layers)
# Create a log over the dynamic data
time_log = logger.Logger(layers,channels) # might need some flags
start = time.time()
while t < T:
# evolve by calculating derivatives, provides dt
dt = tm.evolve(t, layers, dVmax, dtmax )
# update with explicit Euler using dt
# supplying the unity_coeff here to scale the weights
tm.update(dt, t, layers, weights, unity_coeff)
t += dt
# Log the progress
if t > savetime :
# Put log update here to have (more or less) fixed sample rate
# Now this is only to check progress
print(f'Time at t={t} ns')
savetime += savestep
time_log.add_tstep(t, layers, unity_coeff)
end = time.time()
print('Time used:',end-start)
# This is a large pandas data frame of all system variables
result = time_log.get_timelog()
###Output
Time at t=0.2 ns
Time at t=0.30000000000000004 ns
Time at t=0.4 ns
Time at t=0.5 ns
Time at t=0.6 ns
Time at t=0.7 ns
Time at t=0.7999999999999999 ns
Time at t=0.8999999999999999 ns
Time at t=0.9999999999999999 ns
Time at t=1.0999999999999999 ns
Time at t=1.2 ns
Time at t=1.3 ns
Time at t=1.4000000000000001 ns
Time at t=1.5000000000000002 ns
Time at t=1.6000000000000003 ns
Time at t=1.7000000000000004 ns
Time at t=1.8000000000000005 ns
Time at t=1.9000000000000006 ns
Time at t=2.0000000000000004 ns
Time at t=2.1000000000000005 ns
Time at t=2.2000000000000006 ns
Time at t=2.3000000000000007 ns
Time at t=2.400000000000001 ns
Time at t=2.500000000000001 ns
Time at t=2.600000000000001 ns
Time at t=2.700000000000001 ns
Time at t=2.800000000000001 ns
Time at t=2.9000000000000012 ns
Time at t=2.9932985853645895 ns
Time at t=3.069677086379205 ns
Time at t=3.1632051547050346 ns
Time at t=3.2394339145345934 ns
Time at t=3.3331829833997935 ns
Time at t=3.409268793108564 ns
Time at t=3.5032306505126303 ns
Time at t=3.6733466687325467 ns
Time at t=3.7491656577261843 ns
Time at t=3.8435294967649014 ns
Time at t=3.9192240136864362 ns
Time at t=4.0137775503440745 ns
... (progress output continues, one such line roughly every 0.1 ns of simulated time) ...
Time at t=40.06487196201386 ns
Time used: 0.9709725379943848
###Markdown
7. Visualize results Plot results specific to certain nodes
###Code
nodes = ['H0','K0','L0']
plotter.plot_nodes(result, nodes, onecolumn=True)
###Output
_____no_output_____
###Markdown
For this system it's quite elegant to use the `plot_chainlist` function, taking as arguments a graph object, the source node (I1 for blue) and a target node (O1 for blue)
###Code
# Variable G contains a graph object describing the network
G = plotter.retrieve_G(layers, weights)
#plotter.plot_chainlist(result,G,'I1','L0')
plotter.plot_chainlist(result,G,'I0','K0')
plotter.plot_chainlist(result,G,'I0','L0')
###Output
_____no_output_____
###Markdown
Plot specific attributes
###Code
attr_list = ['Vgate']
plotter.plot_attributes(result, attr_list)
###Output
_____no_output_____
###Markdown
We can be totally specific if we want. First we list the available columns to choose from
###Code
print(result.columns)
###Output
Index(['Time', 'I0-Pout-blue', 'I1-Pout-red', 'H0-Vinh', 'H0-Vexc', 'H0-Vgate',
'H0-Iinh', 'H0-Iexc', 'H0-Iout', 'H0-ISD', 'H0-Pout', 'K0-Vinh',
'K0-Vexc', 'K0-Vgate', 'K0-Iinh', 'K0-Iexc', 'K0-Iout', 'K0-ISD',
'K0-Pout', 'L0-Vinh', 'L0-Vexc', 'L0-Vgate', 'L0-Iinh', 'L0-Iexc',
'L0-Iout', 'L0-ISD', 'L0-Pout', 'O0-Pout-blue', 'O1-Pout-red'],
dtype='object')
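###Markdown
With the column names listed, a single trace can also be plotted straight from the `result` DataFrame. The cell below is only a minimal sketch: the choice of gate-voltage columns and the nanosecond axis label are illustrative assumptions, not part of the original analysis.
###Code
# Plot a few recorded gate voltages against time, using column names from the index above
ax = result.plot(x='Time', y=['H0-Vgate', 'K0-Vgate', 'L0-Vgate'])
ax.set_xlabel('Time (ns)')  # assuming the Time column is in nanoseconds, as the progress output suggests
ax.set_ylabel('Vgate')
###Output
_____no_output_____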
|
homeworks/hw0/hw_0.ipynb | ###Markdown
Homework № 0
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Writing a decision tree from scratch Source: [mlcourse.ai](https://mlcourse.ai) by [Yury Kashnitsky](https://yorko.github.io) and [OpenDataScience](https://ods.ai) Consider the following one-dimensional regression problem. Informally, we need to build a function $a(x)$ that approximates the target dependence $y = f(x)$ in terms of the mean squared error: $\min \sum_i {(a(x_i) - f(x_i))}^2$.
###Code
X = np.linspace(-2, 2, 7)
y = X ** 3
plt.scatter(X, y)
plt.xlabel(r'$x$')
plt.plot(np.linspace(-2,2,50), np.linspace(np.mean(y),np.mean(y),50))
plt.ylabel(r'$y$');
###Output
_____no_output_____
###Markdown
Let's take a few steps in building the decision tree. For symmetry reasons, we choose the split thresholds equal to 0, 1.5 and -1.5, respectively. Recall that in a regression problem a leaf node outputs the mean target value over all training objects that fall into this leaf. So let's start. A tree of depth 0 consists of a single root that contains the whole training set. What will the predictions of this tree look like for $x \in [-2, 2]$? Plot the corresponding graph.
###Code
# Your code here
###Output
_____no_output_____
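###Markdown
One possible sketch (not the official solution), reusing `X`, `y`, `np` and `plt` from the cells above: a depth-0 tree simply predicts the mean of `y` for every `x`.
###Code
# Depth-0 tree: the single root predicts the mean target everywhere
xx = np.linspace(-2, 2, 100)
pred = np.full_like(xx, np.mean(y))
plt.scatter(X, y)
plt.plot(xx, pred)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');
###Output
_____no_output_____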
###Markdown
Let's make the first split of the sample by the predicate $[x < 0]$. We obtain a tree of depth 1 with two leaves. Plot a similar graph of the predictions for this tree.
###Code
# Your code here
###Output
_____no_output_____
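###Markdown
One possible sketch: after the split by $[x < 0]$ each leaf predicts the mean of `y` over the training points that fall into it.
###Code
# Depth-1 tree with the split at x < 0
xx = np.linspace(-2, 2, 200)
left_mean = np.mean(y[X < 0])
right_mean = np.mean(y[X >= 0])
pred = np.where(xx < 0, left_mean, right_mean)
plt.scatter(X, y)
plt.plot(xx, pred)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');
###Output
_____no_output_____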
###Markdown
In the decision tree building algorithm, the feature and the threshold value used to split the sample are chosen according to some criterion. For regression the variance criterion is usually used: $$Q(X, j, t) = D(X) - \dfrac{|X_l|}{|X|} D(X_l) - \dfrac{|X_r|}{|X|} D(X_r),$$ where $X$ is the sample at the current node, $X_l$ and $X_r$ are the two parts of the sample $X$ obtained by splitting on the predicate $[x_j < t]$ (that is, by the $j$-th feature and the threshold $t$), and $D(X)$ is the variance of the targets over the sample $X$: $$D(X) = \dfrac{1}{|X|} \sum_{x_i \in X}\Big(y_i - \dfrac{1}{|X|}\sum_{x_j \in X}y_j\Big)^2,$$ where $y_i = y(x_i)$ is the target for the object $x_i$. At each node split, the feature $j$ and the threshold value $t$ are chosen to maximize the value of the functional $Q(X, j, t)$. In our case there is only one feature, so $Q$ depends only on the threshold value $t$ (and on the targets of the sample at this node). Plot the function $Q(X, t)$ at the root as a function of the threshold value $t$ on the interval $[-1.9, 1.9]$.
###Code
# Your code here
###Output
_____no_output_____
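###Markdown
One possible sketch of the criterion for the single feature $x$ (the helper names `D` and `Q` are just illustrative). Note that `np.var` computes exactly the $D(X)$ defined above, since it divides by the sample size.
###Code
# Variance criterion Q(X, t) for a single feature, evaluated on a grid of thresholds
def D(values):
    return np.var(values) if len(values) > 0 else 0.0

def Q(X, y, t):
    y_left, y_right = y[X < t], y[X >= t]
    return np.var(y) - len(y_left) / len(y) * D(y_left) - len(y_right) / len(y) * D(y_right)

thresholds = np.linspace(-1.9, 1.9, 100)
plt.plot(thresholds, [Q(X, y, t) for t in thresholds])
plt.xlabel(r'$t$')
plt.ylabel(r'$Q(X, t)$');
###Output
_____no_output_____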
###Markdown
Now, based on the values of the obtained function, build a tree of depth 1.
###Code
# Your code here
###Output
_____no_output_____
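###Markdown
One possible sketch, reusing `Q` and `thresholds` from the sketch above: take the threshold that maximizes $Q$ and build the corresponding depth-1 tree.
###Code
# Depth-1 tree built from the best threshold found by the variance criterion
best_t = thresholds[np.argmax([Q(X, y, t) for t in thresholds])]
xx = np.linspace(-2, 2, 200)
pred = np.where(xx < best_t, np.mean(y[X < best_t]), np.mean(y[X >= best_t]))
plt.scatter(X, y)
plt.plot(xx, pred)
plt.title('Depth-1 tree, split at t = {:.2f}'.format(best_t))
###Output
_____no_output_____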
|
HW3-2.ipynb | ###Markdown
Using Python for Research Homework: Week 3, Case Study 2In this case study, we will find and plot the distribution of word frequencies for each translation of Hamlet. Perhaps the distribution of word frequencies of Hamlet depends on the translation --- let's find out!
###Code
# DO NOT EDIT THIS CODE!
import os
import pandas as pd
import numpy as np
from collections import Counter
def count_words_fast(text):
text = text.lower()
skips = [".", ",", ";", ":", "'", '"', "\n", "!", "?", "(", ")"]
for ch in skips:
text = text.replace(ch, "")
word_counts = Counter(text.split(" "))
return word_counts
def word_stats(word_counts):
num_unique = len(word_counts)
counts = word_counts.values()
return (num_unique, counts)
###Output
_____no_output_____
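###Markdown
As a quick, purely illustrative sanity check of the two helpers above (not part of the graded exercises):
###Code
# count_words_fast lower-cases the text, strips punctuation and counts words;
# word_stats returns the number of unique words and the raw counts.
example_counts = count_words_fast("To be, or not to be.")
num_unique, counts = word_stats(example_counts)
print(num_unique, sum(counts))  # 4 unique words, 6 words in total
###Output
_____no_output_____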
###Markdown
Exercise 1 In this case study, we will find and visualize summary statistics of the text of different translations of Hamlet. For this case study, functions `count_words_fast` and `word_stats` are already defined as in the Case 2 Videos (Videos 3.2.x). Instructions - Read in the data as a pandas dataframe using `pd.read_csv`. Use the `index_col` argument to set the first column in the csv file as the index for the dataframe. The data can be found at https://courses.edx.org/asset-v1:HarvardX+PH526x+2T2019+type@[email protected]
###Code
hamlets = pd.read_csv('hamlets.csv',index_col=0) ## Complete this line of code! ##
len(hamlets)
###Output
_____no_output_____
###Markdown
Exercise 2 In this exercise, we will summarize the text for a single translation of Hamlet in a `pandas` dataframe. Instructions- Find the dictionary of word frequency in `text` by calling `count_words_fast()`. Store this as `counted_text`.- Create a `pandas` dataframe named `data`.- Using `counted_text`, define two columns in data: - `word`, consisting of each unique word in text. - `count`, consisting of the number of times each word in `word` is included in the text.
###Code
language, text = hamlets.iloc[0]
# Enter your code here.
counted_text = count_words_fast(text)
data = pd.DataFrame(counted_text.items(),columns=["word","count"])
data[data.word=="hamlet"]
###Output
_____no_output_____
###Markdown
Exercise 3In this exercise, we will continue to define summary statistics for a single translation of Hamlet. Instructions- Add a column to data named `length`, defined as the length of each word.- Add another column named `frequency`, which is defined as follows for each word in `data`: - If `count > 10`, `frequency` is "frequent". - If `1 < count <= 10`, `frequency` is "infrequent". - If `count == 1`, `frequency` is "unique".
###Code
# write your code here!
data['length'] = data.word.map(len)
data.loc[data['count'] > 10, 'frequency'] = "frequent"
data.loc[(data['count'] <= 10) & (data['count'] > 1), 'frequency'] = "infrequent"
data.loc[data['count'] == 1, 'frequency'] = "unique"
len(data.loc[data.frequency=="unique"])
###Output
_____no_output_____
###Markdown
Exercise 4In this exercise, we will summarize the statistics in data into a smaller pandas dataframe. Instructions - Create a `pandas` dataframe named `sub_data` including the following columns: - `language`, which is the language of the text (defined in Exercise 2). - `frequency`, which is a list containing the strings "frequent", "infrequent", and "unique". - `mean_word_length`, which is the mean word length of each value in frequency. - `num_words`, which is the total number of words in each frequency category.
###Code
# write your code here!
sub_data = pd.DataFrame(columns=("language","frequency","mean_word_length","num_words"))
idx = 0
for frequency in ["frequent", "infrequent", "unique"]:
mean_word_length = data.loc[data.frequency==frequency,'length'].mean()
num_words = data.loc[data.frequency==frequency].shape[0]
sub_data.loc[idx] = language,frequency,mean_word_length,num_words
idx+=1
sub_data.loc[sub_data["frequency"]=="infrequent"]["mean_word_length"]
###Output
_____no_output_____
###Markdown
Exercise 5In this exercise, we will join all the data summaries for text Hamlet translation. Instructions - The previous code for summarizing a particular translation of Hamlet is consolidated into a single function called `summarize_text`. Create a pandas dataframe` grouped_data` consisting of the results of `summarize_text` for each translation of Hamlet in `hamlets`. - Use a `for` loop across the row indices of `hamlets` to assign each translation to a new row. - Obtain the `ith` row of `hamlets` to variables using the `.iloc` method, and assign the output to variables `language` and `text`. - Call `summarize_text` using `language` and `text`, and assign the output to `sub_data`. - Use the pandas `.append()` function to append to pandas dataframes row-wise to `grouped_data`.
###Code
def summarize_text(language, text):
counted_text = count_words_fast(text)
data = pd.DataFrame({
"word": list(counted_text.keys()),
"count": list(counted_text.values())
})
data.loc[data["count"] > 10, "frequency"] = "frequent"
data.loc[data["count"] <= 10, "frequency"] = "infrequent"
data.loc[data["count"] == 1, "frequency"] = "unique"
data["length"] = data["word"].apply(len)
sub_data = pd.DataFrame({
"language": language,
"frequency": ["frequent","infrequent","unique"],
"mean_word_length": data.groupby(by = "frequency")["length"].mean(),
"num_words": data.groupby(by = "frequency").size()
})
return(sub_data)
# write your code here!
grouped_data =pd.DataFrame(columns=("language","frequency","mean_word_length","num_words"))
for i in range(hamlets.shape[0]):
language, text = hamlets.iloc[i]
sub_data=summarize_text(language, text)
grouped_data=grouped_data.append(sub_data,ignore_index=True)
grouped_data[(grouped_data.language=="German") & (grouped_data.frequency=="frequent")]
grouped_data[(grouped_data.language=="Portuguese") & (grouped_data.frequency=="frequent")]
grouped_data
###Output
_____no_output_____
###Markdown
Exercise 6In this exercise, we will plot our results and look for differences across each translation. Instructions - Plot the word statistics of each translations on a single plot. Note that we have already done most of the work for you.- Consider: do the word statistics differ by translation?
###Code
colors = {"Portuguese": "green", "English": "blue", "German": "red"}
markers = {"frequent": "o","infrequent": "s", "unique": "^"}
import matplotlib.pyplot as plt
for i in range(grouped_data.shape[0]):
row = grouped_data.iloc[i]
plt.plot(row.mean_word_length, row.num_words,
marker=markers[row.frequency],
color = colors[row.language],
markersize = 10
)
color_legend = []
marker_legend = []
for color in colors:
color_legend.append(
plt.plot([], [],
color=colors[color],
marker="o",
label = color, markersize = 10, linestyle="None")
)
for marker in markers:
marker_legend.append(
plt.plot([], [],
color="k",
marker=markers[marker],
label = marker, markersize = 10, linestyle="None")
)
plt.legend(numpoints=3, loc = "upper left")
plt.xlabel("Mean Word Length")
plt.ylabel("Number of Words")
# write your code to display the plot here!
plt.show()
###Output
_____no_output_____
###Markdown
Using Python for Research Homework: Week 3, Case Study 2In this case study, we will find and plot the distribution of word frequencies for each translation of Hamlet. Perhaps the distribution of word frequencies of Hamlet depends on the translation --- let's find out!
###Code
# DO NOT EDIT THIS CODE!
import os
import pandas as pd
import numpy as np
from collections import Counter
def count_words_fast(text):
text = text.lower()
skips = [".", ",", ";", ":", "'", '"', "\n", "!", "?", "(", ")"]
for ch in skips:
text = text.replace(ch, "")
word_counts = Counter(text.split(" "))
return word_counts
def word_stats(word_counts):
num_unique = len(word_counts)
counts = word_counts.values()
return (num_unique, counts)
###Output
_____no_output_____
###Markdown
Exercise 1 In this case study, we will find and visualize summary statistics of the text of different translations of Hamlet. For this case study, functions `count_words_fast` and `word_stats` are already defined as in the Case 2 Videos (Videos 3.2.x). Instructions - Read in the data as a pandas dataframe using `pd.read_csv`. Use the `index_col` argument to set the first column in the csv file as the index for the dataframe. The data can be found at https://courses.edx.org/asset-v1:HarvardX+PH526x+2T2019+type@[email protected]
###Code
import pandas as pd
hamlets = pd.read_csv("hamlets.csv",index_col = 0)
print(hamlets)
###Output
language text
1 English The Tragedie of Hamlet\n ...
2 German Hamlet, Prinz von Dännemark.\n ...
3 Portuguese HAMLET\n DRAMA EM ...
###Markdown
Exercise 2 In this exercise, we will summarize the text for a single translation of Hamlet in a `pandas` dataframe. Instructions- Find the dictionary of word frequency in `text` by calling `count_words_fast()`. Store this as `counted_text`.- Create a `pandas` dataframe named `data`.- Using `counted_text`, define two columns in data: - `word`, consisting of each unique word in text. - `count`, consisting of the number of times each word in `word` is included in the text.
###Code
import os
import pandas as pd
import numpy as np
from collections import Counter
language, text = hamlets.iloc[0]
def count_words_fast(text):
text = text.lower()
skips = [".", ",", ";", ":", "'", '"', "\n", "!", "?", "(", ")"]
for ch in skips:
text = text.replace(ch, "")
word_counts = Counter(text.split(" "))
return word_counts
# Enter your code here.
counted_text = count_words_fast(text)
data = pd.DataFrame({"word":list(counted_text.keys()),"count":list(counted_text.values())})
data
###Output
_____no_output_____
###Markdown
Exercise 3In this exercise, we will continue to define summary statistics for a single translation of Hamlet. Instructions- Add a column to data named `length`, defined as the length of each word.- Add another column named `frequency`, which is defined as follows for each word in `data`: - If `count > 10`, `frequency` is "frequent". - If `1 < count <= 10`, `frequency` is "infrequent". - If `count == 1`, `frequency` is "unique".
###Code
# write your code here!
def condition_length(a):
return len(a['word'])
def conditions(s):
if (s['count'] > 10):
return "frequent"
elif (1< s['count'] <= 10):
return "infrequent"
else:
return "unique"
data['length'] = data.apply(condition_length,axis=1)
data['frequency'] = data.apply(conditions,axis=1)
len(data[data['frequency'] == 'unique'])
###Output
_____no_output_____
###Markdown
Exercise 4In this exercise, we will summarize the statistics in data into a smaller pandas dataframe. Instructions - Create a `pandas` dataframe named `sub_data` including the following columns: - `language`, which is the language of the text (defined in Exercise 2). - `frequency`, which is a list containing the strings "frequent", "infrequent", and "unique". - `mean_word_length`, which is the mean word length of each value in frequency. - `num_words`, which is the total number of words in each frequency category.
###Code
# write your code here!
sub_data = pd.DataFrame({'language':language,'frequency':['frequent','infrequent','unique'],'mean_word_length':data.groupby(by ="frequency")['length'].mean(),'num_words':data.groupby(by="frequency")['count'].size()})
print(sub_data)
###Output
language frequency mean_word_length num_words
frequency
frequent English frequent 4.371517 323
infrequent English infrequent 5.825243 1442
unique English unique 7.005675 3348
###Markdown
Exercise 5In this exercise, we will join all the data summaries for text Hamlet translation. Instructions - The previous code for summarizing a particular translation of Hamlet is consolidated into a single function called `summarize_text`. Create a pandas dataframe` grouped_data` consisting of the results of `summarize_text` for each translation of Hamlet in `hamlets`. - Use a `for` loop across the row indices of `hamlets` to assign each translation to a new row. - Obtain the `ith` row of `hamlets` to variables using the `.iloc` method, and assign the output to variables `language` and `text`. - Call `summarize_text` using `language` and `text`, and assign the output to `sub_data`. - Use the pandas `.append()` function to append to pandas dataframes row-wise to `grouped_data`.
###Code
def summarize_text(language, text):
counted_text = count_words_fast(text)
data = pd.DataFrame({
"word": list(counted_text.keys()),
"count": list(counted_text.values())
})
data.loc[data["count"] > 10, "frequency"] = "frequent"
data.loc[data["count"] <= 10, "frequency"] = "infrequent"
data.loc[data["count"] == 1, "frequency"] = "unique"
data["length"] = data["word"].apply(len)
sub_data = pd.DataFrame({
"language": language,
"frequency": ["frequent","infrequent","unique"],
"mean_word_length": data.groupby(by = "frequency")["length"].mean(),
"num_words": data.groupby(by = "frequency").size()
})
return(sub_data)
grouped_data = pd.DataFrame(columns = ["language", "frequency", "mean_word_length", "num_words"])
# write your code here!
for i in range(hamlets.shape[0]):
language,text = hamlets.iloc[i]
sub_data = summarize_text(language,text)
grouped_data = grouped_data.append(sub_data)
print(grouped_data)
###Output
language frequency mean_word_length num_words
frequent English frequent 4.371517 323
infrequent English infrequent 5.825243 1442
unique English unique 7.005675 3348
frequent German frequent 4.528053 303
infrequent German infrequent 6.481830 1596
unique German unique 9.006987 5582
frequent Portuguese frequent 4.417625 261
infrequent Portuguese infrequent 6.497870 1643
unique Portuguese unique 8.669778 5357
###Markdown
Exercise 6In this exercise, we will plot our results and look for differences across each translation. Instructions - Plot the word statistics of each translations on a single plot. Note that we have already done most of the work for you.- Consider: do the word statistics differ by translation?
###Code
colors = {"Portuguese": "green", "English": "blue", "German": "red"}
markers = {"frequent": "o","infrequent": "s", "unique": "^"}
import matplotlib.pyplot as plt
for i in range(grouped_data.shape[0]):
row = grouped_data.iloc[i]
plt.plot(row.mean_word_length, row.num_words,
marker=markers[row.frequency],
color = colors[row.language],
markersize = 10
)
color_legend = []
marker_legend = []
for color in colors:
color_legend.append(
plt.plot([], [],
color=colors[color],
marker="o",
label = color, markersize = 10, linestyle="None")
)
for marker in markers:
marker_legend.append(
plt.plot([], [],
color="k",
marker=markers[marker],
label = marker, markersize = 10, linestyle="None")
)
plt.legend(numpoints=1, loc = "upper left")
plt.xlabel("Mean Word Length")
plt.ylabel("Number of Words")
plt.show()
###Output
_____no_output_____
###Markdown
Using Python for Research Homework: Week 3, Case Study 2In this case study, we will find and plot the distribution of word frequencies for each translation of Hamlet. Perhaps the distribution of word frequencies of Hamlet depends on the translation --- let's find out!
###Code
# DO NOT EDIT THIS CODE!
import os
import pandas as pd
import numpy as np
from collections import Counter
def count_words_fast(text):
text = text.lower()
skips = [".", ",", ";", ":", "'", '"', "\n", "!", "?", "(", ")"]
for ch in skips:
text = text.replace(ch, "")
word_counts = Counter(text.split(" "))
return word_counts
def word_stats(word_counts):
num_unique = len(word_counts)
counts = word_counts.values()
return (num_unique, counts)
###Output
_____no_output_____
###Markdown
Exercise 1 In this case study, we will find and visualize summary statistics of the text of different translations of Hamlet. For this case study, functions `count_words_fast` and `word_stats` are already defined as in the Case 2 Videos (Videos 3.2.x). Instructions - Read in the data as a pandas dataframe using `pd.read_csv`. Use the `index_col` argument to set the first column in the csv file as the index for the dataframe. The data can be found at https://courses.edx.org/asset-v1:HarvardX+PH526x+2T2019+type@[email protected]
###Code
#hamlets = ## Complete this line of code! ##
hamlets = pd.read_csv("asset-v1_HarvardX+PH526x+2T2019+type@[email protected]", index_col=0)
hamlets
###Output
_____no_output_____
###Markdown
Exercise 2 In this exercise, we will summarize the text for a single translation of Hamlet in a `pandas` dataframe. Instructions- Find the dictionary of word frequency in `text` by calling `count_words_fast()`. Store this as `counted_text`.- Create a `pandas` dataframe named `data`.- Using `counted_text`, define two columns in data: - `word`, consisting of each unique word in text. - `count`, consisting of the number of times each word in `word` is included in the text.
###Code
language, text = hamlets.iloc[0]
# Enter your code here.
counted_text = count_words_fast(text)
# (text)
data =pd.DataFrame({
"word":list(counted_text.keys()),
"count":list(counted_text.values())
})
# counted_text.keys()
data[data['word']=='hamlet']
counted_text.keys()
counted_text.values()
sum(counted_text.values())
###Output
_____no_output_____
###Markdown
Exercise 3In this exercise, we will continue to define summary statistics for a single translation of Hamlet. Instructions- Add a column to data named `length`, defined as the length of each word.- Add another column named `frequency`, which is defined as follows for each word in `data`: - If `count > 10`, `frequency` is "frequent". - If `1 < count <= 10`, `frequency` is "infrequent". - If `count == 1`, `frequency` is "unique".
###Code
# write your code here!
# apply(function), Apply function to each object.
data['length'] = data.apply(lambda row: len(row['word']), axis=1)
def frequency(row):
if row['count']>10:
        return 'frequent'
if 1<row['count']<=10:
return 'infrequent'
if row['count']==1:
return 'unique'
data['frequency'] = data.apply (lambda row: frequency(row), axis=1)
data['frequency'].value_counts()['unique']
# data.iloc[:, [1, 2, 5]]
# Use df.loc[] and df.iloc[] to select only rows, only columns or both.
# Use df.at[] and df.iat[] to access a single value by row and column.
# First index selects rows, second index columns.
# df.iloc[10:20], Select rows 10-20.
# df.iloc[:, [1, 2, 5]], Select columns in positions 1, 2 and 5 (first column is 0).
# df.loc[:, 'x2':'x4'], Select all columns between x2 and x4 (inclusive).
# df.loc[df['a'] > 10, ['a’, 'c']], Select rows meeting logical condition, and only the specific columns .
# df.iat[1, 2] Access single value by index
# df.at[4, 'A'] Access single value by label
data["length"] = data["word"].apply(len)
data.loc[data["count"] > 10, "frequency"] = "frequent"
data.loc[data["count"] <= 10, "frequency"] = "infrequent"
data.loc[data["count"] == 1, "frequency"] = "unique"
data.groupby('frequency').count()
# data
###Output
_____no_output_____
###Markdown
Exercise 4In this exercise, we will summarize the statistics in data into a smaller pandas dataframe. Instructions - Create a `pandas` dataframe named `sub_data` including the following columns: - `language`, which is the language of the text (defined in Exercise 2). - `frequency`, which is a list containing the strings "frequent", "infrequent", and "unique". - `mean_word_length`, which is the mean word length of each value in frequency. - `num_words`, which is the total number of words in each frequency category.
###Code
# write your code here!
sub_data = pd.DataFrame({'language':'English',
'frequency':["frequent", "infrequent", "unique"],
"mean_word_length": data.groupby(by = "frequency")["length"].mean(),
"num_words": data.groupby(by = "frequency").size()
})
sub_data
###Output
_____no_output_____
###Markdown
Exercise 5In this exercise, we will join all the data summaries for text Hamlet translation. Instructions - The previous code for summarizing a particular translation of Hamlet is consolidated into a single function called `summarize_text`. Create a pandas dataframe` grouped_data` consisting of the results of `summarize_text` for each translation of Hamlet in `hamlets`. - Use a `for` loop across the row indices of `hamlets` to assign each translation to a new row. - Obtain the `ith` row of `hamlets` to variables using the `.iloc` method, and assign the output to variables `language` and `text`. - Call `summarize_text` using `language` and `text`, and assign the output to `sub_data`. - Use the pandas `.append()` function to append to pandas dataframes row-wise to `grouped_data`.
###Code
def summarize_text(language, text):
counted_text = count_words_fast(text)
data = pd.DataFrame({
"word": list(counted_text.keys()),
"count": list(counted_text.values())
})
data.loc[data["count"] > 10, "frequency"] = "frequent"
data.loc[data["count"] <= 10, "frequency"] = "infrequent"
data.loc[data["count"] == 1, "frequency"] = "unique"
data["length"] = data["word"].apply(len)
sub_data = pd.DataFrame({
"language": language,
"frequency": ["frequent","infrequent","unique"],
"mean_word_length": data.groupby(by = "frequency")["length"].mean(),
"num_words": data.groupby(by = "frequency").size()
})
return(sub_data)
# write your code here!
# futureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.
grouped_data = pd.DataFrame(columns = ["language", "frequency", "mean_word_length", "num_words"])
for row in range(len(hamlets)):
language, text = hamlets.iloc[row]
sub_data = summarize_text(language, text)
grouped_data = grouped_data.append(sub_data)
# grouped_data = pd.concat([sub_data])
grouped_data
grouped_data.shape
###Output
_____no_output_____
###Markdown
Exercise 6In this exercise, we will plot our results and look for differences across each translation. Instructions - Plot the word statistics of each translations on a single plot. Note that we have already done most of the work for you.- Consider: do the word statistics differ by translation?
###Code
colors = {"Portuguese": "green", "English": "blue", "German": "red"}
markers = {"frequent": "o","infrequent": "s", "unique": "^"}
import matplotlib.pyplot as plt
for i in range(grouped_data.shape[0]):
row = grouped_data.iloc[i]
plt.plot(row.mean_word_length, row.num_words,
marker=markers[row.frequency],
color = colors[row.language],
markersize = 10
)
color_legend = []
marker_legend = []
for color in colors:
color_legend.append(
plt.plot([], [],
color=colors[color],
marker="o",
label = color, markersize = 10, linestyle="None")
)
for marker in markers:
marker_legend.append(
plt.plot([], [],
color="k",
marker=markers[marker],
label = marker, markersize = 10, linestyle="None")
)
plt.legend(numpoints=1, loc = "upper left")
plt.xlabel("Mean Word Length")
plt.ylabel("Number of Words")
# write your code to display the plot here!
plt.show()
###Output
_____no_output_____ |
examples/notebooks/paper_plots.ipynb | ###Markdown
Examples in the paper
###Code
cd ../../examples/paper_examples
import paper_plots
# NNLS
tests = paper_plots.TestPaper()
tests.setUp()
tests.test_nnls('../figures/nnls.pdf')
# NNLS: regularization effect
tests = paper_plots.TestPaper()
tests.setUp()
tests.test_nnls_reg('../figures/reg_effect.pdf')
# Sparse Inverse Covariance Estimation
tests = paper_plots.TestPaper()
tests.setUp()
tests.test_sparse_inv_covariance(50, 0.01, '../figures/sparse_inv_cov_est_small.pdf')
# Sparse Inverse Covariance Estimation
tests = paper_plots.TestPaper()
tests.setUp()
tests.test_sparse_inv_covariance(100, 0.001, '../figures/sparse_inv_cov_est_large.pdf')
# l1 Trend Filtering
tests = paper_plots.TestPaper()
tests.setUp()
tests.test_l1_trend_filtering('../figures/l1_trend_filter.pdf')
# Single Commodity Flow
tests = paper_plots.TestPaper()
tests.setUp()
tests.test_commodity_flow('../figures/single_com_flow.pdf')
# Optimal Control
tests = paper_plots.TestPaper()
tests.setUp()
tests.test_optimal_control('../figures/opt_cont.pdf')
# Coupled QP
tests = paper_plots.TestPaper()
tests.setUp()
tests.test_coupled_qp('../figures/coupled_qp.pdf')
# Multi-task Regularized Logistic Regression
tests = paper_plots.TestPaper()
tests.setUp()
tests.test_multi_task_logistic('../figures/multitask_reglog.pdf')
###Output
----------------------------------------------------------------------
a2dr v0.2.3.post3 - Prox-Affine Distributed Convex Optimization Solver
(c) Anqi Fu, Junzi Zhang
Stanford University 2019
----------------------------------------------------------------------
### Preconditioning starts ... ###
### Preconditioning finished. ###
max_iter = 1000, t_init (after preconditioning) = 7.79
eps_abs = 1.00e-06, eps_rel = 1.00e-08, precond = True
ada_reg = True, anderson = False, m_accel = 10
lam_accel = 1.00e-08, aa_method = lstsq, D_safe = 1.00e+06
eps_safe = 1.00e-06, M_safe = 10
variables n = 13000, constraints m = 8000
nnz(A) = 1513000
Setup time: 2.47e-01
----------------------------------------------------
iter | total res | primal res | dual res | time (s)
----------------------------------------------------
0| 2.88e+00 1.29e+00 2.58e+00 6.16e-01
100| 1.80e-01 1.54e-03 1.80e-01 3.51e+01
200| 8.73e-02 3.93e-04 8.73e-02 5.98e+01
300| 5.64e-02 1.76e-04 5.64e-02 8.61e+01
400| 4.09e-02 9.90e-05 4.09e-02 1.17e+02
500| 3.16e-02 6.33e-05 3.16e-02 1.47e+02
600| 2.54e-02 4.39e-05 2.54e-02 1.73e+02
700| 2.10e-02 3.21e-05 2.10e-02 2.01e+02
800| 1.78e-02 2.45e-05 1.78e-02 2.25e+02
900| 1.52e-02 1.93e-05 1.52e-02 2.47e+02
999| 1.33e-02 1.55e-05 1.33e-02 2.69e+02
----------------------------------------------------
Status: Reach maximum iterations
Solve time: 2.69e+02
Total number of iterations: 1000
Best total residual: 1.33e-02; reached at iteration 999
======================================================================
DRS finished.
----------------------------------------------------------------------
a2dr v0.2.3.post3 - Prox-Affine Distributed Convex Optimization Solver
(c) Anqi Fu, Junzi Zhang
Stanford University 2019
----------------------------------------------------------------------
### Preconditioning starts ... ###
### Preconditioning finished. ###
max_iter = 1000, t_init (after preconditioning) = 7.79
eps_abs = 1.00e-06, eps_rel = 1.00e-08, precond = True
ada_reg = True, anderson = True, m_accel = 10
lam_accel = 1.00e-08, aa_method = lstsq, D_safe = 1.00e+06
eps_safe = 1.00e-06, M_safe = 10
variables n = 13000, constraints m = 8000
nnz(A) = 1513000
Setup time: 3.04e-01
----------------------------------------------------
iter | total res | primal res | dual res | time (s)
----------------------------------------------------
0| 2.88e+00 1.29e+00 2.58e+00 7.21e-01
100| 2.19e-03 6.24e-05 2.18e-03 2.49e+01
200| 3.91e-04 2.01e-07 3.91e-04 4.53e+01
300| 7.09e-05 4.95e-08 7.09e-05 7.17e+01
400| 8.73e-06 5.24e-09 8.73e-06 9.85e+01
500| 1.47e-06 9.47e-10 1.47e-06 1.24e+02
532| 9.27e-07 3.64e-10 9.27e-07 1.30e+02
----------------------------------------------------
Status: Solved
Solve time: 1.30e+02
Total number of iterations: 533
Best total residual: 9.27e-07; reached at iteration 532
======================================================================
A2DR finished.
###Markdown
Slowness and inaccuracy of scipy.optimize.nnls on NNLS
###Code
import numpy as np
from scipy import sparse
from scipy.optimize import nnls
import time
np.random.seed(1)
m, n = 10000, 8000
density = 0.001
X = sparse.random(m, n, density=density, data_rvs=np.random.randn)
y = np.random.randn(m)
# Solve with scipy.optimize.nnls
t0 = time.time()
res = nnls(sparse.csr_matrix(X).todense(),y)
t1 = time.time()
print('run time = {}'.format(t1-t0))
print('constraint violation = {}'.format(np.min(res[0])))
print('objective value = {}'.format(np.linalg.norm(X.dot(res[0]) - y)))
# Solve with OSQP
from cvxpy import *
beta = Variable(n)
obj = sum_squares(X*beta-y)
constr = [beta >= 0]
prob = Problem(Minimize(obj), constr)
prob.solve(solver='OSQP', verbose=True)
beta = beta.value
print('constraint violation = {}'.format(np.min(beta)))
print('objective value = {}'.format(np.linalg.norm(X.dot(beta)-y)))
# Solve with SCS
from cvxpy import *
beta = Variable(n)
obj = sum_squares(X*beta-y)
constr = [beta >= 0]
prob = Problem(Minimize(obj), constr)
prob.solve(solver='SCS', eps=1e-6, verbose=True)
beta = beta.value
print('constraint violation = {}'.format(np.min(beta)))
print('objective value = {}'.format(np.linalg.norm(X.dot(beta)-y)))
## NNLS (rerun)
tests = paper_plots.TestPaper()
tests.setUp()
tests.test_nnls('../figures/nnls_rerun.pdf')
###Output
----------------------------------------------------------------------
a2dr v0.2.3.post3 - Prox-Affine Distributed Convex Optimization Solver
(c) Anqi Fu, Junzi Zhang
Stanford University 2019
----------------------------------------------------------------------
### Preconditioning starts ... ###
### Preconditioning finished. ###
max_iter = 1000, t_init (after preconditioning) = 8.94
eps_abs = 1.00e-06, eps_rel = 1.00e-08, precond = True
ada_reg = True, anderson = False, m_accel = 10
lam_accel = 1.00e-08, aa_method = lstsq, D_safe = 1.00e+06
eps_safe = 1.00e-06, M_safe = 10
variables n = 16000, constraints m = 8000
nnz(A) = 16000
Setup time: 2.16e-02
----------------------------------------------------
iter | total res | primal res | dual res | time (s)
----------------------------------------------------
0| 1.23e+01 1.73e+00 1.22e+01 9.43e-02
100| 3.94e-02 7.55e-04 3.94e-02 6.19e+00
200| 2.26e-02 2.76e-04 2.26e-02 1.47e+01
300| 1.66e-02 1.53e-04 1.66e-02 2.23e+01
400| 1.33e-02 1.01e-04 1.33e-02 3.21e+01
500| 1.10e-02 7.52e-05 1.10e-02 3.98e+01
600| 9.21e-03 6.55e-05 9.21e-03 4.73e+01
700| 7.64e-03 5.37e-05 7.64e-03 5.75e+01
800| 6.36e-03 4.44e-05 6.36e-03 6.65e+01
900| 5.29e-03 3.69e-05 5.29e-03 7.40e+01
999| 4.41e-03 3.07e-05 4.41e-03 8.24e+01
----------------------------------------------------
Status: Reach maximum iterations
Solve time: 8.24e+01
Total number of iterations: 1000
Best total residual: 4.41e-03; reached at iteration 999
======================================================================
Finish DRS.
----------------------------------------------------------------------
a2dr v0.2.3.post3 - Prox-Affine Distributed Convex Optimization Solver
(c) Anqi Fu, Junzi Zhang
Stanford University 2019
----------------------------------------------------------------------
### Preconditioning starts ... ###
### Preconditioning finished. ###
max_iter = 1000, t_init (after preconditioning) = 8.94
eps_abs = 1.00e-06, eps_rel = 1.00e-08, precond = True
ada_reg = True, anderson = True, m_accel = 10
lam_accel = 1.00e-08, aa_method = lstsq, D_safe = 1.00e+06
eps_safe = 1.00e-06, M_safe = 10
variables n = 16000, constraints m = 8000
nnz(A) = 16000
Setup time: 2.20e-02
----------------------------------------------------
iter | total res | primal res | dual res | time (s)
----------------------------------------------------
0| 1.23e+01 1.73e+00 1.22e+01 1.01e-01
100| 5.14e-03 1.78e-04 5.14e-03 9.60e+00
200| 1.52e-04 4.60e-06 1.52e-04 1.92e+01
300| 5.75e-06 1.65e-07 5.75e-06 2.74e+01
378| 1.12e-06 4.44e-08 1.12e-06 3.35e+01
----------------------------------------------------
Status: Solved
Solve time: 3.35e+01
Total number of iterations: 379
Best total residual: 1.12e-06; reached at iteration 378
======================================================================
nonzero entries proportion = 0.498375
Finish A2DR.
|
onnxruntime/python/tools/transformers/notebooks/Inference_GPT2_with_OnnxRuntime_on_CPU.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Inference PyTorch GPT2 Model with ONNX Runtime on CPUIn this tutorial, you'll be introduced to how to load a GPT2 model from PyTorch, convert it to ONNX, and inference it using ONNX Runtime using IO Binding. Note that past state is used to get better performance. Prerequisites If you have Jupyter Notebook, you may directly run this notebook. We will use pip to install or upgrade [PyTorch](https://pytorch.org/), [OnnxRuntime](https://microsoft.github.io/onnxruntime/) and other required packages.Otherwise, you can setup a new environment. First, we install [AnaConda](https://www.anaconda.com/distribution/). Then open an AnaConda prompt window and run the following commands:```consoleconda create -n cpu_env python=3.8conda activate cpu_envconda install jupyterjupyter notebook```The last command will launch Jupyter Notebook and we can open this notebook in browser to continue.
###Code
# Install PyTorch 1.6.0 and OnnxRuntime 1.5.1 for CPU-only.
import sys
if sys.platform == 'darwin': # Mac
!{sys.executable} -m pip install --upgrade torch torchvision
else:
!{sys.executable} -m pip install --upgrade torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
!{sys.executable} -m pip install onnxruntime==1.5.1
# Install other packages used in this notebook.
!{sys.executable} -m pip install transformers==3.0.2
!{sys.executable} -m pip install onnx onnxconverter_common psutil pytz pandas py-cpuinfo py3nvml netron
import os
# Create a cache directory to store pretrained model.
cache_dir = os.path.join(".", "cache_models")
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
###Output
_____no_output_____
###Markdown
Convert GPT2 model from PyTorch to ONNX We have a script [convert_to_onnx.py](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/convert_to_onnx.py) that can help you convert GPT2 with past state to ONNX. The script accepts a pretrained model name or the path of a checkpoint directory as input, and converts the model to ONNX. It also verifies that the ONNX model generates the same output as the PyTorch model. The usage is like ```python -m onnxruntime.transformers.convert_to_onnx -m model_name_or_path --output gpt2.onnx -o -p fp32|fp16|int8``` The -p option can be used to choose the precision: fp32 (float32), fp16 (mixed precision) or int8 (quantization). The -o option will generate an optimized model, which is required for fp16 or int8. Here we use a pretrained model as an example:
###Code
from onnxruntime.transformers.gpt2_helper import Gpt2Helper, MyGPT2LMHeadModel
from transformers import AutoConfig
import torch
model_name_or_path = "gpt2"
config = AutoConfig.from_pretrained(model_name_or_path, cache_dir=cache_dir)
model = MyGPT2LMHeadModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
device = torch.device("cpu")
model.eval().to(device)
print(model.config)
num_attention_heads = model.config.n_head
hidden_size = model.config.n_embd
num_layer = model.config.n_layer
onnx_model_path = "gpt2.onnx"
Gpt2Helper.export_onnx(model, device, onnx_model_path) # add parameter use_external_data_format=True when model size > 2 GB
###Output
d:\git\transformers\src\transformers\modeling_gpt2.py:714: FutureWarning: The `past` argument is deprecated and will be removed in a future version, use `past_key_values` instead.
FutureWarning,
d:\git\transformers\src\transformers\modeling_gpt2.py:560: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert batch_size > 0, "batch_size has to be defined and > 0"
d:\git\transformers\src\transformers\modeling_gpt2.py:166: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
w = w / (float(v.size(-1)) ** 0.5)
d:\git\transformers\src\transformers\modeling_gpt2.py:171: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
mask = self.bias[:, :, ns - nd : ns, :ns]
###Markdown
PyTorch Inference using Huggingface Transformers In the following, we will use an example input to get the output from PyTorch for comparison purposes. For the first inference there is no past state, so we prepare an empty past state as input.
###Code
from transformers import AutoTokenizer
EXAMPLE_Text = ['best hotel in bay area', 'here is an example of gpt2 model']
def get_tokenizer(model_name_or_path, cache_dir):
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, cache_dir=cache_dir)
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token
    # tokenizer.add_special_tokens({'pad_token': '[PAD]'})
return tokenizer
def get_example_inputs(prompt_text=EXAMPLE_Text):
tokenizer = get_tokenizer(model_name_or_path, cache_dir)
encodings_dict = tokenizer.batch_encode_plus(prompt_text, padding=True)
input_ids = torch.tensor(encodings_dict['input_ids'], dtype=torch.int64)
attention_mask = torch.tensor(encodings_dict['attention_mask'], dtype=torch.float32)
position_ids = (attention_mask.long().cumsum(-1) - 1)
position_ids.masked_fill_(position_ids < 0, 0)
#Empty Past State for generating first word
empty_past = []
batch_size = input_ids.size(0)
sequence_length = input_ids.size(1)
past_shape = [2, batch_size, num_attention_heads, 0, hidden_size // num_attention_heads]
for i in range(num_layer):
empty_past.append(torch.empty(past_shape).type(torch.float32).to(device))
return input_ids, attention_mask, position_ids, empty_past
from transformers import GPT2LMHeadModel
torch_model = GPT2LMHeadModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
device = torch.device("cpu")
torch_model.eval().to(device)
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
print("input_ids", input_ids)
print("attention_mask", attention_mask)
print("position_ids", position_ids)
with torch.no_grad():
torch_output = torch_model(input_ids, past=empty_past, attention_mask=attention_mask, position_ids=position_ids)
###Output
_____no_output_____
###Markdown
ONNX Runtime Inference We can use ONNX Runtime to run inference. The input is a dictionary mapping input names to numpy arrays, and the output is a list of numpy arrays. Note that both inputs and outputs are on CPU. When you run inference on GPU, this involves data copies between CPU and GPU for the inputs and outputs. Let's create an ONNX Runtime inference session for the exported ONNX model, and look at the output.
###Code
import onnxruntime
import numpy
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
onnx_model_path = "gpt2.onnx"
session = onnxruntime.InferenceSession(onnx_model_path)
ort_inputs = {'input_ids': numpy.ascontiguousarray(input_ids.cpu().numpy()),
'attention_mask' : numpy.ascontiguousarray(attention_mask.cpu().numpy()),
'position_ids': numpy.ascontiguousarray(position_ids.cpu().numpy())
}
for i, past_i in enumerate(empty_past):
ort_inputs[f'past_{i}'] = numpy.ascontiguousarray(past_i.cpu().numpy())
ort_outputs = session.run(None, ort_inputs)
###Output
_____no_output_____
###Markdown
We can compare the outputs from PyTorch and ONNX Runtime. The logits are very close (the max difference is within 1E-4).
###Code
logits_masked_diff = (torch_output[0] - ort_outputs[0]) * attention_mask.unsqueeze(2)
max_logits_diff = logits_masked_diff.abs().max()
print("max logits diff (ignored padding)", max_logits_diff)
###Output
max logits diff (ignored padding) tensor(6.8665e-05)
###Markdown
ONNX Runtime Inference with IO Binding To avoid data copies for inputs and outputs, ONNX Runtime also supports IO Binding. The user can provide buffers for the inputs and outputs. For GPU inference, the buffers can live on the GPU to reduce memory copies between CPU and GPU. This is helpful for high-performance inference on GPU. For GPT-2, IO Binding may help performance when the batch size or the (past) sequence length is large.
###Code
def inference_with_io_binding(session, config, input_ids, position_ids, attention_mask, past):
output_shapes = Gpt2Helper.get_output_shapes(batch_size=input_ids.size(0),
past_sequence_length=past[0].size(3),
sequence_length=input_ids.size(1),
config=config)
output_buffers = Gpt2Helper.get_output_buffers(output_shapes, device)
io_binding = Gpt2Helper.prepare_io_binding(session, input_ids, position_ids, attention_mask, past,
output_buffers, output_shapes)
session.run_with_iobinding(io_binding)
outputs = Gpt2Helper.get_outputs_from_io_binding_buffer(session, output_buffers, output_shapes,
return_numpy=False)
return outputs
###Output
_____no_output_____
###Markdown
We can see that the result is exactly the same with and without IO Binding:
###Code
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
outputs = inference_with_io_binding(session, config, input_ids, position_ids, attention_mask, empty_past)
for i in range(len(outputs)):
assert torch.eq(outputs[i], torch.from_numpy(ort_outputs[i])).all()
print("IO Binding result is good")
###Output
IO Binding result is good
###Markdown
Batch Text Generation Here is an example of text generation using ONNX Runtime or PyTorch. For ONNX Runtime, IO Binding is used for better performance.
###Code
def test_generation(tokenizer, input_text, ort_session=None, num_tokens_to_produce = 30):
use_onnxruntime = (ort_session is not None)
print("Text generation using", "OnnxRuntime" if use_onnxruntime else "PyTorch", "...")
eos_token_id = tokenizer.eos_token_id
input_ids, attention_mask, position_ids, past = get_example_inputs(input_text)
batch_size = input_ids.size(0)
has_eos = torch.zeros(batch_size, dtype=torch.bool)
all_token_ids = input_ids.clone()
for step in range(num_tokens_to_produce):
if ort_session is not None:
outputs = inference_with_io_binding(ort_session, config, input_ids, position_ids, attention_mask, past)
else:
outputs = torch_model(input_ids, attention_mask=attention_mask, position_ids=position_ids, past=past)
next_token_logits = outputs[0][:, -1, :]
# Greedy approach is used here. You can easily extend it to use beam search and sampling to pick next tokens.
next_tokens = torch.argmax(next_token_logits, dim=-1)
has_eos = has_eos | (next_tokens == eos_token_id)
tokens_to_add = next_tokens.masked_fill(has_eos, eos_token_id)
all_token_ids = torch.cat([all_token_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
# Update input_ids, attention_mask, position_ids and past
input_ids = tokens_to_add.clone().detach().reshape([batch_size, 1]).to(device)
position_ids = (position_ids[:,-1] + 1).reshape(batch_size,1)
attention_mask = torch.cat([attention_mask, torch.ones([batch_size, 1]).type_as(attention_mask)], 1).to(device)
past = []
if not use_onnxruntime:
past = list(outputs[1]) # past in torch output is tuple
else:
for i in range(num_layer):
past_i = torch.from_numpy(outputs[i + 1]) if isinstance(outputs[i + 1], numpy.ndarray) else outputs[i + 1].clone().detach()
past.append(past_i.to(device))
if torch.all(has_eos):
break
for i, output in enumerate(all_token_ids):
print("------------")
print(tokenizer.decode(output, skip_special_tokens=True))
tokenizer = get_tokenizer(model_name_or_path, cache_dir)
input_text = EXAMPLE_Text
test_generation(tokenizer, input_text, ort_session=session)
###Output
Text generation using OnnxRuntime ...
------------
best hotel in bay area.
The hotel is located in the historic Bayview neighborhood of San Francisco.
The hotel is open daily from 9 a.m.
------------
here is an example of gpt2 model.
The gpt2 model is a simple, but powerful, way to generate a GPT2-like data structure. It is a
###Markdown
Next, we run PyTorch again and see that the result is exactly the same.
###Code
test_generation(tokenizer, input_text)
###Output
Text generation using PyTorch ...
------------
best hotel in bay area.
The hotel is located in the historic Bayview neighborhood of San Francisco.
The hotel is open daily from 9 a.m.
------------
here is an example of gpt2 model.
The gpt2 model is a simple, but powerful, way to generate a GPT2-like data structure. It is a
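###Markdown
The generation loop above always picks the highest-probability token (greedy decoding), as noted in the code comment. As a minimal sketch of the extension mentioned there (not part of the original script; `top_k` and `temperature` are illustrative values), the argmax line can be replaced with top-k sampling:
###Code
def sample_next_tokens(next_token_logits, top_k=50, temperature=1.0):
    # Keep only the top_k logits per sequence, then sample from the renormalized distribution.
    logits = next_token_logits / temperature
    top_logits, top_indices = torch.topk(logits, k=top_k, dim=-1)
    probs = torch.softmax(top_logits, dim=-1)
    sampled = torch.multinomial(probs, num_samples=1)   # index into the top_k candidates
    return top_indices.gather(-1, sampled).squeeze(-1)  # map back to vocabulary token ids

# In test_generation, replace:
#   next_tokens = torch.argmax(next_token_logits, dim=-1)
# with:
#   next_tokens = sample_next_tokens(next_token_logits)
###Output
_____no_output_____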
###Markdown
Int8 Quantization Next, we apply dynamic quantization to the model. We optimize the model before quantization to get better performance. Note that the text generation results from the fp32 and int8 models can be quite different, so you should evaluate an accuracy metric for your application on both models. If the int8 model's quality is acceptable, you will find that it is faster than the fp32 model at inference. You can also leverage [quantization aware training (QAT)](https://pytorch.org/blog/introduction-to-quantization-on-pytorch/) to improve accuracy if needed.
###Code
from onnxruntime.transformers.quantize_helper import QuantizeHelper
optimized_fp32_model_path = "gpt2_fp32.onnx"
quantized_int8_model_path = "gpt2_int8.onnx"
Gpt2Helper.optimize_onnx("gpt2.onnx", optimized_fp32_model_path, False, model.config.num_attention_heads, model.config.hidden_size)
QuantizeHelper.quantize_onnx_model(optimized_fp32_model_path, quantized_int8_model_path)
session_int8 = onnxruntime.InferenceSession(quantized_int8_model_path)
input_text = ['bert model optimization']
test_generation(tokenizer, input_text, ort_session=session_int8, num_tokens_to_produce=14)
###Output
Text generation using OnnxRuntime ...
------------
bert model optimization, and the NLP model is a generalizable and robust model.
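###Markdown
As a minimal sketch of the accuracy check suggested above (reusing the fp32 `session`, the `session_int8` session, and the `get_example_inputs` helper defined earlier; the prompt is illustrative), we can compare the logits of the fp32 and int8 models on the same input:
###Code
input_ids, attention_mask, position_ids, empty_past = get_example_inputs(['bert model optimization'])
cmp_inputs = {'input_ids': numpy.ascontiguousarray(input_ids.cpu().numpy()),
              'attention_mask': numpy.ascontiguousarray(attention_mask.cpu().numpy()),
              'position_ids': numpy.ascontiguousarray(position_ids.cpu().numpy())}
for i, past_i in enumerate(empty_past):
    cmp_inputs[f'past_{i}'] = numpy.ascontiguousarray(past_i.cpu().numpy())
# Run both ONNX models on the same feed and compare the logits element-wise.
fp32_logits = session.run(None, cmp_inputs)[0]
int8_logits = session_int8.run(None, cmp_inputs)[0]
print("max |fp32 - int8| logits diff:", numpy.abs(fp32_logits - int8_logits).max())
###Output
_____no_output_____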
###Markdown
Benchmark The benchmark_gpt2.py tool can be used to measure the performance of GPT-2 with PyTorch and with ONNX Runtime, both without and with IO Binding.
###Code
!{sys.executable} -m onnxruntime.transformers.benchmark_gpt2 -m gpt2 -o
!{sys.executable} -m onnxruntime.transformers.benchmark_gpt2 -m gpt2 -o --precision int8
###Output
ATen/Parallel:
at::get_num_threads() : 12
at::get_num_interop_threads() : 6
OpenMP 2019
omp_get_max_threads() : 12
Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191125 for Intel(R) 64 architecture applications
mkl_get_max_threads() : 12
Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0)
std::thread::hardware_concurrency() : 12
Environment variables:
OMP_NUM_THREADS : [not set]
MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP
Warning: onnxruntime.quantization.quantize is deprecated.
Please use quantize_static for static quantization, quantize_dynamic for dynamic quantization.
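###Markdown
The benchmark tool above is the most reliable way to measure latency. As a rough in-notebook sketch (reusing `session`, `session_int8`, and the `cmp_inputs` feed built in the sketch above; the run count is illustrative), the two sessions can also be timed directly. Note that `session` wraps the unoptimized fp32 graph, so this is only an approximate comparison.
###Code
import time

def average_latency_ms(ort_session, feeds, runs=20):
    ort_session.run(None, feeds)  # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        ort_session.run(None, feeds)
    return (time.perf_counter() - start) * 1000 / runs

print("fp32 average latency (ms):", average_latency_ms(session, cmp_inputs))
print("int8 average latency (ms):", average_latency_ms(session_int8, cmp_inputs))
###Output
_____no_output_____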
###Markdown
We can see that the quantized model has a significant speedup (close to 2x). Test Environment The following shows the hardware and software configuration of the test machine:
###Code
!{sys.executable} -m onnxruntime.transformers.machine_info --silent
###Output
{
"gpu": {
"driver_version": "451.67",
"devices": [
{
"memory_total": 8589934592,
"memory_available": 8480882688,
"name": "GeForce GTX 1070"
}
]
},
"cpu": {
"brand": "Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz",
"cores": 6,
"logical_cores": 12,
"hz": "3.1920 GHz",
"l2_cache": "1536 KB",
"flags": [
"3dnow",
"3dnowprefetch",
"abm",
"acpi",
"adx",
"aes",
"apic",
"avx",
"avx2",
"bmi1",
"bmi2",
"clflush",
"clflushopt",
"cmov",
"cx16",
"cx8",
"de",
"dtes64",
"dts",
"erms",
"est",
"f16c",
"fma",
"fpu",
"fxsr",
"hle",
"ht",
"hypervisor",
"ia64",
"invpcid",
"lahf_lm",
"mca",
"mce",
"mmx",
"movbe",
"mpx",
"msr",
"mtrr",
"osxsave",
"pae",
"pat",
"pbe",
"pcid",
"pclmulqdq",
"pdcm",
"pge",
"pni",
"popcnt",
"pse",
"pse36",
"rdrnd",
"rdseed",
"rtm",
"sep",
"serial",
"sgx",
"sgx_lc",
"smap",
"smep",
"ss",
"sse",
"sse2",
"sse4_1",
"sse4_2",
"ssse3",
"tm",
"tm2",
"tsc",
"vme",
"x2apic",
"xsave",
"xtpr"
],
"processor": "Intel64 Family 6 Model 158 Stepping 10, GenuineIntel"
},
"memory": {
"total": 16971276288,
"available": 6431543296
},
"python": "3.6.10.final.0 (64 bit)",
"os": "Windows-10-10.0.19041-SP0",
"onnxruntime": {
"version": "1.5.1",
"support_gpu": false
},
"onnxruntime_tools": {
"version": "1.4.4"
},
"pytorch": {
"version": "1.6.0+cpu",
"support_gpu": false,
"cuda": null
},
"tensorflow": {
"version": "2.3.0",
"git_version": "v2.3.0-rc2-23-gb36436b087",
"support_gpu": true
}
}
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Inference PyTorch GPT2 Model with ONNX Runtime on CPU In this tutorial, you'll learn how to load a GPT2 model from PyTorch, convert it to ONNX, and run inference on it with ONNX Runtime using IO Binding. Note that past state is used to get better performance. Prerequisites If you have Jupyter Notebook, you may directly run this notebook. We will use pip to install or upgrade [PyTorch](https://pytorch.org/), [OnnxRuntime](https://microsoft.github.io/onnxruntime/) and other required packages. Otherwise, you can set up a new environment. First, install [AnaConda](https://www.anaconda.com/distribution/). Then open an AnaConda prompt window and run the following commands:
```console
conda create -n cpu_env python=3.8
conda activate cpu_env
conda install jupyter
jupyter notebook
```
The last command will launch Jupyter Notebook and we can open this notebook in the browser to continue.
###Code
# Install PyTorch 1.6.0 and OnnxRuntime 1.4.0 for CPU-only.
import sys
if sys.platform == 'darwin': # Mac
!{sys.executable} -m pip install --upgrade torch torchvision
else:
!{sys.executable} -m pip install --upgrade torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
!{sys.executable} -m pip install --upgrade onnxruntime==1.4.0
!{sys.executable} -m pip install --upgrade onnxruntime-tools
# Install other packages used in this notebook.
!{sys.executable} -m pip install transformers==3.0.2
!{sys.executable} -m pip install onnx psutil pytz pandas py-cpuinfo py3nvml netron
import os
# Create a cache directory to store pretrained model.
cache_dir = os.path.join(".", "cache_models")
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
###Output
_____no_output_____
###Markdown
Convert GPT2 model from PyTorch to ONNX We have a script [convert_to_onnx.py](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/convert_to_onnx.py) that can help you convert GPT2 with past state to ONNX. The script accepts a pretrained model name or the path of a checkpoint directory as input, and converts the model to ONNX. It also verifies that the ONNX model produces the same outputs as the PyTorch model. The usage is like
```console
python -m onnxruntime_tools.transformers.convert_to_onnx -m model_name_or_path --output gpt2.onnx -o -p fp32|fp16|int8
```
The -p option chooses the precision: fp32 (float32), fp16 (mixed precision) or int8 (quantization). The -o option generates an optimized model, which is required for fp16 or int8. Here we use a pretrained model as an example:
###Code
from onnxruntime_tools.transformers.gpt2_helper import Gpt2Helper, MyGPT2LMHeadModel
from transformers import AutoConfig
import torch
model_name_or_path = "gpt2"
config = AutoConfig.from_pretrained(model_name_or_path, cache_dir=cache_dir)
model = MyGPT2LMHeadModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
device = torch.device("cpu")
model.eval().to(device)
print(model.config)
num_attention_heads = model.config.n_head
hidden_size = model.config.n_embd
num_layer = model.config.n_layer
onnx_model_path = "gpt2.onnx"
Gpt2Helper.export_onnx(model, device, onnx_model_path) # add parameter use_external_data_format=True when model size > 2 GB
###Output
D:\Anaconda3\envs\cpu_env\lib\site-packages\transformers\modeling_gpt2.py:445: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert batch_size > 0, "batch_size has to be defined and > 0"
D:\Anaconda3\envs\cpu_env\lib\site-packages\transformers\modeling_gpt2.py:149: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
w = w / (float(v.size(-1)) ** 0.5)
D:\Anaconda3\envs\cpu_env\lib\site-packages\transformers\modeling_gpt2.py:151: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
mask = self.bias[:, :, ns - nd : ns, :ns]
###Markdown
PyTorch Inference using Huggingface Transformers In the following, we will use an example input to get the output from PyTorch for comparison purposes. For the first inference there is no past state, so we prepare an empty past state as input.
###Code
from transformers import AutoTokenizer
EXAMPLE_Text = ['ONNX runtime is', 'here is an example of gpt2 model']
def get_tokenizer(model_name_or_path, cache_dir):
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, cache_dir=cache_dir)
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token
    #tokenizer.add_special_tokens({'pad_token': '[PAD]'})
return tokenizer
def get_example_inputs(prompt_text=EXAMPLE_Text, verbose=False):
tokenizer = get_tokenizer(model_name_or_path, cache_dir)
encodings_dict = tokenizer.batch_encode_plus(prompt_text, pad_to_max_length=True)
input_ids = torch.tensor(encodings_dict['input_ids'], dtype=torch.int64)
attention_mask = torch.tensor(encodings_dict['attention_mask'], dtype=torch.float32)
position_ids = (attention_mask.long().cumsum(-1) - 1)
position_ids.masked_fill_(position_ids < 0, 0)
#Empty Past State for generating first word
empty_past = []
batch_size = input_ids.size(0)
sequence_length = input_ids.size(1)
past_shape = [2, batch_size, num_attention_heads, 0, hidden_size // num_attention_heads]
for i in range(num_layer):
empty_past.append(torch.empty(past_shape).type(torch.float32).to(device))
if verbose:
print("input_ids", input_ids)
print("attention_mask", attention_mask)
print("position_ids", position_ids)
return input_ids, attention_mask, position_ids, empty_past
from transformers import GPT2LMHeadModel
torch_model = GPT2LMHeadModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
device = torch.device("cpu")
torch_model.eval().to(device)
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
with torch.no_grad():
torch_output = torch_model(input_ids, past=empty_past, attention_mask=attention_mask, position_ids=position_ids)
###Output
Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
###Markdown
ONNX Runtime Inference We can use ONNX Runtime to run inference. The inputs are a dictionary mapping input names to numpy arrays, and the output is a list of numpy arrays. Note that both inputs and outputs are on CPU; when you run inference on GPU, data is copied between CPU and GPU for the inputs and outputs. Let's create an ONNX Runtime inference session for the exported ONNX model and look at the output.
###Code
import onnxruntime
import numpy
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
onnx_model_path = "gpt2.onnx"
session = onnxruntime.InferenceSession(onnx_model_path)
ort_inputs = {'input_ids': numpy.ascontiguousarray(input_ids.cpu().numpy()),
'attention_mask' : numpy.ascontiguousarray(attention_mask.cpu().numpy()),
'position_ids': numpy.ascontiguousarray(position_ids.cpu().numpy())
}
for i, past_i in enumerate(empty_past):
ort_inputs[f'past_{i}'] = numpy.ascontiguousarray(past_i.cpu().numpy())
ort_outputs = session.run(None, ort_inputs)
###Output
_____no_output_____
###Markdown
We can compare the outputs from PyTorch and ONNX Runtime. Logits are very close (max difference is 1E-4).
###Code
logits_masked_diff = (torch_output[0] - ort_outputs[0]) * attention_mask.unsqueeze(2)
max_logits_diff = logits_masked_diff.abs().max()
print("max logits diff (ignored padding)", max_logits_diff)
#past_diff = [(torch_output[1][i] - ort_outputs[i + 1]).abs().max() for i in range(num_layer)]
#print("past state diff for each layer", past_diff)
###Output
max logits diff (ignored padding) tensor(0.0001)
###Markdown
ONNX Runtime Inference with IO Binding To avoid data copies for inputs and outputs, ONNX Runtime also supports IO Binding. Users can provide their own buffers for inputs and outputs. For GPU inference, those buffers can live on the GPU to reduce memory copies between CPU and GPU, which helps high-performance GPU inference. For GPT-2, IO Binding might improve performance when the batch size or (past) sequence length is large.
###Code
def inference_with_io_binding(session, config, input_ids, position_ids, attention_mask, past):
output_shapes = Gpt2Helper.get_output_shapes(batch_size=input_ids.size(0),
past_sequence_length=past[0].size(3),
sequence_length=input_ids.size(1),
config=config)
output_buffers = Gpt2Helper.get_output_buffers(output_shapes, device)
io_binding = Gpt2Helper.prepare_io_binding(session, input_ids, position_ids, attention_mask, past,
output_buffers, output_shapes)
session.run_with_iobinding(io_binding)
outputs = Gpt2Helper.get_outputs_from_io_binding_buffer(session, output_buffers, output_shapes,
return_numpy=False)
return outputs
###Output
_____no_output_____
###Markdown
We can see that the result is exactly the same with and without IO Binding:
###Code
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
outputs = inference_with_io_binding(session, config, input_ids, position_ids, attention_mask, empty_past)
for i in range(len(outputs)):
assert torch.eq(outputs[i], torch.from_numpy(ort_outputs[i])).all()
print("IO Binding result is good")
###Output
IO Binding result is good
###Markdown
Batch Text Generation Here is an example for text generation using ONNX Runtime or PyTorch. For ONNX Runtime, IO Binding is used for better performance.
###Code
def test_generation(tokenizer, input_text, use_onnxruntime=True):
print("Text generation using", "OnnxRuntime" if use_onnxruntime else "PyTorch", "...")
eos_token_id = tokenizer.eos_token_id
input_ids, attention_mask, position_ids, past = get_example_inputs(input_text)
batch_size = input_ids.size(0)
has_eos = torch.zeros(batch_size, dtype=torch.bool)
all_token_ids = input_ids.clone()
num_tokens_to_produce = 30
for step in range(num_tokens_to_produce):
if use_onnxruntime:
outputs = inference_with_io_binding(session, config, input_ids, position_ids, attention_mask, past)
else:
outputs = torch_model(input_ids, attention_mask=attention_mask, position_ids=position_ids, past=past)
next_token_logits = outputs[0][:, -1, :]
# Greedy approach is used here. You can easily extend it to use beam search and sampling to pick next tokens.
next_tokens = torch.argmax(next_token_logits, dim=-1)
has_eos = has_eos | (next_tokens == eos_token_id)
tokens_to_add = next_tokens.masked_fill(has_eos, eos_token_id)
all_token_ids = torch.cat([all_token_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
# Update input_ids, attention_mask, position_ids and past
input_ids = tokens_to_add.clone().detach().reshape([batch_size, 1]).to(device)
position_ids = (position_ids[:,-1] + 1).reshape(batch_size,1)
attention_mask = torch.cat([attention_mask, torch.ones([batch_size, 1]).type_as(attention_mask)], 1).to(device)
past = []
if not use_onnxruntime:
past = list(outputs[1]) # past in torch output is tuple
else:
for i in range(num_layer):
past_i = torch.from_numpy(outputs[i + 1]) if isinstance(outputs[i + 1], numpy.ndarray) else outputs[i + 1].clone().detach()
past.append(past_i.to(device))
if torch.all(has_eos):
break
for i, output in enumerate(all_token_ids):
print(f"Example {i}:", tokenizer.decode(output, skip_special_tokens=True))
tokenizer = get_tokenizer(model_name_or_path, cache_dir)
input_text = EXAMPLE_Text
test_generation(tokenizer, input_text, use_onnxruntime=True)
###Output
Text generation using OnnxRuntime ...
Example 0: ONNX runtime is not supported.
The following is a list of the supported languages:
English
French
German
Italian
Japanese
Example 1: here is an example of gpt2 model.
The gpt2 model is a simple, but powerful, way to generate a GPT2-like data structure. It is a
###Markdown
Next, we run PyTorch again and see that the result is exactly the same.
###Code
test_generation(tokenizer, input_text, use_onnxruntime=False)
###Output
Text generation using PyTorch ...
Example 0: ONNX runtime is not supported.
The following is a list of the supported languages:
English
French
German
Italian
Japanese
Example 1: here is an example of gpt2 model.
The gpt2 model is a simple, but powerful, way to generate a GPT2-like data structure. It is a
###Markdown
Benchmark The benchmark_gpt2.py tool can be used to measure the performance of GPT-2 with PyTorch and with ONNX Runtime, both without and with IO Binding.
###Code
!{sys.executable} -m onnxruntime_tools.transformers.benchmark_gpt2 -m gpt2 -o
###Output
ATen/Parallel:
at::get_num_threads() : 12
at::get_num_interop_threads() : 6
OpenMP 2019
omp_get_max_threads() : 12
Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191125 for Intel(R) 64 architecture applications
mkl_get_max_threads() : 12
Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0)
std::thread::hardware_concurrency() : 12
Environment variables:
OMP_NUM_THREADS : [not set]
MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP
###Markdown
Test Environment The following shows the hardware and software configuration of the test machine:
###Code
!{sys.executable} -m onnxruntime_tools.transformers.machine_info --silent
###Output
{
"gpu": {
"driver_version": "442.23",
"devices": [
{
"memory_total": 8589934592,
"memory_available": 5534052352,
"name": "GeForce GTX 1070"
}
]
},
"cpu": {
"brand": "Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz",
"cores": 6,
"logical_cores": 12,
"hz": "3.1920 GHz",
"l2_cache": "1536 KB",
"flags": [
"3dnow",
"3dnowprefetch",
"abm",
"acpi",
"adx",
"aes",
"apic",
"avx",
"avx2",
"bmi1",
"bmi2",
"clflush",
"clflushopt",
"cmov",
"cx16",
"cx8",
"de",
"dtes64",
"dts",
"erms",
"est",
"f16c",
"fma",
"fpu",
"fxsr",
"hle",
"ht",
"hypervisor",
"ia64",
"invpcid",
"lahf_lm",
"mca",
"mce",
"mmx",
"movbe",
"mpx",
"msr",
"mtrr",
"osxsave",
"pae",
"pat",
"pbe",
"pcid",
"pclmulqdq",
"pdcm",
"pge",
"pni",
"popcnt",
"pse",
"pse36",
"rdrnd",
"rdseed",
"rtm",
"sep",
"serial",
"sgx",
"sgx_lc",
"smap",
"smep",
"ss",
"sse",
"sse2",
"sse4_1",
"sse4_2",
"ssse3",
"tm",
"tm2",
"tsc",
"vme",
"x2apic",
"xsave",
"xtpr"
],
"processor": "Intel64 Family 6 Model 158 Stepping 10, GenuineIntel"
},
"memory": {
"total": 16971276288,
"available": 3952107520
},
"python": "3.6.10.final.0 (64 bit)",
"os": "Windows-10-10.0.19041-SP0",
"onnxruntime": {
"version": "1.4.0",
"support_gpu": false
},
"onnxruntime_tools": {
"version": "1.4.2"
},
"pytorch": {
"version": "1.6.0+cpu",
"support_gpu": false,
"cuda": null
},
"tensorflow": {
"version": "2.3.0",
"git_version": "v2.3.0-rc2-23-gb36436b087",
"support_gpu": true
}
}
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Inference PyTorch GPT2 Model with ONNX Runtime on CPU In this tutorial, you'll learn how to load a GPT2 model from PyTorch, convert it to ONNX, and run inference on it with ONNX Runtime using IO Binding. Note that past state is used to get better performance. Prerequisites If you have Jupyter Notebook, you may directly run this notebook. We will use pip to install or upgrade [PyTorch](https://pytorch.org/), [OnnxRuntime](https://microsoft.github.io/onnxruntime/) and other required packages. Otherwise, you can set up a new environment. First, install [AnaConda](https://www.anaconda.com/distribution/). Then open an AnaConda prompt window and run the following commands:
```console
conda create -n cpu_env python=3.8
conda activate cpu_env
pip install jupyterlab
conda install ipykernel
ipython kernel install --user --name cpu_env
jupyter-lab
```
The last command will launch JupyterLab; then we can open this notebook and select the cpu_env kernel to run it.
###Code
# Install CPU-only PyTorch 1.10.1 and OnnxRuntime 1.12.0 packages, and other packages used in this notebook.
import sys
if sys.platform == "darwin": # Mac
!{sys.executable} -m pip install torch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 >pip_output.txt
else:
!{sys.executable} -m pip install torch==1.10.1+cpu torchvision==0.11.2+cpu torchaudio==0.10.1 -f https://download.pytorch.org/whl/torch_stable.html --no-warn-script-location >pip_output.txt
!{sys.executable} -m pip install flatbuffers >>pip_output.txt
# This notebook requires onnxruntime 1.12.0 or later, use ort-nightly package until 1.12 is released.
# Please do not install both onnxruntime and ort-nightly at the same time.
#!{sys.executable} -m pip install onnxruntime==1.12.0
!{sys.executable} -m pip install -i https://test.pypi.org/simple/ ort-nightly >>pip_output.txt
# Install other packages used in this notebook.
!{sys.executable} -m pip install transformers==4.18.0 onnx==1.11.0 psutil pytz pandas py-cpuinfo py3nvml netron coloredlogs ipywidgets --no-warn-script-location >>pip_output.txt
import os
# Create a cache directory to store pretrained model.
cache_dir = os.path.join(".", "cache_models")
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
###Output
_____no_output_____
###Markdown
Convert GPT2 model from PyTorch to ONNX We have a script [convert_to_onnx.py](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/models/gpt2/convert_to_onnx.py) that can help you convert GPT2 with past state to ONNX. The script accepts a pretrained model name or the path of a checkpoint directory as input, and converts the model to ONNX. It also verifies that the ONNX model produces the same outputs as the PyTorch model. The usage is like
```console
python -m onnxruntime.transformers.models.gpt2.convert_to_onnx -m model_name_or_path --output gpt2.onnx -o -p fp32
python -m onnxruntime.transformers.models.gpt2.convert_to_onnx -m model_name_or_path --output gpt2.onnx -o -p fp16 --auto_mixed_precision
```
The -p option chooses the precision: fp32 (float32), fp16 (mixed precision) or int8 (quantization). The -o option generates an optimized model, which is required for fp16 or int8. A mixed precision model produced with --auto_mixed_precision is recommended for GPU inference. For CPU inference, the fp32 model is recommended since the int8 model might have a large accuracy loss. Here we use a pretrained model as an example:
###Code
from packaging import version
from onnxruntime import __version__ as ort_version
if version.parse(ort_version) >= version.parse("1.12.0"):
from onnxruntime.transformers.models.gpt2.gpt2_helper import Gpt2Helper, MyGPT2LMHeadModel
else:
from onnxruntime.transformers.gpt2_helper import Gpt2Helper, MyGPT2LMHeadModel
raise RuntimeError("Please install onnxruntime 1.12.0 or later to run this notebook")
from transformers import AutoConfig
import torch
model_name_or_path = "gpt2"
config = AutoConfig.from_pretrained(model_name_or_path, cache_dir=cache_dir)
model = MyGPT2LMHeadModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
device = torch.device("cpu")
model.eval().to(device)
print(model.config)
num_attention_heads = model.config.n_head
hidden_size = model.config.n_embd
num_layer = model.config.n_layer
onnx_model_path = "gpt2.onnx"
!{sys.executable} -m onnxruntime.transformers.models.gpt2.convert_to_onnx -m $model_name_or_path --output $onnx_model_path -o -p fp32 --use_int32_inputs -t 10>export_output.txt 2>&1
###Output
_____no_output_____
###Markdown
PyTorch Inference using Huggingface Transformers In the following, we will use an example input to get the output from PyTorch for comparison purposes. For the first inference there is no past state, so we prepare an empty past state as input.
###Code
from transformers import AutoTokenizer
EXAMPLE_Text = ["best hotel in bay area", "here is an example of gpt2 model"]
def get_tokenizer(model_name_or_path, cache_dir):
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, cache_dir=cache_dir)
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token
return tokenizer
def get_example_inputs(prompt_text=EXAMPLE_Text):
tokenizer = get_tokenizer(model_name_or_path, cache_dir)
encodings_dict = tokenizer.batch_encode_plus(prompt_text, padding=True)
input_ids = torch.tensor(encodings_dict["input_ids"], dtype=torch.int32)
attention_mask = torch.tensor(encodings_dict["attention_mask"], dtype=torch.int32)
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(position_ids < 0, 0)
position_ids = position_ids.to(torch.int32)
# Empty Past State for generating first word
empty_past = []
batch_size = input_ids.size(0)
sequence_length = input_ids.size(1)
past_shape = [2, batch_size, num_attention_heads, 0, hidden_size // num_attention_heads]
for i in range(num_layer):
empty_past.append(torch.empty(past_shape).type(torch.float32).to(device))
return input_ids, attention_mask, position_ids, empty_past
from transformers import GPT2LMHeadModel
torch_model = GPT2LMHeadModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
device = torch.device("cpu")
torch_model.eval().to(device)
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
print("input_ids", input_ids)
print("attention_mask", attention_mask)
print("position_ids", position_ids)
with torch.no_grad():
torch_output = torch_model(
input_ids, past_key_values=empty_past, attention_mask=attention_mask, position_ids=position_ids
)
###Output
_____no_output_____
###Markdown
ONNX Runtime Inference We can use ONNX Runtime to run inference. The inputs are a dictionary mapping input names to numpy arrays, and the output is a list of numpy arrays. Note that both inputs and outputs are on CPU; when you run inference on GPU, data is copied between CPU and GPU for the inputs and outputs. Let's create an ONNX Runtime inference session for the exported ONNX model and look at the output.
###Code
import onnxruntime
import numpy
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
session = onnxruntime.InferenceSession(onnx_model_path)
ort_inputs = {
"input_ids": numpy.ascontiguousarray(input_ids.cpu().numpy()),
"attention_mask": numpy.ascontiguousarray(attention_mask.cpu().numpy()),
"position_ids": numpy.ascontiguousarray(position_ids.cpu().numpy()),
}
for i, past_i in enumerate(empty_past):
ort_inputs[f"past_{i}"] = numpy.ascontiguousarray(past_i.cpu().numpy())
ort_outputs = session.run(None, ort_inputs)
###Output
_____no_output_____
###Markdown
We can compare the outputs from PyTorch and ONNX Runtime. Logits are very close.
###Code
logits_masked_diff = (torch_output[0] - ort_outputs[0]) * attention_mask.unsqueeze(2)
max_logits_diff = logits_masked_diff.abs().max()
print("max logits diff (ignored padding)", max_logits_diff)
###Output
max logits diff (ignored padding) tensor(7.6294e-05)
###Markdown
ONNX Runtime Inference with IO Binding To avoid data copies for inputs and outputs, ONNX Runtime also supports IO Binding. Users can provide their own buffers for inputs and outputs. For GPU inference, those buffers can live on the GPU to reduce memory copies between CPU and GPU, which helps high-performance GPU inference. For GPT-2, IO Binding might improve performance when the batch size or (past) sequence length is large.
###Code
from typing import List, Dict
from onnxruntime import InferenceSession
from onnxruntime.transformers.io_binding_helper import TypeHelper
from onnxruntime.transformers.io_binding_helper import IOBindingHelper
def inference_with_io_binding(session, config, input_ids, position_ids, attention_mask, past):
output_shapes = Gpt2Helper.get_output_shapes(
batch_size=input_ids.size(0),
past_sequence_length=past[0].size(3),
sequence_length=input_ids.size(1),
config=config,
)
output_buffers = Gpt2Helper.get_output_buffers(output_shapes, device)
io_binding = IOBindingHelper.prepare_io_binding(
session, input_ids, position_ids, attention_mask, past, output_buffers, output_shapes
)
session.run_with_iobinding(io_binding)
outputs = Gpt2Helper.get_outputs_from_io_binding_buffer(session, output_buffers, output_shapes, return_numpy=False)
return outputs
###Output
_____no_output_____
###Markdown
We can see that the result is exactly the same with and without IO Binding:
###Code
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
outputs = inference_with_io_binding(session, config, input_ids, position_ids, attention_mask, empty_past)
for i in range(len(outputs)):
assert torch.eq(outputs[i], torch.from_numpy(ort_outputs[i])).all()
print("IO Binding result is good")
###Output
IO Binding result is good
###Markdown
Batch Text Generation Here is an example for text generation using ONNX Runtime or PyTorch. For ONNX Runtime, IO Binding is used for better performance.
###Code
def test_generation(tokenizer, input_text, ort_session=None, num_tokens_to_produce=30):
assert len(input_text) == 1 # This function requires batch_size==1
use_onnxruntime = ort_session is not None
print("Text generation using", "OnnxRuntime" if use_onnxruntime else "PyTorch", "...")
eos_token_id = tokenizer.eos_token_id
input_ids, attention_mask, position_ids, past = get_example_inputs(input_text)
batch_size = input_ids.size(0)
has_eos = torch.zeros(batch_size, dtype=torch.bool)
all_token_ids = input_ids.clone()
for step in range(num_tokens_to_produce):
if ort_session is not None:
outputs = inference_with_io_binding(ort_session, config, input_ids, position_ids, attention_mask, past)
else:
outputs = torch_model(
input_ids, attention_mask=attention_mask, position_ids=position_ids, past_key_values=past
)
next_token_logits = outputs[0][:, -1, :]
# Greedy approach is used here. You can easily extend it to use beam search and sampling to pick next tokens.
next_tokens = torch.argmax(next_token_logits, dim=-1)
has_eos = has_eos | (next_tokens == eos_token_id)
tokens_to_add = next_tokens.masked_fill(has_eos, eos_token_id)
all_token_ids = torch.cat([all_token_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
# Update input_ids, attention_mask, position_ids and past
input_ids = tokens_to_add.clone().detach().reshape([batch_size, 1]).to(device)
position_ids = (position_ids[:, -1] + 1).reshape(batch_size, 1)
attention_mask = torch.cat([attention_mask, torch.ones([batch_size, 1]).type_as(attention_mask)], 1).to(device)
past = []
if not use_onnxruntime:
past = list(outputs[1]) # past in torch output is tuple
else:
for i in range(num_layer):
past_i = (
torch.from_numpy(outputs[i + 1])
if isinstance(outputs[i + 1], numpy.ndarray)
else outputs[i + 1].clone().detach()
)
past.append(past_i.to(device))
if torch.all(has_eos):
break
for i, output in enumerate(all_token_ids):
print("------------")
print(tokenizer.decode(output, skip_special_tokens=True))
tokenizer = get_tokenizer(model_name_or_path, cache_dir)
input_text = EXAMPLE_Text[:1]
test_generation(tokenizer, input_text, ort_session=session)
###Output
Text generation using OnnxRuntime ...
------------
best hotel in bay area.
The hotel is located in the historic Bayview neighborhood of San Francisco.
The hotel is open daily from 9 a.m.
###Markdown
Next, we run PyTorch again and see that the result is exactly the same.
###Code
test_generation(tokenizer, input_text)
###Output
Text generation using PyTorch ...
------------
best hotel in bay area.
The hotel is located in the historic Bayview neighborhood of San Francisco.
The hotel is open daily from 9 a.m.
###Markdown
Benchmark The benchmark_gpt2.py tool can be used to measure the performance of GPT-2 with PyTorch and with ONNX Runtime, both without and with IO Binding.
###Code
!{sys.executable} -m onnxruntime.transformers.models.gpt2.benchmark_gpt2 -m gpt2 -o >benchmark_output.txt 2>&1
file = open("benchmark_output.txt", "r")
for line in file.readlines():
if "onnxruntime_latency" in line:
print(line)
###Output
batch_size=1, sequence_length=1, past_sequence_length=8, torch_latency=37.48, onnxruntime_latency=24.77, onnxruntime_io_binding_latency=24.65
batch_size=1, sequence_length=1, past_sequence_length=16, torch_latency=37.30, onnxruntime_latency=24.95, onnxruntime_io_binding_latency=24.62
batch_size=1, sequence_length=1, past_sequence_length=32, torch_latency=37.88, onnxruntime_latency=25.19, onnxruntime_io_binding_latency=22.05
batch_size=1, sequence_length=1, past_sequence_length=64, torch_latency=42.60, onnxruntime_latency=25.64, onnxruntime_io_binding_latency=25.08
batch_size=1, sequence_length=1, past_sequence_length=128, torch_latency=45.89, onnxruntime_latency=27.66, onnxruntime_io_binding_latency=25.71
batch_size=1, sequence_length=1, past_sequence_length=256, torch_latency=52.47, onnxruntime_latency=32.24, onnxruntime_io_binding_latency=25.46
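###Markdown
As an optional sketch (assuming the `key=value` line format shown above), the `onnxruntime_latency` lines in benchmark_output.txt can be loaded into a pandas DataFrame for easier comparison:
###Code
import pandas as pd

rows = []
with open("benchmark_output.txt") as f:
    for line in f:
        if "onnxruntime_latency" not in line:
            continue
        # Each latency line is a comma-separated list of key=value pairs.
        row = {}
        for pair in line.strip().split(", "):
            key, _, value = pair.partition("=")
            row[key] = float(value) if "latency" in key else value
        rows.append(row)

df = pd.DataFrame(rows)
df["speedup_vs_torch"] = df["torch_latency"] / df["onnxruntime_latency"]
print(df)
###Output
_____no_output_____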
###Markdown
Test Environment The following shows the hardware and software configuration of the test machine:
###Code
!{sys.executable} -m onnxruntime.transformers.machine_info --silent
###Output
{
"gpu": {
"driver_version": "471.11",
"devices": [
{
"memory_total": 8589934592,
"memory_available": 7099449344,
"name": "NVIDIA GeForce GTX 1070"
}
]
},
"cpu": {
"brand": "Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz",
"cores": 6,
"logical_cores": 12,
"hz": "3192000000,0",
"l2_cache": 1572864,
"flags": "3dnow,3dnowprefetch,abm,acpi,adx,aes,apic,avx,avx2,bmi1,bmi2,clflush,clflushopt,cmov,cx16,cx8,de,dtes64,dts,erms,est,f16c,fma,fpu,fxsr,hle,ht,hypervisor,ia64,invpcid,lahf_lm,mca,mce,mmx,movbe,mpx,msr,mtrr,osxsave,pae,pat,pbe,pcid,pclmulqdq,pdcm,pge,pni,popcnt,pse,pse36,rdrnd,rdseed,rtm,sep,serial,sgx,sgx_lc,smap,smep,ss,sse,sse2,sse4_1,sse4_2,ssse3,tm,tm2,tsc,tscdeadline,vme,x2apic,xsave,xtpr",
"processor": "Intel64 Family 6 Model 158 Stepping 10, GenuineIntel"
},
"memory": {
"total": 16977195008,
"available": 7204651008
},
"os": "Windows-10-10.0.22000-SP0",
"python": "3.8.13.final.0 (64 bit)",
"packages": {
"sympy": "1.5.1",
"transformers": "4.18.0",
"protobuf": "3.20.1",
"flatbuffers": "2.0",
"numpy": "1.22.3",
"ort-nightly": "1.12.0.dev20220428004",
"onnx": "1.11.0",
"torch": "1.10.1+cpu",
"onnxconverter-common": "1.9.0"
},
"onnxruntime": {
"version": "1.12.0",
"support_gpu": false
},
"pytorch": {
"version": "1.10.1+cpu",
"support_gpu": false,
"cuda": null
},
"tensorflow": null
}
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Inference PyTorch GPT2 Model with ONNX Runtime on CPU In this tutorial, you'll learn how to load a GPT2 model from PyTorch, convert it to ONNX, and run inference on it with ONNX Runtime. **Note: this work is still in progress. You need to install the ort_nightly package until onnxruntime 1.3.0 is ready. The performance numbers of ort_nightly do not reflect the final results for onnxruntime 1.3.0.** Prerequisites If you have Jupyter Notebook, you may directly run this notebook. We will use pip to install or upgrade [PyTorch](https://pytorch.org/), [OnnxRuntime](https://microsoft.github.io/onnxruntime/) and other required packages. Otherwise, you can set up a new environment. First, install [AnaConda](https://www.anaconda.com/distribution/). Then open an AnaConda prompt window and run the following commands:
```console
conda create -n cpu_env python=3.6
conda activate cpu_env
conda install pytorch torchvision cpuonly -c pytorch
pip install onnxruntime
pip install transformers==2.5.1
pip install onnx psutil pytz pandas py-cpuinfo py3nvml netron
conda install jupyter
jupyter notebook
```
The last command will launch Jupyter Notebook and we can open this notebook in the browser to continue.
###Code
# Enable pass state in input.
enable_past_input = False
import os
cache_dir = "./gpt2"
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
output_dir = './gpt2_onnx'
if not os.path.exists(output_dir):
os.makedirs(output_dir)
###Output
_____no_output_____
###Markdown
Benchmark You will need to git clone the onnxruntime repository, for example:
```console
git clone https://github.com/microsoft/onnxruntime.git
```
Then update bert_tools_dir according to the path on your machine.
###Code
# Assume you have git clone the repository of onnxruntime from github.
bert_tools_dir = r'D:\Git\onnxruntime\onnxruntime\python\tools\bert'
benchmark_script = os.path.join(bert_tools_dir, 'benchmark_gpt2.py')
if enable_past_input:
%run $benchmark_script --model_type gpt2 --cache_dir $cache_dir --output_dir $output_dir --enable_optimization --enable_past_input
else:
%run $benchmark_script --model_type gpt2 --cache_dir $cache_dir --output_dir $output_dir --enable_optimization
###Output
_____no_output_____
###Markdown
If you only need the benchmark results, you can skip the remaining parts. In the following, we will walk through the benchmark script. Load pretrained model
###Code
from transformers import GPT2Model, GPT2Tokenizer
model_class, tokenizer_class, model_name_or_path = (GPT2Model, GPT2Tokenizer, 'gpt2')
tokenizer = tokenizer_class.from_pretrained(model_name_or_path, cache_dir=cache_dir)
model = model_class.from_pretrained(model_name_or_path, cache_dir=cache_dir)
model.eval().cpu()
import numpy
import time
def pytorch_inference(model, input_ids, past=None, total_runs = 100):
latency = []
with torch.no_grad():
for _ in range(total_runs):
start = time.time()
outputs = model(input_ids=input_ids, past=past)
latency.append(time.time() - start)
if total_runs > 1:
print("PyTorch Inference time = {} ms".format(format(sum(latency) * 1000 / len(latency), '.2f')))
return outputs
def onnxruntime_inference(ort_session, input_ids, past=None, total_runs=100):
# Use contiguous array as input might improve performance.
# You can check the results from performance test tool to see whether you need it.
ort_inputs = {
'input_ids': numpy.ascontiguousarray(input_ids.cpu().numpy())
}
if past is not None:
for i, past_i in enumerate(past):
ort_inputs[f'past_{i}'] = numpy.ascontiguousarray(past[i].cpu().numpy())
latency = []
for _ in range(total_runs):
start = time.time()
ort_outputs = ort_session.run(None, ort_inputs)
latency.append(time.time() - start)
if total_runs > 1:
print("OnnxRuntime Inference time = {} ms".format(format(sum(latency) * 1000 / len(latency), '.2f')))
return ort_outputs
def inference(model, ort_session, input_ids, past=None, total_runs=100, verify_outputs=True):
outputs = pytorch_inference(model, input_ids, past, total_runs)
ort_outputs = onnxruntime_inference(ort_session, input_ids, past, total_runs)
if verify_outputs:
        print('PyTorch and OnnxRuntime output 0 (last_state) are close:', numpy.allclose(ort_outputs[0], outputs[0].cpu(), rtol=1e-05, atol=1e-04))
if enable_past_input:
for layer in range(model.config.n_layer):
print('PyTorch and OnnxRuntime layer {} state (present_{}) are close:'.format(layer, layer), numpy.allclose(ort_outputs[1 + layer], outputs[1][layer].cpu(), rtol=1e-05, atol=1e-04))
import torch
import os
inputs = tokenizer.encode_plus("Here is an example input for GPT2 model", add_special_tokens=True, return_tensors='pt')
input_ids = inputs['input_ids']
# run without past so that we can know the shape of past from output.
outputs = model(input_ids=input_ids, past=None)
num_layer = model.config.n_layer
present_names = [f'present_{i}' for i in range(num_layer)]
output_names = ["last_state"] + present_names
input_names = ['input_ids']
dynamic_axes= {'input_ids': {0: 'batch_size', 1: 'seq_len'},
#'token_type_ids' : {0: 'batch_size', 1: 'seq_len'},
#'attention_mask' : {0: 'batch_size', 1: 'seq_len'},
'last_state' : {0: 'batch_size', 1: 'seq_len'}
}
for name in present_names:
dynamic_axes[name] = {1: 'batch_size', 3: 'seq_len'}
if enable_past_input:
past_names = [f'past_{i}' for i in range(num_layer)]
input_names = ['input_ids'] + past_names #+ ['token_type_ids', 'attention_mask']
dummy_past = [torch.zeros(list(outputs[1][0].shape)) for _ in range(num_layer)]
for name in past_names:
dynamic_axes[name] = {1: 'batch_size', 3: 'seq_len'}
export_inputs = (inputs['input_ids'], tuple(dummy_past)) #, inputs['token_type_ids'], inputs['attention_mask'])
else:
export_inputs = (inputs['input_ids'])
export_model_path = os.path.join(output_dir, 'gpt2_past{}.onnx'.format(int(enable_past_input)))
torch.onnx.export(model,
args=export_inputs,
f=export_model_path,
input_names=input_names,
output_names=output_names,
dynamic_axes=dynamic_axes,
opset_version=11,
do_constant_folding = True,
verbose=False)
def remove_past_outputs(export_model_path, output_model_path):
    import logging
    from onnx import ModelProto
    from OnnxModel import OnnxModel
    logger = logging.getLogger(__name__)
model = ModelProto()
with open(export_model_path, "rb") as f:
model.ParseFromString(f.read())
bert_model = OnnxModel(model)
# remove past state outputs and only keep the first output.
keep_output_names = [bert_model.model.graph.output[0].name]
logger.info(f"Prune graph to keep the first output and drop past state outputs:{keep_output_names}")
bert_model.prune_graph(keep_output_names)
bert_model.save_model_to_file(output_model_path)
if enable_past_input:
onnx_model_path = export_model_path
else:
onnx_model_path = os.path.join(output_dir, 'gpt2_past{}_out1.onnx'.format(int(enable_past_input)))
remove_past_outputs(export_model_path, onnx_model_path)
###Output
_____no_output_____
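###Markdown
Optionally, as a quick sanity check (a sketch, not part of the original script), the exported graph can be validated and its input/output names listed with the onnx package before creating an inference session:
###Code
import onnx

exported = onnx.load(onnx_model_path)
onnx.checker.check_model(exported)  # raises an exception if the model is malformed
print("inputs :", [graph_input.name for graph_input in exported.graph.input])
print("outputs:", [graph_output.name for graph_output in exported.graph.output])
###Output
_____no_output_____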
###Markdown
Inference with ONNX Runtime OpenMP Environment Variable OpenMP environment variables are very important for CPU inference of the GPT2 model. They have a large performance impact, so you might need to set them carefully according to the benchmark results. Environment variables must be set before importing onnxruntime; otherwise, they might not take effect.
###Code
import psutil
# You may change the settings in this cell according to Performance Test Tool result.
use_openmp = True
# ATTENTION: these environment variables must be set before importing onnxruntime.
if use_openmp:
os.environ["OMP_NUM_THREADS"] = str(psutil.cpu_count(logical=True))
else:
os.environ["OMP_NUM_THREADS"] = '1'
os.environ["OMP_WAIT_POLICY"] = 'ACTIVE'
import onnxruntime
import numpy
# Print warning if user uses onnxruntime-gpu instead of onnxruntime package.
if 'CUDAExecutionProvider' in onnxruntime.get_available_providers():
print("warning: onnxruntime-gpu is not built with OpenMP. You might try onnxruntime package to test CPU inference.")
sess_options = onnxruntime.SessionOptions()
# Optional: store the optimized graph and view it using Netron to verify that model is fully optimized.
# Note that this will increase session creation time, so it is for debugging only.
#sess_options.optimized_model_filepath = os.path.join(output_dir, "optimized_model_cpu.onnx")
if use_openmp:
sess_options.intra_op_num_threads=1
else:
sess_options.intra_op_num_threads=psutil.cpu_count(logical=True)
# Specify providers when you use onnxruntime-gpu for CPU inference.
session = onnxruntime.InferenceSession(onnx_model_path, sess_options, providers=['CPUExecutionProvider'])
# Compare PyTorch and OnnxRuntime inference performance and results
%time inference(model, session, input_ids, past=dummy_past if enable_past_input else None)
import gc
del session
gc.collect()
optimized_model = os.path.join(output_dir, 'gpt2_past{}_optimized.onnx'.format(int(enable_past_input)))
bert_opt_script = os.path.join(bert_tools_dir, 'optimizer.py')
# Local directory corresponding to https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers/
%run $bert_opt_script --model_type gpt2 --input $onnx_model_path --output $optimized_model --opt_level 0
session = onnxruntime.InferenceSession(optimized_model, sess_options, providers=['CPUExecutionProvider'])
%time inference(model, session, input_ids, past=dummy_past if enable_past_input else None, verify_outputs=False)
###Output
_____no_output_____
###Markdown
Additional Info Note that running in Jupyter Notebook has a slight impact on performance results since Jupyter Notebook itself uses system resources like CPU and memory. It is recommended to close Jupyter Notebook and other applications, then run the benchmark script in a console to get more accurate performance numbers. The [OnnxRuntime C API](https://github.com/microsoft/onnxruntime/blob/master/docs/C_API.md) can get slightly better performance than the Python API. If you use the C API for inference, you can instead measure performance with OnnxRuntime_Perf_Test.exe built from source. Here is the machine configuration that generated the above results; the machine has a GPU, but it is not used for CPU inference. You might get slower or faster results depending on your hardware.
###Code
machine_info_script = os.path.join(bert_tools_dir, 'MachineInfo.py')
%run $machine_info_script --silent
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Inference PyTorch GPT2 Model with ONNX Runtime on CPU In this tutorial, you'll learn how to load a GPT2 model from PyTorch, convert it to ONNX, and run inference on it with ONNX Runtime using IO Binding. Note that past state is used to get better performance. Prerequisites If you have Jupyter Notebook, you may directly run this notebook. We will use pip to install or upgrade [PyTorch](https://pytorch.org/), [OnnxRuntime](https://microsoft.github.io/onnxruntime/) and other required packages. Otherwise, you can set up a new environment. First, install [AnaConda](https://www.anaconda.com/distribution/). Then open an AnaConda prompt window and run the following commands:
```console
conda create -n cpu_env python=3.8
conda activate cpu_env
conda install jupyter
jupyter notebook
```
The last command will launch Jupyter Notebook and we can open this notebook in the browser to continue.
###Code
# Install PyTorch 1.6.0 and OnnxRuntime 1.5.1 for CPU-only.
import sys
if sys.platform == 'darwin': # Mac
!{sys.executable} -m pip install --upgrade torch torchvision
else:
!{sys.executable} -m pip install --upgrade torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
!{sys.executable} -m pip install onnxruntime==1.5.1
# Install other packages used in this notebook.
!{sys.executable} -m pip install transformers==3.0.2
!{sys.executable} -m pip install onnx psutil pytz pandas py-cpuinfo py3nvml netron
import os
# Create a cache directory to store pretrained model.
cache_dir = os.path.join(".", "cache_models")
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
###Output
_____no_output_____
###Markdown
Convert GPT2 model from PyTorch to ONNX We have a script [convert_to_onnx.py](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/convert_to_onnx.py) that can help you convert GPT2 with past state to ONNX. The script accepts a pretrained model name or the path of a checkpoint directory as input, and converts the model to ONNX. It also verifies that the ONNX model produces the same outputs as the PyTorch model. The usage is like
```console
python -m onnxruntime.transformers.convert_to_onnx -m model_name_or_path --output gpt2.onnx -o -p fp32|fp16|int8
```
The -p option chooses the precision: fp32 (float32), fp16 (mixed precision) or int8 (quantization). The -o option generates an optimized model, which is required for fp16 or int8. Here we use a pretrained model as an example:
###Code
from onnxruntime.transformers.gpt2_helper import Gpt2Helper, MyGPT2LMHeadModel
from transformers import AutoConfig
import torch
model_name_or_path = "gpt2"
config = AutoConfig.from_pretrained(model_name_or_path, cache_dir=cache_dir)
model = MyGPT2LMHeadModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
device = torch.device("cpu")
model.eval().to(device)
print(model.config)
num_attention_heads = model.config.n_head
hidden_size = model.config.n_embd
num_layer = model.config.n_layer
onnx_model_path = "gpt2.onnx"
Gpt2Helper.export_onnx(model, device, onnx_model_path) # add parameter use_external_data_format=True when model size > 2 GB
###Output
d:\git\transformers\src\transformers\modeling_gpt2.py:714: FutureWarning: The `past` argument is deprecated and will be removed in a future version, use `past_key_values` instead.
FutureWarning,
d:\git\transformers\src\transformers\modeling_gpt2.py:560: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert batch_size > 0, "batch_size has to be defined and > 0"
d:\git\transformers\src\transformers\modeling_gpt2.py:166: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
w = w / (float(v.size(-1)) ** 0.5)
d:\git\transformers\src\transformers\modeling_gpt2.py:171: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
mask = self.bias[:, :, ns - nd : ns, :ns]
###Markdown
PyTorch Inference using Huggingface Transformers In the following, we will use an example input to get the output from PyTorch for comparison purposes. For the first inference there is no past state, so we prepare an empty past state as input.
###Code
from transformers import AutoTokenizer
EXAMPLE_Text = ['best hotel in bay area', 'here is an example of gpt2 model']
def get_tokenizer(model_name_or_path, cache_dir):
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, cache_dir=cache_dir)
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token
    #tokenizer.add_special_tokens({'pad_token': '[PAD]'})
return tokenizer
def get_example_inputs(prompt_text=EXAMPLE_Text):
tokenizer = get_tokenizer(model_name_or_path, cache_dir)
encodings_dict = tokenizer.batch_encode_plus(prompt_text, padding=True)
input_ids = torch.tensor(encodings_dict['input_ids'], dtype=torch.int64)
attention_mask = torch.tensor(encodings_dict['attention_mask'], dtype=torch.float32)
position_ids = (attention_mask.long().cumsum(-1) - 1)
position_ids.masked_fill_(position_ids < 0, 0)
#Empty Past State for generating first word
empty_past = []
batch_size = input_ids.size(0)
sequence_length = input_ids.size(1)
past_shape = [2, batch_size, num_attention_heads, 0, hidden_size // num_attention_heads]
for i in range(num_layer):
empty_past.append(torch.empty(past_shape).type(torch.float32).to(device))
return input_ids, attention_mask, position_ids, empty_past
from transformers import GPT2LMHeadModel
torch_model = GPT2LMHeadModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
device = torch.device("cpu")
torch_model.eval().to(device)
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
print("input_ids", input_ids)
print("attention_mask", attention_mask)
print("position_ids", position_ids)
with torch.no_grad():
torch_output = torch_model(input_ids, past=empty_past, attention_mask=attention_mask, position_ids=position_ids)
###Output
_____no_output_____
###Markdown
ONNX Runtime Inference We can use ONNX Runtime to run inference. The inputs are a dictionary mapping input names to numpy arrays, and the output is a list of numpy arrays. Note that both inputs and outputs are on CPU; when you run inference on GPU, data is copied between CPU and GPU for the inputs and outputs. Let's create an ONNX Runtime inference session for the exported ONNX model and look at the output.
###Code
import onnxruntime
import numpy
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
onnx_model_path = "gpt2.onnx"
session = onnxruntime.InferenceSession(onnx_model_path)
ort_inputs = {'input_ids': numpy.ascontiguousarray(input_ids.cpu().numpy()),
'attention_mask' : numpy.ascontiguousarray(attention_mask.cpu().numpy()),
'position_ids': numpy.ascontiguousarray(position_ids.cpu().numpy())
}
for i, past_i in enumerate(empty_past):
ort_inputs[f'past_{i}'] = numpy.ascontiguousarray(past_i.cpu().numpy())
ort_outputs = session.run(None, ort_inputs)
###Output
_____no_output_____
###Markdown
We can compare the outputs from PyTorch and ONNX Runtime. Logits are very close (max difference is 1E-4).
###Code
logits_masked_diff = (torch_output[0] - ort_outputs[0]) * attention_mask.unsqueeze(2)
max_logits_diff = logits_masked_diff.abs().max()
print("max logits diff (ignored padding)", max_logits_diff)
###Output
max logits diff (ignored padding) tensor(6.8665e-05)
###Markdown
ONNX Runtime Inference with IO Binding To avoid data copies for inputs and outputs, ONNX Runtime also supports IO Binding. Users can provide their own buffers for inputs and outputs. For GPU inference, those buffers can live on the GPU to reduce memory copies between CPU and GPU, which helps high-performance GPU inference. For GPT-2, IO Binding might improve performance when the batch size or (past) sequence length is large.
###Code
def inference_with_io_binding(session, config, input_ids, position_ids, attention_mask, past):
output_shapes = Gpt2Helper.get_output_shapes(batch_size=input_ids.size(0),
past_sequence_length=past[0].size(3),
sequence_length=input_ids.size(1),
config=config)
output_buffers = Gpt2Helper.get_output_buffers(output_shapes, device)
io_binding = Gpt2Helper.prepare_io_binding(session, input_ids, position_ids, attention_mask, past,
output_buffers, output_shapes)
session.run_with_iobinding(io_binding)
outputs = Gpt2Helper.get_outputs_from_io_binding_buffer(session, output_buffers, output_shapes,
return_numpy=False)
return outputs
###Output
_____no_output_____
###Markdown
We can see that the result is exactly the same with and without IO Binding:
###Code
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
outputs = inference_with_io_binding(session, config, input_ids, position_ids, attention_mask, empty_past)
for i in range(len(outputs)):
assert torch.eq(outputs[i], torch.from_numpy(ort_outputs[i])).all()
print("IO Binding result is good")
###Output
IO Binding result is good
###Markdown
Batch Text Generation Here is an example for text generation using ONNX Runtime or PyTorch. For ONNX Runtime, IO Binding is used for better performance.
###Code
def test_generation(tokenizer, input_text, ort_session=None, num_tokens_to_produce = 30):
use_onnxruntime = (ort_session is not None)
print("Text generation using", "OnnxRuntime" if use_onnxruntime else "PyTorch", "...")
eos_token_id = tokenizer.eos_token_id
input_ids, attention_mask, position_ids, past = get_example_inputs(input_text)
batch_size = input_ids.size(0)
has_eos = torch.zeros(batch_size, dtype=torch.bool)
all_token_ids = input_ids.clone()
for step in range(num_tokens_to_produce):
if ort_session is not None:
outputs = inference_with_io_binding(ort_session, config, input_ids, position_ids, attention_mask, past)
else:
outputs = torch_model(input_ids, attention_mask=attention_mask, position_ids=position_ids, past=past)
next_token_logits = outputs[0][:, -1, :]
# Greedy approach is used here. You can easily extend it to use beam search and sampling to pick next tokens.
next_tokens = torch.argmax(next_token_logits, dim=-1)
has_eos = has_eos | (next_tokens == eos_token_id)
tokens_to_add = next_tokens.masked_fill(has_eos, eos_token_id)
all_token_ids = torch.cat([all_token_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
# Update input_ids, attention_mask, position_ids and past
input_ids = tokens_to_add.clone().detach().reshape([batch_size, 1]).to(device)
position_ids = (position_ids[:,-1] + 1).reshape(batch_size,1)
attention_mask = torch.cat([attention_mask, torch.ones([batch_size, 1]).type_as(attention_mask)], 1).to(device)
past = []
if not use_onnxruntime:
past = list(outputs[1]) # past in torch output is tuple
else:
for i in range(num_layer):
past_i = torch.from_numpy(outputs[i + 1]) if isinstance(outputs[i + 1], numpy.ndarray) else outputs[i + 1].clone().detach()
past.append(past_i.to(device))
if torch.all(has_eos):
break
for i, output in enumerate(all_token_ids):
print("------------")
print(tokenizer.decode(output, skip_special_tokens=True))
tokenizer = get_tokenizer(model_name_or_path, cache_dir)
input_text = EXAMPLE_Text
test_generation(tokenizer, input_text, ort_session=session)
###Output
Text generation using OnnxRuntime ...
------------
best hotel in bay area.
The hotel is located in the historic Bayview neighborhood of San Francisco.
The hotel is open daily from 9 a.m.
------------
here is an example of gpt2 model.
The gpt2 model is a simple, but powerful, way to generate a GPT2-like data structure. It is a
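###Markdown
The greedy `argmax` above can be swapped for sampling. Below is a minimal sketch of top-k sampling; the `temperature` and `top_k` values are arbitrary illustrations, and random logits stand in for `next_token_logits` so the cell is self-contained. Inside `test_generation`, the same lines would replace the `torch.argmax` call.
###Code
# Top-k sampling sketch: scale logits by temperature, keep the k largest, sample from them.
temperature, top_k = 0.8, 50
demo_logits = torch.randn(2, config.vocab_size)             # stands in for next_token_logits
top_values, top_indices = torch.topk(demo_logits / temperature, k=top_k, dim=-1)
probs = torch.softmax(top_values, dim=-1)
sampled = torch.multinomial(probs, num_samples=1)
next_tokens = top_indices.gather(-1, sampled).squeeze(-1)   # shape: (batch_size,)
print(next_tokens)
###Output
_____no_output_____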
###Markdown
Next, we run the same generation again with PyTorch and can see that the result is exactly the same.
###Code
test_generation(tokenizer, input_text)
###Output
Text generation using PyTorch ...
------------
best hotel in bay area.
The hotel is located in the historic Bayview neighborhood of San Francisco.
The hotel is open daily from 9 a.m.
------------
here is an example of gpt2 model.
The gpt2 model is a simple, but powerful, way to generate a GPT2-like data structure. It is a
###Markdown
Int8 Quantization Next, we will apply dynamic quantization to the model. We optimize the model before quantization to get better performance. Note that text generation results from the fp32 and int8 models can be quite different, so you should evaluate an accuracy metric that is relevant to your application for both models. If the quality of the int8 model is acceptable, you will find that it is faster than the fp32 model at inference. You can also leverage [quantization aware training (QAT)](https://pytorch.org/blog/introduction-to-quantization-on-pytorch/) to improve accuracy if needed.
###Code
from onnxruntime.transformers.quantize_helper import QuantizeHelper
optimized_fp32_model_path = "gpt2_fp32.onnx"
quantized_int8_model_path = "gpt2_int8.onnx"
Gpt2Helper.optimize_onnx("gpt2.onnx", optimized_fp32_model_path, False, model.config.num_attention_heads, model.config.hidden_size)
QuantizeHelper.quantize_onnx_model(optimized_fp32_model_path, quantized_int8_model_path)
session_int8 = onnxruntime.InferenceSession(quantized_int8_model_path)
input_text = ['bert model optimization']
test_generation(tokenizer, input_text, ort_session=session_int8, num_tokens_to_produce=14)
###Output
Text generation using OnnxRuntime ...
------------
bert model optimization, and the NLP model is a generalizable and robust model.
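###Markdown
One easy way to see part of the benefit of int8 quantization is to compare the model file sizes on disk; the exact numbers depend on your environment and export.
###Code
# Compare on-disk sizes of the exported fp32, optimized fp32, and quantized int8 models.
import os
for path in [onnx_model_path, optimized_fp32_model_path, quantized_int8_model_path]:
    print("{:<20} {:.1f} MB".format(path, os.path.getsize(path) / (1024 * 1024)))
###Output
_____no_output_____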
###Markdown
Benchmark There is a tool, benchmark_gpt2.py, which can be used to measure the performance of GPT-2 with PyTorch and with ONNX Runtime, both with and without IO Binding.
###Code
!{sys.executable} -m onnxruntime.transformers.benchmark_gpt2 -m gpt2 -o
!{sys.executable} -m onnxruntime.transformers.benchmark_gpt2 -m gpt2 -o --precision int8
###Output
ATen/Parallel:
at::get_num_threads() : 12
at::get_num_interop_threads() : 6
OpenMP 2019
omp_get_max_threads() : 12
Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191125 for Intel(R) 64 architecture applications
mkl_get_max_threads() : 12
Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0)
std::thread::hardware_concurrency() : 12
Environment variables:
OMP_NUM_THREADS : [not set]
MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP
Warning: onnxruntime.quantization.quantize is deprecated.
Please use quantize_static for static quantization, quantize_dynamic for dynamic quantization.
###Markdown
We can see that the quantized model has a significant speed-up (close to 2x). Test Environment The following is the hardware and software configuration of the test machine:
###Code
!{sys.executable} -m onnxruntime.transformers.machine_info --silent
###Output
{
"gpu": {
"driver_version": "451.67",
"devices": [
{
"memory_total": 8589934592,
"memory_available": 8480882688,
"name": "GeForce GTX 1070"
}
]
},
"cpu": {
"brand": "Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz",
"cores": 6,
"logical_cores": 12,
"hz": "3.1920 GHz",
"l2_cache": "1536 KB",
"flags": [
"3dnow",
"3dnowprefetch",
"abm",
"acpi",
"adx",
"aes",
"apic",
"avx",
"avx2",
"bmi1",
"bmi2",
"clflush",
"clflushopt",
"cmov",
"cx16",
"cx8",
"de",
"dtes64",
"dts",
"erms",
"est",
"f16c",
"fma",
"fpu",
"fxsr",
"hle",
"ht",
"hypervisor",
"ia64",
"invpcid",
"lahf_lm",
"mca",
"mce",
"mmx",
"movbe",
"mpx",
"msr",
"mtrr",
"osxsave",
"pae",
"pat",
"pbe",
"pcid",
"pclmulqdq",
"pdcm",
"pge",
"pni",
"popcnt",
"pse",
"pse36",
"rdrnd",
"rdseed",
"rtm",
"sep",
"serial",
"sgx",
"sgx_lc",
"smap",
"smep",
"ss",
"sse",
"sse2",
"sse4_1",
"sse4_2",
"ssse3",
"tm",
"tm2",
"tsc",
"vme",
"x2apic",
"xsave",
"xtpr"
],
"processor": "Intel64 Family 6 Model 158 Stepping 10, GenuineIntel"
},
"memory": {
"total": 16971276288,
"available": 6431543296
},
"python": "3.6.10.final.0 (64 bit)",
"os": "Windows-10-10.0.19041-SP0",
"onnxruntime": {
"version": "1.5.1",
"support_gpu": false
},
"onnxruntime_tools": {
"version": "1.4.4"
},
"pytorch": {
"version": "1.6.0+cpu",
"support_gpu": false,
"cuda": null
},
"tensorflow": {
"version": "2.3.0",
"git_version": "v2.3.0-rc2-23-gb36436b087",
"support_gpu": true
}
}
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Inference PyTorch GPT2 Model with ONNX Runtime on CPU In this tutorial, you'll be introduced to how to load a GPT2 model from PyTorch, convert it to ONNX, and run inference on it with ONNX Runtime using IO Binding. Note that past state is used to get better performance. Prerequisites If you have Jupyter Notebook, you may directly run this notebook. We will use pip to install or upgrade [PyTorch](https://pytorch.org/), [OnnxRuntime](https://microsoft.github.io/onnxruntime/) and other required packages. Otherwise, you can set up a new environment. First, install [Anaconda](https://www.anaconda.com/distribution/). Then open an Anaconda prompt window and run the following commands:
```console
conda create -n cpu_env python=3.8
conda activate cpu_env
conda install jupyter
jupyter notebook
```
The last command will launch Jupyter Notebook, and we can open this notebook in the browser to continue.
###Code
# Install PyTorch 1.6.0 and OnnxRuntime 1.5.1 for CPU-only.
import sys
if sys.platform == 'darwin': # Mac
!{sys.executable} -m pip install --upgrade torch torchvision
else:
!{sys.executable} -m pip install --upgrade torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
!{sys.executable} -m pip install onnxruntime==1.5.1
# Install other packages used in this notebook.
!{sys.executable} -m pip install transformers==3.0.2
!{sys.executable} -m pip install onnx onnxconverter_common psutil pytz pandas py-cpuinfo py3nvml netron
import os
# Create a cache directory to store pretrained model.
cache_dir = os.path.join(".", "cache_models")
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
###Output
_____no_output_____
###Markdown
Convert GPT2 model from PyTorch to ONNX We have a script [convert_to_onnx.py](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/convert_to_onnx.py) that can help you convert GPT2 with past state to ONNX. The script accepts a pretrained model name or the path of a checkpoint directory as input, and converts the model to ONNX. It also verifies that the ONNX model produces the same outputs as the PyTorch model. The usage is like ```python -m onnxruntime.transformers.convert_to_onnx -m model_name_or_path --output gpt2.onnx -o -p fp32|fp16|int8``` The -p option can be used to choose the precision: fp32 (float32), fp16 (mixed precision) or int8 (quantization). The -o option generates an optimized model, which is required for fp16 or int8. Here we use a pretrained model as an example:
###Code
from packaging import version
from onnxruntime import __version__ as ort_version
if version.parse(ort_version) >= version.parse('1.12.0'):
from onnxruntime.transformers.models.gpt2.gpt2_helper import Gpt2Helper, MyGPT2LMHeadModel
else:
from onnxruntime.transformers.gpt2_helper import Gpt2Helper, MyGPT2LMHeadModel
from transformers import AutoConfig
import torch
model_name_or_path = "gpt2"
config = AutoConfig.from_pretrained(model_name_or_path, cache_dir=cache_dir)
model = MyGPT2LMHeadModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
device = torch.device("cpu")
model.eval().to(device)
print(model.config)
num_attention_heads = model.config.n_head
hidden_size = model.config.n_embd
num_layer = model.config.n_layer
onnx_model_path = "gpt2.onnx"
Gpt2Helper.export_onnx(model, device, onnx_model_path) # add parameter use_external_data_format=True when model size > 2 GB
###Output
d:\git\transformers\src\transformers\modeling_gpt2.py:714: FutureWarning: The `past` argument is deprecated and will be removed in a future version, use `past_key_values` instead.
FutureWarning,
d:\git\transformers\src\transformers\modeling_gpt2.py:560: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert batch_size > 0, "batch_size has to be defined and > 0"
d:\git\transformers\src\transformers\modeling_gpt2.py:166: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
w = w / (float(v.size(-1)) ** 0.5)
d:\git\transformers\src\transformers\modeling_gpt2.py:171: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
mask = self.bias[:, :, ns - nd : ns, :ns]
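###Markdown
We can sanity-check the exported graph with the `onnx` package installed in the prerequisites. This is an optional addition that just lists the declared input and output names of the exported model.
###Code
# Inspect the exported model's declared inputs and outputs.
import onnx
exported_model = onnx.load(onnx_model_path)
print("inputs: ", [i.name for i in exported_model.graph.input])
print("outputs:", [o.name for o in exported_model.graph.output])
###Output
_____no_output_____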
###Markdown
PyTorch Inference using Huggingface Transformers In the following, we will use an example input to get the output from PyTorch for comparison purposes. For the first inference there is no past state, so we prepare an empty past state for the input.
###Code
from transformers import AutoTokenizer
EXAMPLE_Text = ['best hotel in bay area', 'here is an example of gpt2 model']
def get_tokenizer(model_name_or_path, cache_dir):
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, cache_dir=cache_dir)
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token
# tokenizer.add_special_tokens({'pad_token': '[PAD]'})
return tokenizer
def get_example_inputs(prompt_text=EXAMPLE_Text):
tokenizer = get_tokenizer(model_name_or_path, cache_dir)
encodings_dict = tokenizer.batch_encode_plus(prompt_text, padding=True)
input_ids = torch.tensor(encodings_dict['input_ids'], dtype=torch.int64)
attention_mask = torch.tensor(encodings_dict['attention_mask'], dtype=torch.float32)
position_ids = (attention_mask.long().cumsum(-1) - 1)
position_ids.masked_fill_(position_ids < 0, 0)
#Empty Past State for generating first word
empty_past = []
batch_size = input_ids.size(0)
sequence_length = input_ids.size(1)
past_shape = [2, batch_size, num_attention_heads, 0, hidden_size // num_attention_heads]
for i in range(num_layer):
empty_past.append(torch.empty(past_shape).type(torch.float32).to(device))
return input_ids, attention_mask, position_ids, empty_past
from transformers import GPT2LMHeadModel
torch_model = GPT2LMHeadModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
device = torch.device("cpu")
torch_model.eval().to(device)
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
print("input_ids", input_ids)
print("attention_mask", attention_mask)
print("position_ids", position_ids)
with torch.no_grad():
torch_output = torch_model(input_ids, past=empty_past, attention_mask=attention_mask, position_ids=position_ids)
###Output
_____no_output_____
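###Markdown
The `position_ids` trick above can be seen on a tiny left-padded example; the mask values below are made up purely for illustration.
###Code
# For a left-padded row with attention_mask [0, 0, 1, 1, 1]:
mask = torch.tensor([[0, 0, 1, 1, 1]], dtype=torch.float32)
pos = mask.long().cumsum(-1) - 1   # -> [[-1, -1, 0, 1, 2]]
pos.masked_fill_(pos < 0, 0)       # -> [[ 0,  0, 0, 1, 2]]: padding positions become 0
print(pos)
###Output
_____no_output_____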
###Markdown
ONNX Runtime Inference We can use ONNX Runtime to run inference. The inputs are a dictionary that maps each input name to a numpy array, and the output is a list of numpy arrays. Note that both inputs and outputs are on CPU; when you run inference on GPU, data is copied between CPU and GPU for the inputs and outputs. Let's create an ONNX Runtime inference session for the exported ONNX model and look at the output.
###Code
import onnxruntime
import numpy
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
onnx_model_path = "gpt2.onnx"
session = onnxruntime.InferenceSession(onnx_model_path)
ort_inputs = {'input_ids': numpy.ascontiguousarray(input_ids.cpu().numpy()),
'attention_mask' : numpy.ascontiguousarray(attention_mask.cpu().numpy()),
'position_ids': numpy.ascontiguousarray(position_ids.cpu().numpy())
}
for i, past_i in enumerate(empty_past):
ort_inputs[f'past_{i}'] = numpy.ascontiguousarray(past_i.cpu().numpy())
ort_outputs = session.run(None, ort_inputs)
###Output
_____no_output_____
###Markdown
We can compare the outputs from PyTorch and ONNX Runtime. Logits are very close (max difference is 1E-4).
###Code
logits_masked_diff = (torch_output[0] - ort_outputs[0]) * attention_mask.unsqueeze(2)
max_logits_diff = logits_masked_diff.abs().max()
print("max logits diff (ignored padding)", max_logits_diff)
###Output
max logits diff (ignored padding) tensor(6.8665e-05)
###Markdown
ONNX Runtime Inference with IO Binding To avoid data copies for inputs and outputs, ONNX Runtime also supports IO Binding: the user provides pre-allocated buffers for inputs and outputs. For GPU inference, these buffers can live in GPU memory, which reduces copies between CPU and GPU and helps high-performance inference on GPU. For GPT-2, IO Binding can improve performance when the batch size or (past) sequence length is large.
###Code
def inference_with_io_binding(session, config, input_ids, position_ids, attention_mask, past):
output_shapes = Gpt2Helper.get_output_shapes(batch_size=input_ids.size(0),
past_sequence_length=past[0].size(3),
sequence_length=input_ids.size(1),
config=config)
output_buffers = Gpt2Helper.get_output_buffers(output_shapes, device)
io_binding = Gpt2Helper.prepare_io_binding(session, input_ids, position_ids, attention_mask, past,
output_buffers, output_shapes)
session.run_with_iobinding(io_binding)
outputs = Gpt2Helper.get_outputs_from_io_binding_buffer(session, output_buffers, output_shapes,
return_numpy=False)
return outputs
###Output
_____no_output_____
###Markdown
We can see that the results are exactly the same with and without IO Binding:
###Code
input_ids, attention_mask, position_ids, empty_past = get_example_inputs()
outputs = inference_with_io_binding(session, config, input_ids, position_ids, attention_mask, empty_past)
for i in range(len(outputs)):
assert torch.eq(outputs[i], torch.from_numpy(ort_outputs[i])).all()
print("IO Binding result is good")
###Output
IO Binding result is good
###Markdown
Batch Text Generation Here is an example for text generation using ONNX Runtime or PyTorch. For ONNX Runtime, IO Binding is used for better performance.
###Code
def test_generation(tokenizer, input_text, ort_session=None, num_tokens_to_produce = 30):
use_onnxruntime = (ort_session is not None)
print("Text generation using", "OnnxRuntime" if use_onnxruntime else "PyTorch", "...")
eos_token_id = tokenizer.eos_token_id
input_ids, attention_mask, position_ids, past = get_example_inputs(input_text)
batch_size = input_ids.size(0)
has_eos = torch.zeros(batch_size, dtype=torch.bool)
all_token_ids = input_ids.clone()
for step in range(num_tokens_to_produce):
if ort_session is not None:
outputs = inference_with_io_binding(ort_session, config, input_ids, position_ids, attention_mask, past)
else:
outputs = torch_model(input_ids, attention_mask=attention_mask, position_ids=position_ids, past=past)
next_token_logits = outputs[0][:, -1, :]
# Greedy approach is used here. You can easily extend it to use beam search and sampling to pick next tokens.
next_tokens = torch.argmax(next_token_logits, dim=-1)
has_eos = has_eos | (next_tokens == eos_token_id)
tokens_to_add = next_tokens.masked_fill(has_eos, eos_token_id)
all_token_ids = torch.cat([all_token_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
# Update input_ids, attention_mask, position_ids and past
input_ids = tokens_to_add.clone().detach().reshape([batch_size, 1]).to(device)
position_ids = (position_ids[:,-1] + 1).reshape(batch_size,1)
attention_mask = torch.cat([attention_mask, torch.ones([batch_size, 1]).type_as(attention_mask)], 1).to(device)
past = []
if not use_onnxruntime:
past = list(outputs[1]) # past in torch output is tuple
else:
for i in range(num_layer):
past_i = torch.from_numpy(outputs[i + 1]) if isinstance(outputs[i + 1], numpy.ndarray) else outputs[i + 1].clone().detach()
past.append(past_i.to(device))
if torch.all(has_eos):
break
for i, output in enumerate(all_token_ids):
print("------------")
print(tokenizer.decode(output, skip_special_tokens=True))
tokenizer = get_tokenizer(model_name_or_path, cache_dir)
input_text = EXAMPLE_Text
test_generation(tokenizer, input_text, ort_session=session)
###Output
Text generation using OnnxRuntime ...
------------
best hotel in bay area.
The hotel is located in the historic Bayview neighborhood of San Francisco.
The hotel is open daily from 9 a.m.
------------
here is an example of gpt2 model.
The gpt2 model is a simple, but powerful, way to generate a GPT2-like data structure. It is a
###Markdown
Next, we run the same generation again with PyTorch and can see that the result is exactly the same.
###Code
test_generation(tokenizer, input_text)
###Output
Text generation using PyTorch ...
------------
best hotel in bay area.
The hotel is located in the historic Bayview neighborhood of San Francisco.
The hotel is open daily from 9 a.m.
------------
here is an example of gpt2 model.
The gpt2 model is a simple, but powerful, way to generate a GPT2-like data structure. It is a
###Markdown
Int8 Quantization Next, we will apply dynamic quantization to the model. We optimize the model before quantization to get better performance. Note that text generation results from the fp32 and int8 models can be quite different, so you should evaluate an accuracy metric that is relevant to your application for both models. If the quality of the int8 model is acceptable, you will find that it is faster than the fp32 model at inference. You can also leverage [quantization aware training (QAT)](https://pytorch.org/blog/introduction-to-quantization-on-pytorch/) to improve accuracy if needed.
###Code
from onnxruntime.transformers.quantize_helper import QuantizeHelper
optimized_fp32_model_path = "gpt2_fp32.onnx"
quantized_int8_model_path = "gpt2_int8.onnx"
Gpt2Helper.optimize_onnx("gpt2.onnx", optimized_fp32_model_path, False, model.config.num_attention_heads, model.config.hidden_size)
QuantizeHelper.quantize_onnx_model(optimized_fp32_model_path, quantized_int8_model_path)
session_int8 = onnxruntime.InferenceSession(quantized_int8_model_path)
input_text = ['bert model optimization']
test_generation(tokenizer, input_text, ort_session=session_int8, num_tokens_to_produce=14)
###Output
Text generation using OnnxRuntime ...
------------
bert model optimization, and the NLP model is a generalizable and robust model.
###Markdown
Benchmark There is a tool, benchmark_gpt2.py, which can be used to measure the performance of GPT-2 with PyTorch and with ONNX Runtime, both with and without IO Binding.
###Code
!{sys.executable} -m onnxruntime.transformers.benchmark_gpt2 -m gpt2 -o
!{sys.executable} -m onnxruntime.transformers.benchmark_gpt2 -m gpt2 -o --precision int8
###Output
ATen/Parallel:
at::get_num_threads() : 12
at::get_num_interop_threads() : 6
OpenMP 2019
omp_get_max_threads() : 12
Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191125 for Intel(R) 64 architecture applications
mkl_get_max_threads() : 12
Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0)
std::thread::hardware_concurrency() : 12
Environment variables:
OMP_NUM_THREADS : [not set]
MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP
Warning: onnxruntime.quantization.quantize is deprecated.
Please use quantize_static for static quantization, quantize_dynamic for dynamic quantization.
###Markdown
We can see that the quantized model has a significant speed-up (close to 2x). Test Environment The following is the hardware and software configuration of the test machine:
###Code
!{sys.executable} -m onnxruntime.transformers.machine_info --silent
###Output
{
"gpu": {
"driver_version": "451.67",
"devices": [
{
"memory_total": 8589934592,
"memory_available": 8480882688,
"name": "GeForce GTX 1070"
}
]
},
"cpu": {
"brand": "Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz",
"cores": 6,
"logical_cores": 12,
"hz": "3.1920 GHz",
"l2_cache": "1536 KB",
"flags": [
"3dnow",
"3dnowprefetch",
"abm",
"acpi",
"adx",
"aes",
"apic",
"avx",
"avx2",
"bmi1",
"bmi2",
"clflush",
"clflushopt",
"cmov",
"cx16",
"cx8",
"de",
"dtes64",
"dts",
"erms",
"est",
"f16c",
"fma",
"fpu",
"fxsr",
"hle",
"ht",
"hypervisor",
"ia64",
"invpcid",
"lahf_lm",
"mca",
"mce",
"mmx",
"movbe",
"mpx",
"msr",
"mtrr",
"osxsave",
"pae",
"pat",
"pbe",
"pcid",
"pclmulqdq",
"pdcm",
"pge",
"pni",
"popcnt",
"pse",
"pse36",
"rdrnd",
"rdseed",
"rtm",
"sep",
"serial",
"sgx",
"sgx_lc",
"smap",
"smep",
"ss",
"sse",
"sse2",
"sse4_1",
"sse4_2",
"ssse3",
"tm",
"tm2",
"tsc",
"vme",
"x2apic",
"xsave",
"xtpr"
],
"processor": "Intel64 Family 6 Model 158 Stepping 10, GenuineIntel"
},
"memory": {
"total": 16971276288,
"available": 6431543296
},
"python": "3.6.10.final.0 (64 bit)",
"os": "Windows-10-10.0.19041-SP0",
"onnxruntime": {
"version": "1.5.1",
"support_gpu": false
},
"onnxruntime_tools": {
"version": "1.4.4"
},
"pytorch": {
"version": "1.6.0+cpu",
"support_gpu": false,
"cuda": null
},
"tensorflow": {
"version": "2.3.0",
"git_version": "v2.3.0-rc2-23-gb36436b087",
"support_gpu": true
}
}
|
SARIMAX/hourly-weather-wind_speed.ipynb | ###Markdown
Seasonal Autoregressive Integrated Moving Average with Explanatory Variable (SARIMAX) The ARIMA model is a generalisation of an ARMA model that can be applied to non-stationary time series. The SARIMAX model is a modified and extended version of ARIMA that accounts for seasonality in the time series and includes independent (exogenous) predictor variables.
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from time import time
import statsmodels.api as sm
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.stattools import adfuller
matplotlib.rcParams['figure.figsize'] = (16, 9)
pd.options.display.max_columns = 999
###Output
_____no_output_____
###Markdown
Load Dataset
###Code
df = pd.read_csv('../datasets/hourly-weather-wind_speed.csv', parse_dates=[0], index_col='DateTime')
print(df.shape)
df.head()
###Output
(5000, 36)
###Markdown
Define ParametersMake predictions for 24-hour period using a training period of four weeks.
###Code
dataset_name = 'Hourly Weather Wind Speed'
dataset_abbr = 'HWS'
model_name = 'SARIMAX'
context_length = 24*7*4 # Four weeks
prediction_length = 24
###Output
_____no_output_____
###Markdown
Define Error Metric The seasonal variant of the mean absolute scaled error (MASE), referred to as sMASE below, will be used to evaluate the forecasts.
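Concretely, using the notation of the function below (a training series $y_1, \ldots, y_N$, forecasts $\hat{y}_t$ over a test horizon of $h$ points, and seasonality $m$, here $m = 24$):
$$\text{sMASE} = \frac{\frac{1}{h}\sum_{t=N+1}^{N+h}\left|y_t - \hat{y}_t\right|}{\frac{1}{N-m}\sum_{t=m+1}^{N}\left|y_t - y_{t-m}\right|}$$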
###Code
def calc_sMASE(training_series, testing_series, prediction_series, seasonality=prediction_length):
a = training_series.iloc[seasonality:].values
b = training_series.iloc[:-seasonality].values
d = np.sum(np.abs(a-b)) / len(a)
errors = np.abs(testing_series - prediction_series)
return np.mean(errors) / d
###Output
_____no_output_____
###Markdown
Example SARIMAX Model Exploration of how SARIMA models work using a single example time series.
###Code
ts_ex = 'ts10'
df_ex = df.loc[:, ts_ex]
# Plot data from first five days
df_ex.iloc[:24*5].plot();
###Output
_____no_output_____
###Markdown
Time Series Decomposition Decompose the example time series into trend, seasonal, and residual components.
###Code
fig = seasonal_decompose(df_ex.iloc[-500:], model='additive').plot()
###Output
_____no_output_____
###Markdown
There doesn't appear to be a consistent trend. We can run an augmented Dickey-Fuller test to confirm stationarity.
###Code
dftest = adfuller(df_ex.iloc[-500:], autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
dfoutput
###Output
_____no_output_____
###Markdown
The very low p-value confirms that the data is stationary. We can see that there is daily seasonality, which we will capture in our SARIMAX model. Plot ACF and PACF The Autocorrelation Function (ACF) is the correlation of a signal with a delayed copy of itself, as a function of the delay. The Partial Autocorrelation Function (PACF) is the partial correlation of a signal with a delayed copy of itself, controlling for the values of the time series at all shorter delays, as a function of the delay.
###Code
fig, ax = plt.subplots(2)
ax[0] = sm.graphics.tsa.plot_acf(df_ex, lags=50, ax=ax[0])
ax[1] = sm.graphics.tsa.plot_pacf(df_ex, lags=50, ax=ax[1])
###Output
_____no_output_____
###Markdown
There is clearly daily seasonality. A seasonality of 24 hours will be used for the SARIMAX model. Differencing by 24 hours helps remove the seasonality:
###Code
fig, ax = plt.subplots(2)
ax[0] = sm.graphics.tsa.plot_acf(df_ex.diff(24).dropna(), lags=50, ax=ax[0])
ax[1] = sm.graphics.tsa.plot_pacf(df_ex.diff(24).dropna(), lags=50, ax=ax[1])
fig = seasonal_decompose(df_ex.diff(24).dropna(), model='additive').plot()
###Output
_____no_output_____
###Markdown
Prepare Data
###Code
df_ex = pd.DataFrame(df_ex)
days = df_ex.index.dayofweek
dummy_days = pd.get_dummies(days)
dummy_days.columns = ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun']
dummy_days.index = df_ex.index
df_ex = pd.concat([df_ex, dummy_days], axis=1)
df_ex.head()
###Output
_____no_output_____
###Markdown
Build Model As SARIMA models can be slow to train, a SARIMAX(1,1,1)(1,1,1)24 model will be used, as this should provide reasonable performance across the time series. Optimised forecasts could be obtained by using a grid search methodology to derive the best performing parameters, as demonstrated in the ARIMA and ARIMAX notebooks, but this would come at the expense of much greater training times (a small sketch of this idea is included after the model fit below).
###Code
def runSARIMAX(time_series, test_length=prediction_length, train_length=context_length):
ts = time_series.iloc[-(test_length+train_length):]
ts_train = ts.iloc[:-test_length]
ts_test = ts.iloc[-test_length:]
sarimax = sm.tsa.SARIMAX(endog=ts_train.iloc[:, 0],
exog=ts_train.iloc[:, 1:],
order=(1,1,1),
seasonal_order=(1,1,1,24),
enforce_stationarity=False,
enforce_invertibility=False).fit()
summary = sarimax.summary()
fcst = sarimax.predict(start=ts.index[2], end=ts.index[-1],
exog=ts_test.iloc[:, 1:])
fcst = np.concatenate([np.array([0, 0]), fcst])
fcst = pd.DataFrame(data=fcst, index=ts.index, columns=['pred%s' % ts.columns[0][2:]])
return fcst, summary
import warnings
warnings.filterwarnings('ignore')
%%time
fcst, summary = runSARIMAX(df_ex)
df_ex = pd.concat([df_ex, fcst], axis=1)
print(summary)
# Example forecast
fcst0 = df_ex.copy()
fcst0['pred%s' % ts_ex[2:]][fcst0['pred%s' % ts_ex[2:]] < 0] = 0
fcst0.iloc[-4*prediction_length:, 0].plot(label='Actual', c='k', alpha=0.5)
fcst0.iloc[-4*prediction_length:, -1].plot(label='SARIMAX(1,1,1)(1,1,1)24', c='b', alpha=0.5)
plt.axvline(x=fcst0.index[-prediction_length], linestyle=':', linewidth=2, color='r', label='Start of test data')
plt.legend()
plt.title(ts_ex);
###Output
_____no_output_____
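###Markdown
For reference, here is a minimal sketch of the grid-search idea mentioned above, ranking a few candidate non-seasonal orders by AIC on the example series. The candidate orders are arbitrary, the seasonal order is held fixed, and this cell was not run here; it is illustrative only.
###Code
# Illustrative grid search over a few (p, d, q) orders, ranked by AIC; the seasonal
# order is held fixed at (1, 1, 1, 24) to keep the run time manageable.
from itertools import product
endog = df_ex[ts_ex].iloc[-(context_length + prediction_length):-prediction_length]
exog = df_ex[['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun']].iloc[-(context_length + prediction_length):-prediction_length]
scores = []
for order in product([0, 1], [1], [0, 1]):  # candidate (p, d, q) orders
    try:
        fit = sm.tsa.SARIMAX(endog=endog, exog=exog, order=order,
                             seasonal_order=(1, 1, 1, 24),
                             enforce_stationarity=False,
                             enforce_invertibility=False).fit(disp=False)
        scores.append((fit.aic, order))
    except Exception as e:
        print(order, 'failed:', e)
print(sorted(scores)[:3])
###Output
_____no_output_____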
###Markdown
Evaluating SARIMAX To evaluate SARIMAX, we will generate forecasts for each time series using the SARIMAX(1,1,1)(1,1,1)24 approach shown above. sMASE will be calculated for each individual time series, and the mean of these scores will be used as the overall accuracy metric for SARIMAX on this dataset.
###Code
results = df.iloc[-(prediction_length+context_length):].copy()
tic = time()
for i, col in enumerate(df.columns):
if i % 10 == 0:
toc = time()
print("Running predictions for {}. Cumulative time: {:.1f} minutes.".format(col, (toc-tic)/60))
# Prepare DataFrame for selected column
dft = df.loc[:, col]
dft = pd.DataFrame(dft)
days = dft.index.dayofweek
dummy_days = pd.get_dummies(days)
dummy_days.columns = ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun']
dummy_days.index = dft.index
dft = pd.concat([dft, dummy_days], axis=1)
# Find best model
fcst, summary = runSARIMAX(dft)
# Add predictions to results DataFrame
results['pred%s' % col[2:]] = fcst.values
toc = time()
print("Finished! Total run time: {:.1f} minutes.".format((toc-tic)/60))
###Output
Running predictions for ts1. Cumulative time: 0.0 minutes.
Running predictions for ts11. Cumulative time: 17.7 minutes.
Running predictions for ts21. Cumulative time: 28.7 minutes.
Running predictions for ts31. Cumulative time: 33.9 minutes.
Finished! Total run time: 37.0 minutes.
###Markdown
Predictions must be non-negative and the actual values are always integers, so negative predictions are clipped to zero and the results are rounded.
###Code
results0 = results.copy()
results0[results0 < 0] = 0
results0 = results0.apply(round)
results0.tail()
sMASEs = []
for i, col in enumerate(df.columns):
sMASEs.append(calc_sMASE(results0[col].iloc[-(context_length + prediction_length):-prediction_length],
results0[col].iloc[-prediction_length:],
results0['pred%s' % str(i+1)].iloc[-prediction_length:]))
fig, ax = plt.subplots()
ax.hist(sMASEs, bins=20)
ax.set_title('Distributions of sMASEs for {} dataset'.format(dataset_name))
ax.set_xlabel('sMASE')
ax.set_ylabel('Count');
sMASE = np.mean(sMASEs)
print("Overall sMASE: {:.4f}".format(sMASE))
###Output
Overall sMASE: 0.8652
###Markdown
Show some example forecasts.
###Code
fig, ax = plt.subplots(5, 2, sharex=True)
ax = ax.ravel()
for col in range(1, 11):
ax[col-1].plot(results0.index[-prediction_length:], results0['ts%s' % col].iloc[-prediction_length:],
label='Actual', c='k', linestyle='--', linewidth=1)
ax[col-1].plot(results0.index[-prediction_length:], results0['pred%s' % col].iloc[-prediction_length:],
label='SARIMAX(1,1,1)(1,1,1)24', c='b')
ax[9].legend()
fig.suptitle('{} Predictions'.format(dataset_name));
###Output
_____no_output_____
###Markdown
Store the predictions and accuracy score for the SARIMAX models.
###Code
import pickle
with open('{}-sMASE.pkl'.format(dataset_abbr), 'wb') as f:
pickle.dump(sMASE, f)
with open('../_results/{}/{}-results.pkl'.format(model_name, dataset_abbr), 'wb') as f:
pickle.dump(results.iloc[-prediction_length:], f)
###Output
_____no_output_____ |
notebooks/MedCab4Model.ipynb | ###Markdown
MedCab4 Notebook co-authored by Brad Brauser and Peggy Krom
###Code
# Necessary imports
import re
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd
import numpy as np
import spacy
# Dataset from Peggy's previous MedCab4 build
url1 = 'https://raw.githubusercontent.com/PeggyK1/med_cab3/main/data/strains.csv'
df1 = pd.read_csv(url1)
# Dropping unneeded column
df1 = df1.drop(columns = ['Unnamed: 0'])
# Prepping it for tokenization
df1['name'] = df1['name'].replace('-', ' ', regex=True).str.lower()
# Dropping unneeded columns
df1 = df1.drop(['id'], axis = 1)
df1.drop(df1.iloc[:, 5:43], inplace = True, axis = 1)
df1.head()
# Model I found from a previous study guide
url2 = 'https://raw.githubusercontent.com/bundickm/Study-Guides/master/data/cannabis.csv'
df2 = pd.read_csv(url2)
# Prepping for tokenization
df2['Strain'] = df2['Strain'].replace('-', ' ', regex=True)
df2 = df2.rename(columns = {'Strain':'name'})
df2['name'] = df2['name'].str.lower()
df2.head()
# Merging the two datasets
strains = pd.merge(df1, df2, on='name')
# Dropping NaN values
strains = strains.dropna()
# Dropping duplicate and unneeded columns
strains = strains.drop(columns = ['Type', 'Effects', 'Flavor'])
strains.head(10)
strains['all'] = strains['type'].str.cat(strains['effects'], sep = ", ")
strains['all'] = strains['all'].str.cat(strains['ailment'], sep = ", ")
strains['all'] = strains['all'].str.cat(strains['flavor'], sep = ", ")
strains.head()
from sklearn.model_selection import StratifiedKFold, RandomizedSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
tfidf = TfidfVectorizer(stop_words='english',
ngram_range=(1, 3),
max_features=5000,
)
nn = KNeighborsClassifier(n_neighbors=10, algorithm='auto')
skf = StratifiedKFold(n_splits=2)
pipeline = Pipeline([
('vect', tfidf),
('clf', nn)
])
param_grid = {
'vect__stop_words': [None],
'vect__ngram_range': [(1,2), (1,3)],
'vect__min_df': (0, 0.15),
'vect__max_df': (0.55, 1.0),
}
gs = RandomizedSearchCV(estimator=pipeline, param_distributions=param_grid, cv=skf, n_jobs=-1, verbose=10, return_train_score=True)
feature = strains['all']
target = strains['name']
features = tfidf.fit_transform(feature)
features = pd.DataFrame(features.todense(), columns=tfidf.get_feature_names())
targets = tfidf.fit_transform(target)
targets = pd.DataFrame(targets.todense(), columns=tfidf.get_feature_names())
features.head()
example = pd.DataFrame({'ailment': ['insomnia'],
'type': ['indica'],
'effects': ['focused'],
'flavor': ['earthy']})
ex = tfidf.fit_transform(example)
ex
gs.fit(strains['all'], strains['name'])
gs.predict(example)
example2 = ['Insomnia, Grape']
gs.predict(example2)
from joblib import dump
dump(gs, 'gs_2.joblib', compress=True)
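# Illustrative round-trip check (not in the original notebook): reload the saved
# search object with joblib and confirm it still returns a prediction.
from joblib import load
gs_loaded = load('gs_2.joblib')
print(gs_loaded.predict(['insomnia, relaxed, earthy, indica']))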
###Output
_____no_output_____ |
exercises/02_maximum_likelihood_estimation.ipynb | ###Markdown
DAY 2: Maximum Likelihood Estimation AM207: Advanced Scientific Computing Instructor: Weiwei Pan Due: September 8th, 11:59pm EST **Names of Group Members**: David Ma, [email protected] Will Seaton, [email protected] Minhuan Li, [email protected] Wu You, [email protected] Preston Ching, [email protected]
Johannes Kolberg, [email protected] Learning Goals:
1. empirically investigate the properties of maximum likelihood estimators.
2. gain intuition on why the three desiderata of estimators (consistency, unbiasedness and minimum variance) are useful or important in practice.
3. explore how to evaluate MLE models.
Load necessary libraries
###Code
# import the necessary python libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error
###Output
_____no_output_____
###Markdown
We include auxiliary functions here that we will need to use later
###Code
def generate_toy_data(n_samples=50, noise_var=0.5):
''' Generate toy data set for linear regression'''
n_test_samples = 100
f = lambda x: 0.5 * x + 10
x_train = np.sort(np.random.uniform(-5, 5, n_samples))
x_test = np.sort(np.random.uniform(-5, 5, n_test_samples))
y_train = f(x_train) + np.random.normal(0, noise_var**0.5, n_samples)
y_test = f(x_test) + np.random.normal(0, noise_var**0.5, n_test_samples)
return x_train, y_train, x_test, y_test
###Output
_____no_output_____
###Markdown
--- Problem 1: Maximum Likelihood Estimation of Parameters of a Univariate Normal Distribution Suppose that we have $N$ observed values $y_1, \ldots, y_N$. Let's assume that these are independent observations of a normally distributed random variable $Y \sim \mathcal{N}(\mu, \sigma^2)$. Recall that the maximum likelihood estimators of the parameters of the underlying normal distribution are given by:$$\begin{cases}\mu_{\text{MLE}} = \frac{1}{N} \sum_{n=1}^Ny_n &\\\sigma_{\text{MLE}} = \sqrt{\frac{1}{N}\sum_{n=1}^N(y_n - \mu)^2}&\end{cases}$$In this problem, you will explore the properties of maximum likelihood estimators. **Exercise 1:** Using empirical evidence, determine whether or not the MLE of the mean, $\mu_{\text{MLE}}$, and of the variance, $\sigma^2_{\text{MLE}}$, are: 1. consistent 2. unbiased Explain why we care about consistency and bias. That is, what concrete task(s) can go wrong if we use estimators that are not consistent and/or are biased?
###Code
# Set constants for generating toy data
n_samples = 500 # number of training samples
noise_var = 0.5 # observation noise variance
# Generate training data
y_train = np.random.normal(0, noise_var**0.5, n_samples)
# Visualize the training data
fig, ax = plt.subplots(1, 1, figsize=(10, 5)) # make a figure with one row and one column of size 10x5
ax.hist(y_train, bins=50, color='blue', alpha=0.6, label='training data') # scatter plot the training data
ax.axvline(x=0, color='red', linestyle='dotted', label='true mean') # plot the true mean
ax.legend(loc='best') # plot legend
ax.set_title('training data') # set title
ax.set_xlabel('x') # set x label
ax.set_ylabel('y') # set y label
plt.show() # display the figure
# Check the consistency of MLE mean
mu_MLE = []
var_MLE = []
N_MIN = 20
N_MAX = 4000
STEP = 20
for n in range(N_MIN, N_MAX, STEP):
y_train = np.random.normal(0, noise_var**0.5, n)
mu_MLE.append(np.mean(y_train))
var_MLE.append((1/n)*sum((y_train - np.mean(y_train))**2))
# plot the MLE estimator of mean as a function of sample number
fig, ax = plt.subplots(2, 1, figsize = (10, 8))
ax[0].plot(range(N_MIN, N_MAX, STEP), mu_MLE,'b.',label='Mu')
ax[0].axhline(0, label = 'True')
ax[0].legend()
ax[0].set_title('Mean')
ax[1].plot(range(N_MIN, N_MAX, STEP), var_MLE,'b.',label='Var')
ax[1].axhline(0.5, label = 'True')
ax[1].legend()
ax[1].set_title('Variance')
#plt.hlines(0,N_MIN-10,N_MAX+10,colors='red',linestyles='dotted', label='True Value')
#plt.xlabel('Sample Number n')
#plt.ylabel('MLE Estimator of mean')
#plt.title('Check Consistency of MLE Estimator of mean')
plt.show()
###Output
_____no_output_____
###Markdown
Both estimators appear to be consistent: as the sample size grows, the estimates converge to the true parameter values.
###Code
# Check Unbiasedness of MLE Mean
SAMPLE_SIZE = 5
print(np.mean([np.mean(np.random.normal(0, 0.5**0.5, SAMPLE_SIZE)) for i in range(10000)]))
# Check Unbiasedness of MLE Variance
SAMPLE_SIZE = 5
print(np.mean([np.var(np.random.normal(0, 0.5**0.5, SAMPLE_SIZE)) for i in range(10000)]))
###Output
0.0005692984140775803
0.4000296572904525
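###Markdown
This matches the known bias of the MLE variance estimator (stated here for reference): for i.i.d. samples, $\mathbb{E}\left[\sigma^2_{\text{MLE}}\right] = \frac{N-1}{N}\sigma^2$, so with $N = 5$ and $\sigma^2 = 0.5$ we expect $\frac{4}{5} \times 0.5 = 0.4$, which is what the simulation above produces.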
###Markdown
$\hat\mu$ is unbiased, as its expected value over repeated samples matches the true value. $\hat\sigma^2$ is *not* unbiased, as its expected value differs from the true value: $0.4000 \neq 0.5$. **Answer** We care about consistency because otherwise our estimators would not improve with increasing sample size (we would have no motivation to take more than a few samples). We care about bias when we have a small sample size and cannot rely on consistency by increasing the sample. Using a biased estimator could lead to wrong interpretations. Bias, however, as long as we are aware of it, can be accounted for by computing bounds on the bias. In fact, a biased estimator with smaller variance may be preferred over an unbiased estimator. **Exercise 2:** Part of the modeling process is model evaluation. Assuming that you've successfully maximized the log-likelihood of the data, why would you need to evaluate the MLE model (i.e. isn't the model you learned already guaranteed to fit the data as well as possible)? In the case of linear regression $p(y|x, \theta)$, we evaluate the fit of our MLE estimate $\theta_{\mathrm{MLE}}$ by computing the MSE. For models where we model the distribution of only one variable $p(y | \theta)$, how should we evaluate the fit of $\theta_{\mathrm{MLE}}$? What is hard about evaluating the fit of $\theta_{\mathrm{MLE}}$ for models like $p(y | \theta)$? Is the same difficulty present in the case of linear regression? **Answer:** It is important to evaluate a model because there is no guarantee that the specified model even fits the data (e.g., we calculated the MLE assuming a Gaussian when the actual distribution is something else entirely). Furthermore, depending on our task, fitting the data may not be sufficient - we may wish to incorporate future data or make predictions, and just having the MLE is insufficient for these tasks. We can evaluate the fit of $\theta_{\mathrm{MLE}}$ by calculating its confidence interval via bootstrapping, which allows us to quantify the uncertainty of our estimator. However, it is difficult to evaluate the fit this way, since the length of the confidence interval does not tell us much about the fit itself. What we can do is compare whether the confidence intervals of different estimators overlap. --- Problem 2: Maximum Likelihood Estimation of Parameters of Linear Regression Model In this problem, you will explore the properties of the MLE of linear regression parameters. **Exercise 3:** Empirically determine whether or not the MLE of linear regression parameters is: 1. consistent 2. unbiased
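###Markdown
Aside: a minimal sketch of the bootstrap confidence interval mentioned in the answer to Exercise 2 above; the number of resamples and the confidence level are arbitrary illustrative choices.
```python
# Bootstrap a 95% interval for the MLE of the mean from one simulated sample.
rng = np.random.default_rng(0)
y = rng.normal(0, 0.5 ** 0.5, 100)
boot_means = np.array([rng.choice(y, size=len(y), replace=True).mean() for _ in range(2000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print('MLE mean: {:.3f}, 95% bootstrap CI: ({:.3f}, {:.3f})'.format(y.mean(), lo, hi))
```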
###Code
# Set constants for generating toy data
n_samples = 20 # number of training samples
noise_var = 0.5 # observation noise variance
# Generate training data
x_train, y_train, _, _ = generate_toy_data(n_samples=n_samples, noise_var=noise_var)
# Visualize the training data
fig, ax = plt.subplots(1, 1, figsize=(10, 5)) # make a figure with one row and one column of size 10x5
ax.scatter(x_train, y_train, color='blue', alpha=0.8, label='training data') # scatter plot the training data
ax.legend(loc='best') # plot legend
ax.set_title('training data') # set title
ax.set_xlabel('x') # set x label
ax.set_ylabel('y') # set y label
plt.show() # display the figure
np.random.seed(42)
a_MLE = []
b_MLE = []
N_MIN = 50
N_MAX = 3000
STEP = 50
for n in range(N_MIN, N_MAX, STEP):
x_train, y_train, _, _ = generate_toy_data(n_samples=n, noise_var=0.5)
lr = LinearRegression().fit(x_train.reshape((-1, 1)), y_train.reshape((-1, 1)))
slope_mle = lr.coef_[0][0]
intercept_mle = lr.intercept_[0]
a_MLE.append(intercept_mle)
b_MLE.append(slope_mle)
fig, ax = plt.subplots(2, 1, figsize = (10, 8))
ax[0].plot(range(N_MIN, N_MAX, STEP), a_MLE,'b.',label='MLE')
ax[0].axhline(10, label = 'True')
ax[0].legend()
ax[0].set_title('Intercept')
ax[1].plot(range(N_MIN, N_MAX, STEP), b_MLE,'b.',label='MLE')
ax[1].axhline(0.5, label = 'True')
ax[1].legend()
ax[1].set_title('Slope')
###Output
_____no_output_____
###Markdown
**Answer**: Consistent: for both the intercept and the slope, the estimates converge around the true parameter values as the sample size grows.
###Code
np.random.seed(1337)
n_samples = 10
intercepts = []
slopes = []
for i in range(1000):
x_train, y_train, _, _ = generate_toy_data(n_samples=n_samples, noise_var=0.5)
lr = LinearRegression().fit(x_train.reshape((-1, 1)), y_train.reshape((-1, 1)))
slope_mle = lr.coef_[0][0]
intercept_mle = lr.intercept_[0]
intercepts.append(intercept_mle)
slopes.append(slope_mle)
print('Intercept: {}\nSlope: {}'.format(np.mean(intercepts), np.mean(slopes)))
###Output
Intercept: 10.010877922889762
Slope: 0.49883416315462786
###Markdown
**Answer**: Unbiased: the expected value over repeated sampling matches the true values (intercept 10 and slope 0.5). **Exercise 4:** Empirically investigate the variance of the MLE of linear regression parameters. Specifically, describe which factors impact the variance and how. *Hint:* think about the impact of all the data-generating parameters you can control. Explain why we care about variance in practice: that is, what concrete task(s) can go wrong if we use estimators that have high variance?
###Code
print('Intercept: {}\nSlope: {}'.format(np.var(intercepts), np.var(slopes)))
###Output
Intercept: 0.056016237103919314
Slope: 0.007154171967911464
###Markdown
**Answer**:
* Sample size: as the sample size increases, the variance of our estimator falls.
* Noise variance: larger variance in the observation noise yields larger variance in the estimator.
Intuitively, the variance of an estimator tells us how much spread there is in the estimates it produces. If we use estimators with high variance, they become overly sensitive to the noise and to small changes in the data, and we run the risk of overfitting.
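A minimal sketch below makes both effects concrete (an illustrative addition; it reuses the `generate_toy_data` helper and `LinearRegression` already used in this notebook, and the specific sample sizes and noise levels are arbitrary choices):
###Code
# Illustrative check: spread of the slope MLE as a function of sample size and noise level
np.random.seed(0)
for n_samples_ in [10, 100]:
    for noise_var_ in [0.1, 1.0]:
        slopes_ = []
        for _ in range(500):
            x_tr, y_tr, _, _ = generate_toy_data(n_samples=n_samples_, noise_var=noise_var_)
            fit_ = LinearRegression().fit(x_tr.reshape(-1, 1), y_tr.reshape(-1, 1))
            slopes_.append(fit_.coef_[0][0])
        print('n = {:4d}, noise_var = {:.1f} -> slope variance {:.5f}'.format(
            n_samples_, noise_var_, np.var(slopes_)))
###Output
_____no_output_____
###Markdown
The cell below visualizes this spread directly by refitting the model on many random training samples: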
###Code
# Visualize the MLE model
fig, ax = plt.subplots(1, 1, figsize=(10, 5)) # make a figure with one row and one column of size 10x5
ax.scatter(x_train, y_train, color='blue', alpha=0.8, label='training data') # scatter plot the training data
# Plot the MLE model for random samples of the training set
n_trials = 50
for i in range(n_trials):
x_train, y_train, _, _ = generate_toy_data(n_samples=n_samples, noise_var=noise_var) # generate a random training samples from the true distribution over x
linear_regressor.fit(x_train.reshape(-1, 1),y_train.reshape(-1, 1)) # fit model to the training data
slope_mle = linear_regressor.coef_[0][0] # extract the MLE for slope
intercept_mle = linear_regressor.intercept_[0] # extract the MLE for intercept
y_train_pred = linear_regressor.predict(x_train.reshape(-1, 1)) # make predictions on training data
if i == 0:
ax.plot(x_train, y_train_pred, color='red', alpha=0.2, label='MLE model on random sample of training data') # plot the learned linear regression function by plotting the predictions
else:
ax.plot(x_train, y_train_pred, color='red', alpha=0.2) # plot the learned linear regression function by plotting the predictions
ax.set_title('Visualization of the MLE model and training data') # set title
ax.legend(loc='best') # display legend
ax.set_xlabel('x') # set x label
ax.set_ylabel('y') # set y label
plt.show() # display the figure
###Output
_____no_output_____ |
galactic/ZTF18abktckv.ipynb | ###Markdown
Fink case study: Galactic Science
Goal
This notebook is a study of [ZTF18abktckv](https://fink-portal.org/ZTF18abktckv).
Useful links
- API documentation: https://fink-portal.org/api
- Schema of Fink database: https://fink-portal.org/api/v1/columns
- CDS xmatch service: http://cdsxmatch.u-strasbg.fr/xmatch
- SIMBAD description of classes: http://simbad.u-strasbg.fr/simbad/sim-display?data=otypes
- LIA: https://github.com/dgodinez77/LIA
Environment set up
To run this notebook, you need to import the following libraries (some are already installed in colab):
###Code
# !pip install seaborn
# !pip install fink_science
# !pip install astropy
# !pip install pyLIMA
import io
import requests
import pandas as pd
import numpy as np
from fink_science.conversion import dc_mag
from astropy.coordinates import SkyCoord
from pyLIMA import event
from pyLIMA import telescopes
from pyLIMA import microlmodels, microltoolbox
from pyLIMA.microloutputs import create_the_fake_telescopes
import matplotlib.pyplot as plt
from matplotlib.offsetbox import AnchoredText
import seaborn as sns
sns.set_context('talk')
def estimateGaiaError(mag):
""" Estimate Gaia error from magnitude
"""
a1=0.2
b1= -5.3#-5.2
log_err1 = a1*mag + b1
a2=0.2625
b2= -6.3625#-6.2625
log_err2 = a2*mag + b2
if (mag<13.5): expectedStdAtBaselineMag = 10**(a1*13.5+b1)
if (mag>=13.5 and mag<17) : expectedStdAtBaselineMag = 10**log_err1
if (mag>=17) : expectedStdAtBaselineMag = 10**log_err2
#this works until 21 mag.
return expectedStdAtBaselineMag*1
def get_model(current_event, mjd=True):
""" Get time and magnitude from fitted model
"""
# Model
results = current_event.fits[0]
create_the_fake_telescopes(results, results.fit_results)
telescope_ = results.event.fake_telescopes[0]
flux_model = mulens_model.compute_the_microlensing_model(
telescope_, results.model.compute_pyLIMA_parameters(results.fit_results)
)[0]
time = telescope_.lightcurve_flux[:, 0]
magnitude = microltoolbox.flux_to_magnitude(flux_model)
if mjd:
time = np.array([t - 2400000.5 for t in time])
# params = results.fit_results
# print(params)
return time, magnitude
###Output
_____no_output_____
###Markdown
Case study: Gravitational microlensing
We deployed a science module to find (early) gravitational microlensing events. The microlensing classification module is based on the Lens Identification Algorithm (LIA) presented in [Godines et al. (2019)](). In short, a Random Forest algorithm is trained on simulated light curves similar in cadence and noise to the survey of interest (currently ZTF). Of course, we receive _alerts_ with a limited photometry history (up to 30 days with ZTF), so the game is very challenging!
Association criterion and rate
An event is considered a microlensing candidate if the classifier simultaneously favoured microlensing in all available bands (`g`, `r`). In addition, we make a cut on the number of times the light of this event has varied (at 3 sigma) since the beginning of the survey. This last cut removes variable stars with long-term trends.
###Code
# Get all latests alerts associated to Microlensing
r = requests.post(
'https://fink-portal.org/api/v1/latests',
json={
'class': 'Microlensing candidate',
'n': '5000',
}
)
# Format output in a DataFrame
pdf_mulens = pd.read_json(io.BytesIO(r.content))
print(len(pdf_mulens))
###Output
_____no_output_____
###Markdown
We currently have ~4000 alerts flagged as microlensing candidates. Let's see if they are associated with a known transient in SIMBAD:
###Code
pdf_mulens.groupby(by='d:cdsxmatch')['d:mulens'].count()
###Output
_____no_output_____
###Markdown
A priori no (`cdsxmatch = Unknown`). Note that `cdsxmatch = Fail` means we couldn't perform the crossmatch with SIMBAD in real time (downtime, network error, ...). We usually recompute it at a later time.
###Code
len(pdf_mulens.groupby(by='i:objectId').count())
###Output
_____no_output_____
###Markdown
There are 1418 unique objects with 4355 alerts.
Closer look at a candidate
We could look at all the candidates, but let's focus on one promising candidate:
###Code
r = requests.post(
'https://fink-portal.org/api/v1/objects',
json={
'objectId': 'ZTF18abktckv',
'withupperlim': 'True',
}
)
# Format output in a DataFrame
pdf_mulens_single = pd.read_json(io.BytesIO(r.content))
###Output
_____no_output_____
###Markdown
This candidate is located far from the galactic plane:
###Code
gal = SkyCoord(pdf_mulens['i:ra'], pdf_mulens['i:dec'], unit='deg').galactic
fig = plt.figure(figsize=(15, 10))
ax = plt.subplot(projection='aitoff')
plt.scatter(gal.l.wrap_at('180d').radian, gal.b.radian, color='C0', alpha=1, marker='.')
# Add ZTF18abktckv
gal_single = SkyCoord(pdf_mulens_single['i:ra'], pdf_mulens_single['i:dec'], unit='deg').galactic
plt.scatter(
gal_single.l.wrap_at('180d').radian,
gal_single.b.radian, color='C1', alpha=1, marker='*', s=200)
# Faking the region enclosing the galactic plane
x = np.arange(-180, 180, 0.1)
plt.plot(x, [20*np.pi/180]*len(x), color='C3', alpha=0.5)
plt.plot(x, [-20*np.pi/180]*len(x), color='C3', alpha=0.5)
plt.grid();
pdf_mulens_single = pdf_mulens_single.sort_values('i:jd')
mjd = pdf_mulens_single['i:jd'].apply(lambda x: x - 2400000.5)
fig = plt.figure(figsize=(15, 5))
colordic = {1: 'C0', 2: 'C1'}
filtdic = {1: 'g', 2: 'r'}
for filt in np.unique(pdf_mulens_single['i:fid']):
maskFilt = pdf_mulens_single['i:fid'] == filt
# The column `d:tag` is used to check data type
maskValid = pdf_mulens_single['d:tag'] == 'valid'
plt.errorbar(
pdf_mulens_single[maskValid & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf_mulens_single[maskValid & maskFilt]['i:magpsf'],
pdf_mulens_single[maskValid & maskFilt]['i:sigmapsf'],
ls = '', marker='o', color=colordic[filt], label='{} band'.format(filtdic[filt])
)
maskUpper = pdf_mulens_single['d:tag'] == 'upperlim'
plt.plot(
pdf_mulens_single[maskUpper & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf_mulens_single[maskUpper & maskFilt]['i:diffmaglim'],
ls='', marker='v', color=colordic[filt], markerfacecolor='none'
)
maskBadquality = pdf_mulens_single['d:tag'] == 'badquality'
plt.errorbar(
pdf_mulens_single[maskBadquality & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf_mulens_single[maskBadquality & maskFilt]['i:magpsf'],
pdf_mulens_single[maskBadquality & maskFilt]['i:sigmapsf'],
ls='', marker='^', color=colordic[filt]
)
# Code might be shorter if we collect 'valid', 'upperquality' and 'badquality' into a single list (Petro)
# Highlight dates where it was flagged as an ML event
c0 = pdf_mulens_single['d:mulens'] > 0.0
jd0 = np.min(pdf_mulens_single[c0]['i:jd'].values)
minjd = np.min(pdf_mulens_single[c0]['i:jd'].values) - 30
maxjd = np.max(pdf_mulens_single[c0]['i:jd'].values)
plt.axvline(jd0 - 2400000.5, color='black', ls='--')
plt.axvline(minjd - 2400000.5, color='C3')
plt.axvline(maxjd - 2400000.5, color='C3')
plt.fill_betweenx([10, 25], minjd - 2400000.5, maxjd - 2400000.5, alpha=0.1, color='black')
# Why not convert all the dates at the beginning? (Petro)
plt.ylim(12, 22)
plt.gca().invert_yaxis()
plt.legend()
# plt.title(
# 'Object {}'.format(
# pdf_mulens_single['i:objectId'].values[0]
# )
# )
plt.xlabel('Modified Julian Date [UTC]')
plt.ylabel('Difference Magnitude');
###Output
_____no_output_____
###Markdown
_Circles (●) with error bars show valid alerts that pass the Fink quality cuts. Upper triangles with error bars (▲) represent alert measurements that do not satisfy the Fink quality cuts, but are nevertheless contained in the history of valid alerts and used by Fink science modules. Lower triangles (▽) represent the 5-sigma magnitude limit in the difference image based on PSF-fit photometry contained in the history of valid alerts._
The data in between the red lines are favoured as microlensing by the classifier, and the first microlensing trigger is shown with the black line (recall that alerts carry up to 30 days of history). On the Fink Science Portal, you can then try to extract microlensing parameters. The fit is done using [pyLIMA](https://github.com/ebachelet/pyLIMA), described in [Bachelet et al (2017)](https://ui.adsabs.harvard.edu/abs/2017AJ....154..203B/abstract). We used a simple PSPL model to fit the data. Here is a [link](https://fink-portal.org/ZTF18abktckv) to this event in the portal. Try pressing "Microlensing" in the upper right corner and "Fit data" on the right.
Let's try a different fit corresponding to a Uniform-Source Binary Lens (USBL) model:
###Code
# Take only valid measurements
pdf = pdf_mulens_single[pdf_mulens_single['d:tag'] == 'valid'].sort_values('i:jd', ascending=False)
# Use DC magnitude instead of difference mag
mag_dc, err_dc = np.transpose(
[
dc_mag(*args) for args in zip(
pdf['i:fid'].astype(int).values,
pdf['i:magpsf'].astype(float).values,
pdf['i:sigmapsf'].astype(float).values,
pdf['i:magnr'].astype(float).values,
pdf['i:sigmagnr'].astype(float).values,
pdf['i:magzpsci'].astype(float).values,
pdf['i:isdiffpos'].values
)
]
)
# pyLIMA magic
current_event = event.Event()
current_event.name = pdf['i:objectId'].values[0]
current_event.ra = pdf['i:ra'].values[0]
current_event.dec = pdf['i:dec'].values[0]
filts = {1: 'g', 2: 'r'}
for fid in np.unique(pdf['i:fid'].values):
mask = pdf['i:fid'].values == fid
telescope = telescopes.Telescope(
name='ztf_{}'.format(filts[fid]),
camera_filter=format(filts[fid]),
light_curve_magnitude=np.transpose(
[
pdf['i:jd'].values[mask],
mag_dc[mask],
err_dc[mask]
]
),
light_curve_magnitude_dictionnary={
'time': 0,
'mag': 1,
'err_mag': 2
}
)
current_event.telescopes.append(telescope)
# USBL model
mulens_model = microlmodels.create_model('USBL', current_event)
# USBL is a 7 parameters model
mulens_model.parameters_guess = [
2459438.8042624635, 0.28967950357242556,
28.54840874346009, 0.04989598439800191,
0.272393673849404, -2.8730822458911205,
0.23513925488422255-np.pi
]
# Let's use the TRF method
current_event.fit(mulens_model, 'TRF')
# Number of degrees of freedom
dof = len(pdf) - len(mulens_model.parameters_guess) - 1
results = current_event.fits[0]
normalised_lightcurves = microltoolbox.align_the_data_to_the_reference_telescope(results, 0, results.fit_results)
# Model
create_the_fake_telescopes(results, results.fit_results)
telescope_ = results.event.fake_telescopes[0]
flux_model = mulens_model.compute_the_microlensing_model(telescope_, results.model.compute_pyLIMA_parameters(results.fit_results))[0]
time = telescope_.lightcurve_flux[:, 0]
magnitude = microltoolbox.flux_to_magnitude(flux_model)
###Output
_____no_output_____
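###Markdown
For comparison, the same machinery can fit the simpler point-source point-lens (PSPL) model mentioned above (an illustrative sketch, not part of the original analysis; it assumes the `current_event` object built in the previous cell and pyLIMA's `'PSPL'` model name):
###Code
# Hypothetical PSPL comparison fit on the same event (appended as a second fit)
pspl_model = microlmodels.create_model('PSPL', current_event)
current_event.fit(pspl_model, 'TRF')
current_event.fits[-1].produce_outputs()
plt.show()
###Output
_____no_output_____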
###Markdown
Finally plot the fit on top of the (rescaled) measurements:
###Code
results.produce_outputs()
plt.show()
from matplotlib.gridspec import GridSpec  # GridSpec is used below but is otherwise only imported in a later cell
fig = plt.figure(figsize=(15, 10))
gs = GridSpec(2, 1, height_ratios=[3, 1], hspace=0.05)
ax1 = fig.add_subplot(gs[0])
ax2 = fig.add_subplot(gs[1], sharex=ax1)
name_telescopes = ['ZTF/Fink (g)', 'ZTF/Fink (r)']
for ax in [ax1]:
for index, name in enumerate(name_telescopes):
ax.errorbar(
[t - 2400000.5 for t in normalised_lightcurves[index][:, 0]],
normalised_lightcurves[index][:, 1],
normalised_lightcurves[index][:, 2],
ls='',
marker='o',
markersize=3,
label=name
)
if index == 0:
ax.plot(
*get_model(current_event),
color='black', ls='--', alpha=0.5
)
plt.setp(ax1.get_xticklabels(), visible=False)
ax1.set_ylabel('Magnitude');
ax1.invert_yaxis()
ax1.legend(ncol=2)
ax1.grid(alpha=0.5)
t_model, mag_model = get_model(current_event)
for index, name in enumerate(name_telescopes):
mag_inter = np.interp(
[t - 2400000.5 for t in normalised_lightcurves[index][:, 0]],
t_model,
mag_model
)
ax2.errorbar(
[t - 2400000.5 for t in normalised_lightcurves[index][:, 0]],
mag_inter - normalised_lightcurves[index][:, 1],
normalised_lightcurves[index][:, 2],
ls='',
marker='o',
markersize=3,
label=name
)
ax2.grid(alpha=0.5)
ax2.set_xlabel('Modified Julian Date [UTC]');
ax2.set_ylim(-0.2, 0.2)
ax2.set_ylabel('Difference mag');
# fitted parameters
names = results.model.model_dictionnary
params = results.fit_results
err = np.diag(np.sqrt(results.fit_covariance))
msg = """
# Fitted parameters
t0: {:.2f} +/- {:.2f} (MJD)
tE: {:.2f} +/- {:.2f} (days)
u0: {:.2f} +/- {:.2f}
rho: {:.2f} +/- {:.2f}
logs: {:.2f} +/- {:.2f}
logq: {:.2f} +/- {:.2f}
alpha: {:.2f} +/- {:.2f}
fs_ztf_g: {:.2f} +/- {:.2f}
g_ztf_g: {:.2f} +/- {:.2f}
fs_ztf_r: {:.2f} +/- {:.2f}
g_ztf_r: {:.2f} +/- {:.2f}
chi2/dof: {:.2f}
""".format(
params[names['to']] - 2400000.5,
err[names['to']],
params[names['tE']],
err[names['tE']],
params[names['uo']],
err[names['uo']],
params[names['rho']],
err[names['rho']],
params[names['logs']],
err[names['logs']],
params[names['logq']],
err[names['logq']],
params[names['alpha']],
err[names['alpha']],
params[names['fs_ztf_g']],
err[names['fs_ztf_g']],
params[names['g_ztf_g']],
err[names['g_ztf_g']],
params[names['fs_ztf_r']],
err[names['fs_ztf_r']],
params[names['g_ztf_r']],
err[names['g_ztf_r']],
params[-1] / dof
)
print(msg)
###Output
_____no_output_____
###Markdown
Not too bad - although we need much more data to conclude on the nature of the object ;-) We did not have the baseline, but now we will take it from the ZTF data.
Inspecting the event using the full ZTF lightcurve
To further check this event, we can query its data using the ZTF lightcurve API:
###Code
maskNone = pdf_mulens_single['d:tag'] == 'valid'
ra0 = np.mean(pdf_mulens_single[maskNone]['i:ra'].values)
dec0 = np.mean(pdf_mulens_single[maskNone]['i:dec'].values)
r = requests.post(
'https://irsa.ipac.caltech.edu/cgi-bin/ZTF/nph_light_curves',
data={'POS': 'CIRCLE {} {} 0.0004'.format(ra0, dec0),
'BAD_CATFLAGS_MASK': 32768,
'FORMAT': 'csv'
}
)
pdf_ZTF = pd.read_csv(io.StringIO(r.text))
pdf_mulens_single = pdf_mulens_single.sort_values('i:jd')
mjd = pdf_mulens_single['i:jd'].apply(lambda x: x - 2400000.5)
fig = plt.figure(figsize=(15, 6))
colordic = {1: 'C0', 2: 'C1'}
filtdic = {1: 'g', 2: 'r'}
for filt in np.unique(pdf_mulens_single['i:fid']):
maskFilt = pdf_mulens_single['i:fid'] == filt
# The column `d:tag` is used to check data type
maskValid = pdf_mulens_single['d:tag'] == 'valid'
# Use DC magnitude
mag_dc, err_dc = np.transpose(
[
dc_mag(*args) for args in zip(
pdf_mulens_single[maskValid & maskFilt]['i:fid'].astype(int).values,
pdf_mulens_single[maskValid & maskFilt]['i:magpsf'].astype(float).values,
pdf_mulens_single[maskValid & maskFilt]['i:sigmapsf'].astype(float).values,
pdf_mulens_single[maskValid & maskFilt]['i:magnr'].astype(float).values,
pdf_mulens_single[maskValid & maskFilt]['i:sigmagnr'].astype(float).values,
pdf_mulens_single[maskValid & maskFilt]['i:magzpsci'].astype(float).values,
pdf_mulens_single[maskValid & maskFilt]['i:isdiffpos'].values
)
]
)
plt.errorbar(
pdf_mulens_single[maskValid & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
mag_dc,
err_dc,
ls = '', marker='o', color=colordic[filt],
label='{} band'.format(filtdic[filt])
)
# Highlight dates where it was flagged as an ML event
c0 = pdf_mulens_single['d:mulens'] > 0.0
jd0 = np.min(pdf_mulens_single[c0]['i:jd'].values)
minjd = np.min(pdf_mulens_single[c0]['i:jd'].values) - 30
maxjd = np.max(pdf_mulens_single[c0]['i:jd'].values)
plt.axvline(jd0 - 2400000.5, color='black', ls='--')
plt.axvline(minjd - 2400000.5, color='C3')
plt.axvline(maxjd - 2400000.5, color='C3')
plt.fill_betweenx([15, 25], minjd - 2400000.5, maxjd - 2400000.5, alpha=0.1, color='black')
colordic = {'zg': 'C0', 'zr': 'C1', 'zi': 'C2'}
for filt in np.unique(pdf_ZTF['filtercode']):
maskFilt = pdf_ZTF['filtercode'] == filt
plt.errorbar(
pdf_ZTF[maskFilt]['mjd'],
pdf_ZTF[maskFilt]['mag'],
pdf_ZTF[maskFilt]['magerr'],
ls='', color=colordic[filt], alpha=0.5,
label='ZTF DR {} band'.format(filt))
plt.ylim(16, 14)
#plt.gca().invert_yaxis()
plt.legend(ncol=3)
plt.title(
'Object {}'.format(
pdf_mulens_single['i:objectId'].values[0]
)
)
plt.xlabel('Modified Julian Date [UTC]')
plt.ylabel('Magnitude');
###Output
_____no_output_____
###Markdown
Certainly not a variable star! Perfect! Now we will test the model again but this time we will have the baseline.
###Code
# pyLIMA magic
current_event = event.Event()
current_event.name = 'ZTF18abktckv'
current_event.ra = pdf_ZTF['ra'].values[0]
current_event.dec = pdf_ZTF['dec'].values[0]
filts = {'zg': 'g', 'zr': 'r', 'zi': 'i'}
for fid in ['zg','zr', 'zi']:
mask = pdf_ZTF['filtercode'].values == fid
telescope = telescopes.Telescope(
name='ztf_{}'.format(filts[fid]),
camera_filter=format(filts[fid]),
light_curve_magnitude=np.transpose(
[
pdf_ZTF['mjd'].values[mask]+2400000.5,
pdf_ZTF['mag'][mask],
pdf_ZTF['magerr'][mask]
]
),
light_curve_magnitude_dictionnary={
'time': 0,
'mag': 1,
'err_mag': 2
}
)
current_event.telescopes.append(telescope)
### Gaia
lightcurve = np.loadtxt('./Gaia.dat',dtype=str)
mask = (lightcurve[:,1]=='untrusted') | (lightcurve[:,1]=='null')
lightcurve = lightcurve[~mask]
lightcurve = lightcurve[:,[0,1]].astype(float)
errors = [estimateGaiaError(i) for i in lightcurve[:,1]]
lightcurve = np.c_[lightcurve,errors]
telescope = telescopes.Telescope(
name='Gaia',
camera_filter='G',
light_curve_magnitude=lightcurve
)
current_event.telescopes.append(telescope)
# USBL model -- TRF
mulens_model = microlmodels.create_model('USBL', current_event)
mulens_model.parameters_guess = [2459438.8042624635, 0.28967950357242556, 28.54840874346009, 0.04989598439800191, 0.272393673849404, -2.8730822458911205, 0.23513925488422255-np.pi]
current_event.fit(mulens_model, 'TRF')
current_event.fits[0].produce_outputs()
plt.show()
# USBL model -- MCMC
mulens_model = microlmodels.create_model('USBL', current_event)
mulens_model.parameters_guess = [2459438.8042624635, 0.28967950357242556, 28.54840874346009, 0.04989598439800191, 0.272393673849404, -2.8730822458911205, 0.23513925488422255-np.pi]
current_event.fit(mulens_model, 'MCMC')
current_event.fits[1].produce_outputs()
plt.show()
# 7 parameters model (USBL)
dof = len(pdf_ZTF) - len(mulens_model.parameters_guess) - 1
results = current_event.fits[0]
normalised_lightcurves = microltoolbox.align_the_data_to_the_reference_telescope(
results, 0, results.fit_results)
# Model
create_the_fake_telescopes(results, results.fit_results)
telescope_ = results.event.fake_telescopes[0]
flux_model = mulens_model.compute_the_microlensing_model(telescope_, results.model.compute_pyLIMA_parameters(results.fit_results))[0]
time = telescope_.lightcurve_flux[:, 0]
magnitude = microltoolbox.flux_to_magnitude(flux_model)
# fitted parameters
names = results.model.model_dictionnary
params = results.fit_results
err = np.diag(np.sqrt(results.fit_covariance))
l = []
for name in ['to', 'uo', 'tE', 'rho', 'logs', 'logq', 'alpha']:
l.append(getattr(current_event.fits[1].outputs.fit_parameters, name))
print(l)
# restore default plot settings
import matplotlib as mpl
mpl.rcParams.update(mpl.rcParamsDefault)
from matplotlib.gridspec import GridSpec
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('talk')
fig = plt.figure(figsize=(15, 10))
gs = GridSpec(2, 1, height_ratios=[4, 1], hspace=0.05)
ax1 = fig.add_subplot(gs[0])
ax2 = fig.add_subplot(gs[1], sharex=ax1)
axins = ax1.inset_axes([0.15, 0.3, 0.65, 0.5])
name_telescopes = ['ZTF (g)', 'ZTF (r)', 'ZTF (i)', 'Gaia']
# name_telescopes = ['ZTF (g)', 'ZTF (r)', 'ZTF (i)']
for ax in [ax1, axins]:
for index, name in enumerate(name_telescopes):
ax.errorbar(
[t - 2400000.5 for t in normalised_lightcurves[index][:, 0]],
normalised_lightcurves[index][:, 1],
normalised_lightcurves[index][:, 2],
ls='',
marker='o',
markersize=3,
label=name
)
if index == 0:
ax.plot(
*get_model(current_event),
color='black', ls='--', alpha=0.5
)
plt.setp(ax1.get_xticklabels(), visible=False)
ax1.set_ylabel('Magnitude');
ax1.invert_yaxis()
ax1.legend(ncol=2)
ax1.grid(alpha=0.5)
axins.set_xlim(59350, 59550)
axins.invert_yaxis()
t_model, mag_model = get_model(current_event)
for index, name in enumerate(name_telescopes):
mag_inter = np.interp(
[t - 2400000.5 for t in normalised_lightcurves[index][:, 0]],
t_model,
mag_model
)
ax2.errorbar(
[t - 2400000.5 for t in normalised_lightcurves[index][:, 0]],
mag_inter - normalised_lightcurves[index][:, 1],
normalised_lightcurves[index][:, 2],
ls='',
marker='o',
markersize=3,
label=name
)
ax2.grid(alpha=0.5)
ax2.set_xlabel('Modified Julian Date [UTC]');
ax2.set_ylim(-0.1, 0.1)
ax2.set_ylabel('Difference mag');
msg = """
# Fitted parameters
t0: {:.2f} +/- {:.2f} (MJD)
tE: {:.2f} +/- {:.2f} (days)
u0: {:.2f} +/- {:.2f}
rho: {:.2f} +/- {:.2f}
logs: {:.2f} +/- {:.2f}
logq: {:.2f} +/- {:.2f}
alpha: {:.2f} +/- {:.2f}
fs_ztf_g: {:.2f} +/- {:.2f}
g_ztf_g: {:.2f} +/- {:.2f}
fs_ztf_r: {:.2f} +/- {:.2f}
g_ztf_r: {:.2f} +/- {:.2f}
chi2/dof: {:.2f}
""".format(
params[names['to']] - 2400000.5,
err[names['to']],
params[names['tE']],
err[names['tE']],
params[names['uo']],
err[names['uo']],
params[names['rho']],
err[names['rho']],
params[names['logs']],
err[names['logs']],
params[names['logq']],
err[names['logq']],
params[names['alpha']],
err[names['alpha']],
params[names['fs_ztf_g']],
err[names['fs_ztf_g']],
params[names['g_ztf_g']],
err[names['g_ztf_g']],
params[names['fs_ztf_r']],
err[names['fs_ztf_r']],
params[names['g_ztf_r']],
err[names['g_ztf_r']],
params[-1] / dof
)
print(msg)
###Output
_____no_output_____ |
model_code/xgboost_final_empatica.ipynb | ###Markdown
In this script we want to quantitatively assess how our model performs on data we collected ourselves using the Empatica wristband. We use the same parameters as the final XGBoost model, which was trained on the full JSI dataset, but we train on a smaller set of non-feature-expanded data, which is analogous to the data we obtain from the Empatica wristband.
###Code
import numpy as np
import pandas as pd
import os
from sklearn.metrics import mean_squared_error as MSE
from sklearn.metrics import mean_absolute_error as MAE
from sklearn.model_selection import train_test_split
import xgboost as xgb
import time
import matplotlib.pyplot as plt
# All subject data included in one .csv file 'pwrtbl_all.csv'
# This data has had outliers removed from sensors and has had smoothing applied
# The person division occur at the following points
# Person A [0:796219] or [:796219]
# Person B [A:1276358]
# Person C [B:1804959]
# Person D [C:2311275]
# Person E [D:2847245]
# Person F [E:3245064]
# Person G [F:3763122]
# Person H [G:4160941]
# Person I [H:4712016]
# Person J [I:5147172] or [I:]
# Load the data
# Loading this 5M line .csv file in with pandas and then converting to numpy is faster than directly loading into numpy with np.genfromtxt()
dataraw = pd.read_csv('tbl_all.csv')
# Exclude all the extraneous data so we just have signals that are easily obtained with the empatica sensor as well
dataraw = dataraw[['ZephyrHR', 'WRIST_accx', 'WRIST_accy', 'WRIST_accz', 'BodyGSR', 'BodyST', 'COSMED']]
# Convert to numpy array
dataraw = dataraw.to_numpy()
# Just splitting the people into separate arrays
divisions = [0, 796219, 1276358, 1804959, 2311275, 2847245, 3245064, 3763122, 4160941, 4712016, 5147172]
data = []
for i in range(0,len(divisions)-1):
data.append(dataraw[divisions[i]:divisions[i+1],:])
tr = []; ts = []
# Define sets describing who is included in the training and testing sets
fullset = set({0,1,2,3,4,5,6,7,8,9})
trainset = set({0,1,2,3,4,5,6,7})
for i in trainset:
tr.append(data[i])
# Set difference to find the persons in the test set
for i in fullset - trainset:
ts.append(data[i])
# Now concatenate the training and testing sets each into a continuous array
tr = np.concatenate(tr, axis = 0)
ts = np.concatenate(ts, axis = 0)
# Break into the X and y arrays for train and test
# Last columns corresponds to the MET value
Xtr = tr[:,:-1]; ytr = tr[:,-1]
Xts = ts[:,:-1]; yts = ts[:,-1]
# Cleaning up all the previous arrays to save memory
del dataraw, data, tr, ts
# Now we will train an XGBoost model
# There are a lot of parameters here, and it is important to understand what each of them does when building our model
# Learning_rate - boosting learning rate, how quickly and strongly the learners are added to the ensemble
# Colsample_bytree - percentage of columns randomly sampled for each tree or estimator
# Max_depth - maximum depth per tree. Used as a way to tune the "weakness" of the learners. In general this value is very low, between 1 and 5
# N_estimators - number of estimators or decision trees that comprise the overall ensemble
# Reg_alpha - L1 regularization weight
# Reg_lambda - L2 regularization weight
# Gamma - min split loss, essentially the gain a potential split must provide to be considered. This effectively prunes the trees and prevents them from overfitting with meaningless splits
start = time.time()
mdl = xgb.XGBRegressor( \
learning_rate = 0.05, \
colsample_bytree = 0.5, \
max_depth = 5, \
reg_alpha = 0, \
reg_lambda = 1, \
gamma = 50, \
n_estimators = 200, \
verbosity = 1 \
).fit(Xtr,ytr)
pred = mdl.predict(Xts)
end = time.time()
print('RMSE:',np.sqrt(MSE(yts,pred)),'\tMAE:',MAE(yts,pred), '\tTime: ', (end - start))
plt.figure(figsize = (10,7))
plt.plot(pred, label = 'Predicted MET')
plt.plot(yts, label = 'Actual MET')
plt.xlabel('Instance'); plt.ylabel('Normalized MET'); plt.legend()
plt.show()
###Output
_____no_output_____
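###Markdown
Before moving on, it is worth checking which of the six input signals the model leans on most (an illustrative addition; `xgb.plot_importance` is part of the xgboost package imported above, and its default importance type counts how often a feature is used to split):
###Code
# Illustrative check: feature importance of the trained XGBoost model
# Features appear as f0-f5 because the model was trained on a plain NumPy array
xgb.plot_importance(mdl)
plt.title('Feature importance')
plt.show()
###Output
_____no_output_____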
###Markdown
ROC Curve
###Code
from sklearn.metrics import roc_curve, auc, accuracy_score, precision_score, recall_score
# To construct a ROC curve we need to convert this regression problem into a classification problem. Since we normalized our data, it is centered around 0; here we threshold at 0.5, classifying MET values above 0.5 as true (1) and values at or below 0.5 as false (0).
yts_class = yts > 0.5
pred_class = pred > 0.5
# yts_class = np.zeros(len(yts))
# pred_class = np.zeros(len(pred))
# divs = [-1,-0.75,-0.5,-0.25,0,0.25,0.5,0.75,1]
# for i in range(len(divs)-1):
# yts_class += i * ((divs[i] < yts) * (yts < divs[i+1]))
# pred_class += i * ((divs[i] < pred) * (pred < divs[i+1]))
print('Accuracy:', accuracy_score(yts_class, pred_class))
print('Precision:', precision_score(yts_class, pred_class))
print('Recall:', recall_score(yts_class, pred_class))
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix
import matplotlib.cm as cm
# Plotting confusion matrix
C = confusion_matrix(yts_class, pred_class, labels = [0,1], normalize = None)
disp = ConfusionMatrixDisplay(confusion_matrix = C, display_labels = ['0','1'])
disp.plot(values_format = 'd', cmap = cm.Oranges)
plt.show()
# Compute micro-average ROC curve and ROC area
import matplotlib
font = {'family' : 'sans-serif',
'weight' : 'normal',
'size' : 12}
matplotlib.rc('font', **font)
fpr, tpr, _ = roc_curve(yts_class, pred_class)
roc_auc = auc(fpr, tpr)
plt.figure(figsize = (10,7))
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (Area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([-0.02, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Empatica Data
###Code
# Load the empatica data, note there is no ground truth or response variables here, we are just trying to see qualitatively how the model does in predicting on our own data
empaticadata = pd.read_csv('groupdata.csv')
acts = empaticadata[['activity']]
empaticadata = empaticadata[['HR', 'accx', 'accy', 'accz', 'GSR', 'ST']]
# Convert to numpy array
empaticadata = empaticadata.to_numpy()
# Predict with the model
pred = mdl.predict(empaticadata)
plt.plot(pred)
plt.xlabel('Instance'); plt.ylabel('Normalized MET'); plt.title('Normalized MET Prediction for Empatica Data')
plt.show()
###Output
/Users/danemerson/opt/anaconda3/lib/python3.8/site-packages/xgboost/data.py:112: UserWarning: Use subset (sliced data) of np.ndarray is not recommended because it will generate extra copies and increase memory consumption
warnings.warn(
|
notebooks/00_prepare_transaction_data.ipynb | ###Markdown
Preparing transaction data for the customers in the scope > API details.
###Code
#hide
from nbdev.showdoc import *
#export
import os
import pandas as pd
from sample_project import config
from sample_project.helper import write_to_csv, read_from_csv
from fastcore.utils import store_attr
import numpy as np
#hide
import warnings
warnings.filterwarnings("ignore")
#export
class Transactional_Data:
'''
This class is built to create the main dataset used in this project. Following the steps below, the transaction dataset for
clients in the scope is created:
1. Merging three different datasets: "transaction","customer info" and "disp info" which shows customer and account matches
2. Applying three different filters below to cover only customers in the scope:
- Consider only transactions whose date in between <start date> and <end date>
- Consider only customers who have loans with more than <loan_amnt_thrsh> euros
- Consider only customers from districts where there are more than <district_cnt_thrsh> customers
Args:
trnx_dataset (Pandas DataFrame): The csv file path which has transaction dataset with at least these fields: "account_id","date"
disp_dataset (Pandas DataFrame): The csv file path which disp dataset with at least these fields: "client_id","account_id"
client_dataset (Pandas DataFrame): The csv file path which customer info dataset with at least these fields: "client_id","district_id"
loan_dataset (Pandas DataFrame): The csv file path which loan dataset with at least these fields: "client_id","amount"
loan_amnt_thrsh (integer): Loan amount threshold to be used to apply filter 2
district_cnt_thrsh (integer): District count threshold to be used to apply filter 3
start_date (integer): Start date threshold for transaction dataset to apply filter 1
end_date (integer): End date threshold for transaction dataset to apply filter 1
to_csv (boolean): If the returned dataframe is desired to be written into csv file
Return:
main_data (pandas DataFrame): the transaction dataset for clients in the scope
'''
def __init__(self,trnx_dataset=config.CSV_TRANSACTION,
disp_dataset= config.CSV_DISP_INFO,
client_dataset=config.CSV_CUST_INFO,
loan_dataset=config.CSV_LOAN,
loan_amnt_thrsh=1000,
district_cnt_thrsh = 110,
start_date=900000,
end_date=970000):
store_attr()
def create_data(self, apply_filters = True, to_csv=True):
df_trnx = read_from_csv(self.trnx_dataset)
self.df_disp = read_from_csv(self.disp_dataset)
self.df_client = read_from_csv(self.client_dataset)
self.df_loan = read_from_csv(self.loan_dataset)
merged_data = (df_trnx.drop(["k_symbol","bank","account"],axis=1)
.merge(self.df_disp[["client_id","account_id"]], on="account_id",how="left")
.merge(self.df_client[["client_id","district_id"]],on="client_id",how="left")
)
if apply_filters:
merged_data = self.applying_date_filter(merged_data)
merged_data = self.applying_loan_amount_filter(merged_data)
merged_data = self.applying_district_filter(merged_data)
if to_csv:
write_to_csv(df= merged_data, path = config.CSV_CUSTOMIZED_TRNX )
return merged_data
def applying_date_filter(self,df):
return df[(df["date"] >= self.start_date)&(df["date"] <= self.end_date)]
def applying_loan_amount_filter(self,df):
df = df.merge(
(self.df_loan
.merge(self.df_disp[["client_id","account_id"]],on="account_id",how="left")
.groupby("client_id",as_index=False).amount.sum().rename(columns={'amount':'Total_Loan_Amount'})
), on ="client_id", how="left"
)
df["Total_Loan_Amount"] = df["Total_Loan_Amount"].fillna(0)
return df[df["Total_Loan_Amount"] > self.loan_amnt_thrsh]
def applying_district_filter(self,df):
district_count = self.df_client.groupby("district_id",as_index=False).client_id.count().rename(columns={'client_id':'Num_Cust_in_District'})
district_list = district_count[district_count["Num_Cust_in_District"]>self.district_cnt_thrsh]["district_id"].tolist()
return df[df["district_id"].isin(district_list)]
###Output
_____no_output_____
###Markdown
Create data automatically
###Code
#hide
tranx= Transactional_Data(loan_amnt_thrsh=0, district_cnt_thrsh = 0)
tranx_data = tranx.create_data(apply_filters=True,to_csv=True)
#hide
tranx_data
###Output
_____no_output_____
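###Markdown
For the scope described in the class docstring (an illustrative run, not part of the original notebook; it simply reuses the default thresholds defined above), the filters can also be applied with non-zero thresholds:
###Code
#hide
# Hypothetical run with the documented defaults: total loans above 1000 and districts with more than 110 customers
tranx_scoped = Transactional_Data(loan_amnt_thrsh=1000, district_cnt_thrsh=110)
scoped_data = tranx_scoped.create_data(apply_filters=True, to_csv=False)
scoped_data.head()
###Output
_____no_output_____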
###Markdown
Check the steps
###Code
#hide
tranx= Transactional_Data(loan_amnt_thrsh=0, district_cnt_thrsh = 0)
merged_data = tranx.create_data(apply_filters=False,to_csv=False)
merged_data
###Output
_____no_output_____
###Markdown
Date filter
###Code
#hide
date_filtered = tranx.applying_date_filter(merged_data)
date_filtered
#hide
print("number_of_removed_rows:", len(merged_data) - len(date_filtered))
###Output
number_of_removed_rows: 723577
###Markdown
Loan Amount Filter
###Code
#hide
loan_amount_filtered = tranx.applying_loan_amount_filter(merged_data)
loan_amount_filtered
#hide
print("number_of_removed_rows:", len(merged_data) - len(loan_amount_filtered))
###Output
number_of_removed_rows: 1028998
###Markdown
District Filter
###Code
#hide
district_filtered = tranx.applying_district_filter(merged_data)
district_filtered
#hide
print("number_of_removed_rows:", len(merged_data) - len(district_filtered))
###Output
number_of_removed_rows: 0
|
gwas_visualization/gwas_results_R.ipynb | ###Markdown
Explore and annotate GWAS results
This notebook is delivered "As-Is". Notwithstanding anything to the contrary, DNAnexus will have no warranty, support, liability or other obligations with respect to Materials provided hereunder.
[MIT License](https://github.com/dnanexus/UKB_RAP/blob/main/LICENSE) applies to this notebook.
Merging our multiple regenie files
We have provided a short shell script (`process_regenie_results.sh`) that will merge regenie results from multiple chromosomes into a single file. Depending on your naming conventions, you may have to adjust the wildcard expression used for your files.
We will proceed assuming that you have this merged file.
Working with flat files on DNAnexus in R
To work with files from the project on DNAnexus in R, they should either be read into a data.frame using the pipe("dx cat <file name>") functionality supported within R, or downloaded to the local instance with `dx download` via the terminal or additional notebooks.
Let's download the regenie results file to JupyterLab
###Code
system("dx download -f gwas_results/multiple_assoc_edit_tab.all.regenie")
# View first few rows:
system('head -3 multiple_assoc_edit_tab.all.regenie', intern = T)
# Lets remove "#" from the first row to read in the header row correctly in R
system('sed -i -e "1 s/\\#//" multiple_assoc_edit_tab.all.regenie', intern = T)
###Output
_____no_output_____
###Markdown
Install Needed Packages
The following packages are required for this JupyterLab notebook. They are not installed by default; note that you will need to decide whether the licenses are appropriate for your application.
Installing these packages from an R code cell can sometimes produce errors. We recommend opening a terminal from the JupyterLab launcher, launching R on the command line, and then pasting in the code cell. Another option is to specify a version when installing the libraries.
###Code
install.packages("rlang", version = '1.0.1')
install.packages("qqman")
install.packages("tidyr")
install.packages("dplyr")
install.packages("ggplot2")
install.packages("manhattanly")
###Output
_____no_output_____
###Markdown
Loading the Required Packages
We'll do a little bit of data wrangling using `{tidyr}` and `{dplyr}`. Make sure that you've loaded the correct snapshot for this.
`{manhattanly}` will let us produce an interactive plot using `{plotly}`. The nice thing about this package is that it will produce an interactive plot that can be shared in a Jupyter notebook.
###Code
# load packages
library(rlang)
library(qqman, quietly = TRUE)
library(repr, quietly = TRUE)
library(tidyr, quietly = TRUE)
library(dplyr, quietly = TRUE)
library(ggplot2)
library(manhattanly)
###Output
_____no_output_____
###Markdown
Reading in the GWAS Result File from Jupyter Storage
We'll take the GWAS result file that we downloaded and read it in using the `read.table()` function.
###Code
gwas = read.table("multiple_assoc_edit_tab.all.regenie", header = T, as.is = T, sep = '\t')
# Look at the head of the gwas dataframe
head(gwas)
###Output
_____no_output_____
###Markdown
Adding a `P` column by inverting the negative base-10 logarithm (i.e. P = 10^(-LOG10P)).
###Code
gwas <-
gwas %>% mutate(P = (10^(-LOG10P)))
head(gwas)
###Output
_____no_output_____
###Markdown
Regenie output may contain multiple rows for each variant, one for every predictor in the model, as indicated by the 'TEST' column. Let's filter the results to look at the additive effect per variant.
###Code
# Subset dataframe
gwas_additive <-
gwas %>%
filter(TEST == "ADD") %>%
tidyr::drop_na(LOG10P)
# Dimensions of the dataframe
dim(gwas_additive)
head(gwas_additive)
###Output
_____no_output_____
###Markdown
What is the lowest P-value in our set of variants?
###Code
# Lowest P-value
min(gwas_additive$P)
###Output
_____no_output_____
###Markdown
Generating a Q-Q plot
We can generate a Q-Q plot to check our p-value distribution.
###Code
# Generate QQ plot with the GWAS results
qq(gwas_additive$P, main = "Q-Q plot of case-control GWAS p-values")
###Output
_____no_output_____
###Markdown
Plotting a Manhattan Plot
We can use the `manhattan()` function from the `{qqman}` package to generate a Manhattan plot.
Let's first define a couple of color palettes for distinguishing the different chromosomes in our Manhattan plot.
###Code
# Adjust plot size
options(repr.plot.width=12, repr.plot.height=8)
# Select Manhattan plot color palette
# w = warmer tones
# n = neutral
# c = cooler tones
# Reds
reds.w <- c("#FFAD7E", "#E9874F", "#D96726", "#AE4A12", "#873100")
reds.n <- c("#FF817E", "#E9534F", "#D92B26", "#AE1612", "#870300")
reds.c <- c("#E2709A", "#CB4577", "#BD215B", "#970F42", "#75002B")
# Make the Manhattan plot on the gwas results dataframe
#Use reds.c as our color palette
manhattan(gwas_additive, chr="CHROM",
bp="GENPOS", snp="ID", p="P", ylim=c(0,10), suggestiveline=FALSE,
col=reds.c,main="Manhattan Plot for case control GWAS")
###Output
_____no_output_____
###Markdown
We can zoom into chromosome 1 by using a `filter()` operation:
###Code
gwas_additive_12 <-
gwas_additive %>%
filter(CHROM %in% c("1"))
manhattan(gwas_additive_12, chr="CHROM",
bp="GENPOS", snp="ID", p="P", ylim=c(0,10), suggestiveline=FALSE,
col=reds.w,main="Manhattan Plot for case control GWAS")
###Output
_____no_output_____
###Markdown
Interactive Manhattan Plot with the `{manhattanly}` package
The `{manhattanly}` package uses `plotly` under the hood to make an interactive Manhattan plot.
We can control the tooltip by passing column names to the `annotation1` and `annotation2` arguments.
###Code
# By default, the `manhattanly` function assumes columns are named CHR, BP and P.
# These can be specified by the user if they are different, like below:
library(manhattanly)
subset_gwas <- gwas_additive %>%
filter(CHROM %in% c(1:2))
manhattanly(subset_gwas, chr = "CHROM", bp = "GENPOS",
snp = "ID", annotation1 = "CHISQ", suggestiveline = FALSE,
annotation2 = "BETA", p = "P")
qqly(
subset(gwas, CHROM %in% 1:2), chr = "CHROM", bp = "GENPOS", snp = "ID",
annotation1 = "CHISQ", annotation2 = "BETA"
)
###Output
_____no_output_____
###Markdown
Filtering our Candidate Variant List
###Code
# Subset results showing suggestive association
gwas_top <- gwas %>%
filter(P < 0.001) %>%
arrange(P)
dim(gwas_top)
head(gwas_top)
###Output
_____no_output_____
###Markdown
Annotating GWAS results with ClinVar
Downloading ClinVar Annotation Files
We will use a tab-delimited report based on each variant at a location on the genome for which data have been submitted to ClinVar.
1. `wget` the `variant_summary.txt.gz` file and unzip it
2. Load the variant_summary table
3. Subset variant_summary to only include SNPs
4. Merge with `gwas_top` using Chromosome and Position
5. Select relevant columns in the merged table
###Code
system("wget https://ftp.ncbi.nlm.nih.gov/pub/clinvar/tab_delimited/variant_summary.txt.gz")
system("gunzip variant_summary.txt.gz")
clinvar <- read.delim("variant_summary.txt", sep="\t")
colnames(clinvar)
###Output
_____no_output_____
###Markdown
First we need to filter `clinvar` to only contain SNPs. We do that by filtering on `Type == "single nucleotide variant"`.
###Code
clinvar <- clinvar %>%
filter(Type == "single nucleotide variant") %>%
mutate(Chromosome = as.character(Chromosome))
###Output
_____no_output_____
###Markdown
Here we merge our `gwas_top` table with `clinvar` using `dplyr::inner_join()` on both the `CHROM` and `GENPOS` columns in our data.
###Code
gwas_top_annotated <- gwas_top %>%
mutate(CHROM = as.character(CHROM)) %>%
inner_join(y=clinvar, by=c("CHROM"="Chromosome", "GENPOS"="Start")) %>%
mutate(CHROM = as.numeric(CHROM))
colnames(gwas_top_annotated)
###Output
_____no_output_____
###Markdown
Now that we have our tables merged, we can pass the `ClinicalSignificance` column to the `annotation1` argument and `BETA` to the `annotation2` argument in `manhattanly()`, to further understand our candidates.
###Code
manhattanly(gwas_top_annotated, chr = "CHROM", bp = "GENPOS",
snp = "ID", suggestiveline = FALSE, annotation1 = "ClinicalSignificance",
annotation2 = "BETA")
###Output
_____no_output_____
###Markdown
Saving our annotated results
Finally, we'll use the `write.csv()` function to write a csv file and then use `dx upload` to get this result back onto the platform.
###Code
write.csv(gwas_top_annotated, "clinvar_annotated_candidates.csv")
system("dx upload clinvar_annotated_candidates.csv --path gwas_results/")
###Output
_____no_output_____ |
ML/tf/eager.ipynb | ###Markdown
TF Eager
[ref](https://blog.csdn.net/wizardforcel/article/details/81211571)
###Code
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
if tf.version.VERSION < '2.0.0':
import tensorflow.contrib.eager as tfe
tfe.enable_eager_execution()
# eager is enabled default in tf 2.0
#import tensorflow.compat.v1 as tf1
#tf1.disable_eager_execution()
###Output
_____no_output_____
###Markdown
create a model using keras API
###Code
from tensorflow.keras import Model
from tensorflow.layers import Dense
class LR(Model):
def __init__(self):
super().__init__()
self.hidden = Dense(10, activation=tf.nn.relu)
self.output_layer = Dense(2, activation=None)
def call(self, x):
x = self.hidden(x)
x = self.output_layer(x)
return x
def loss(self, inputs, target):
logits = self.call(inputs)
loss = tf.losses.sparse_softmax_cross_entropy(labels=target, logits=logits)
return loss
###Output
_____no_output_____
###Markdown
dummy data
###Code
from sklearn.datasets import make_moons
x, y = make_moons(n_samples=100, noise=0.1, random_state=2018)
import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter(x[:,0], x[:,1], c=y, cmap=plt.cm.autumn)
plt.xlabel('First feature')
plt.ylabel('Second feature')
plt.title('Toy classification problem')
plt.show()
###Output
_____no_output_____
###Markdown
train
###Code
num_epochs = 10
inputs = tf.constant(x)
target = tf.constant(y)
model = LR()
optimizer = tf.train.GradientDescentOptimizer(5e-1)
for epoch in range(num_epochs):
with tfe.GradientTape() as tape:
loss = model.loss(inputs, target)
grads = tape.gradient(loss, model.variables)
optimizer.apply_gradients(zip(grads, model.variables))
print('Epoch {} Loss {:.4f}'.format(epoch, loss.numpy()))
###Output
Epoch 0 Loss 0.6686
Epoch 1 Loss 0.5993
Epoch 2 Loss 0.5485
Epoch 3 Loss 0.5085
Epoch 4 Loss 0.4761
Epoch 5 Loss 0.4500
Epoch 6 Loss 0.4297
Epoch 7 Loss 0.4136
Epoch 8 Loss 0.4001
Epoch 9 Loss 0.3886
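###Markdown
A quick way to sanity-check the trained model (an illustrative addition; it reuses the `model`, `inputs`, and `y` objects defined above, and works because eager execution lets us call `.numpy()` directly) is to compute the training accuracy from the logits:
###Code
# Illustrative check: training accuracy of the fitted classifier
logits = model(inputs)
preds = tf.argmax(logits, axis=1).numpy()
print('Training accuracy: {:.2f}'.format((preds == y).mean()))
###Output
_____no_output_____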
|
Applied Text Mining/Week2 - Basic Natural Language Processing/Assignment+2.ipynb | ###Markdown
---
_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._
---
Assignment 2 - Introduction to NLTK
In part 1 of this assignment you will use nltk to explore the Herman Melville novel Moby Dick. Then in part 2 you will create a spelling recommender function that uses nltk to find words similar to the misspelling.
Part 1 - Analyzing Moby Dick
###Code
import nltk
import pandas as pd
import numpy as np
# If you would like to work with the raw text you can use 'moby_raw'
with open('moby.txt', 'r') as f:
moby_raw = f.read()
# If you would like to work with the novel in nltk.Text format you can use 'text1'
moby_tokens = nltk.word_tokenize(moby_raw)
text1 = nltk.Text(moby_tokens)
###Output
_____no_output_____
###Markdown
Example 1
How many tokens (words and punctuation symbols) are in text1?
*This function should return an integer.*
###Code
def example_one():
return len(nltk.word_tokenize(moby_raw)) # or alternatively len(text1)
example_one()
###Output
_____no_output_____
###Markdown
Example 2
How many unique tokens (unique words and punctuation) does text1 have?
*This function should return an integer.*
###Code
def example_two():
return len(set(nltk.word_tokenize(moby_raw))) # or alternatively len(set(text1))
example_two()
###Output
_____no_output_____
###Markdown
Example 3
After lemmatizing the verbs, how many unique tokens does text1 have?
*This function should return an integer.*
###Code
from nltk.stem import WordNetLemmatizer
def example_three():
lemmatizer = WordNetLemmatizer()
lemmatized = [lemmatizer.lemmatize(w,'v') for w in text1]
return len(set(lemmatized))
example_three()
###Output
_____no_output_____
###Markdown
Question 1
What is the lexical diversity of the given text input? (i.e. ratio of unique tokens to the total number of tokens)
*This function should return a float.*
###Code
def answer_one():
return len(set(moby_tokens)) / len(moby_tokens)
answer_one()
###Output
_____no_output_____
###Markdown
Question 2
What percentage of tokens is 'whale' or 'Whale'?
*This function should return a float.*
###Code
def answer_two():
return 100 * (moby_tokens.count('whale') + moby_tokens.count('Whale')) / len(moby_tokens)
answer_two()
###Output
_____no_output_____
###Markdown
Question 3
What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency?
*This function should return a list of 20 tuples where each tuple is of the form `(token, frequency)`. The list should be sorted in descending order of frequency.*
###Code
def answer_three():
from nltk.probability import FreqDist
import operator
dict_freq = FreqDist(moby_tokens)
sorted_list = sorted(dict_freq.items(), key=operator.itemgetter(1), reverse=True)
return sorted_list[:20]
answer_three()
###Output
_____no_output_____
###Markdown
Question 4
What tokens have a length of greater than 5 and frequency of more than 150?
*This function should return a sorted list of the tokens that match the above constraints. To sort your list, use `sorted()`*
###Code
def answer_four():
from nltk.probability import FreqDist
import operator
dict_freq = FreqDist(moby_tokens)
result = [k for k, v in dict_freq.items() if len(k) > 5 and v > 150]
result.sort()
return result
answer_four()
###Output
_____no_output_____
###Markdown
Question 5
Find the longest word in text1 and that word's length.
*This function should return a tuple `(longest_word, length)`.*
###Code
def answer_five():
result = sorted(map(lambda x: (x, len(x)), set(moby_tokens)), key = (lambda x: x[1]), reverse=True)
return result[0]
answer_five()
###Output
_____no_output_____
###Markdown
Question 6
What unique words have a frequency of more than 2000? What is their frequency?
"Hint: you may want to use `isalpha()` to check if the token is a word and not punctuation."
*This function should return a list of tuples of the form `(frequency, word)` sorted in descending order of frequency.*
###Code
def answer_six():
from nltk.probability import FreqDist
import operator
dict_freq = FreqDist(moby_tokens)
sorted_list = sorted(dict_freq.items(), key=operator.itemgetter(1), reverse=True)
freq_2000 = [(v, k) for k, v in sorted_list if k.isalpha() and v > 2000]
return freq_2000
answer_six()
###Output
_____no_output_____
###Markdown
Question 7
What is the average number of tokens per sentence?
*This function should return a float.*
###Code
def answer_seven():
import statistics
sentences = nltk.sent_tokenize(moby_raw)
result = statistics.mean(map(lambda x: len(nltk.word_tokenize(x)), sentences))
return result
answer_seven()
###Output
_____no_output_____
###Markdown
Question 8
What are the 5 most frequent parts of speech in this text? What is their frequency?
*This function should return a list of tuples of the form `(part_of_speech, frequency)` sorted in descending order of frequency.*
###Code
def answer_eight():
from nltk.probability import FreqDist
pos_tagscount = FreqDist([x[1] for x in nltk.pos_tag(moby_tokens)])
import operator
sorted_list = sorted(pos_tagscount.items(), key=operator.itemgetter(1), reverse=True)
return sorted_list[:5]
answer_eight()
###Output
_____no_output_____
###Markdown
Part 2 - Spelling Recommender
For this part of the assignment you will create three different spelling recommenders, each of which takes a list of misspelled words and recommends a correctly spelled word for every word in the list.
For every misspelled word, the recommender should find the word in `correct_spellings` that has the shortest distance*, and starts with the same letter as the misspelled word, and return that word as a recommendation.
*Each of the three different recommenders will use a different distance measure (outlined below).
Each of the recommenders should provide recommendations for the three default words provided: `['cormulent', 'incendenece', 'validrate']`.
###Code
from nltk.corpus import words
correct_spellings = words.words()
###Output
_____no_output_____
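###Markdown
To see what the distance measure looks like on a single pair of words before writing the recommenders (an illustrative aside; the comparison word is an arbitrary choice), the Jaccard distance is computed between sets of character n-grams:
###Code
# Illustrative example of the distance measure used in the answers below
set_a = set(nltk.ngrams('cormulent', n=3))
set_b = set(nltk.ngrams('corpulent', n=3))
print(nltk.jaccard_distance(set_a, set_b))
###Output
_____no_output_____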
###Markdown
Question 9
For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
**[Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index) on the trigrams of the two words.**
*This function should return a list of length three:`['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
###Code
def answer_nine(entries=['cormulent', 'incendenece', 'validrate']):
result = []
for misspell in entries:
candidates = [w for w in correct_spellings if w[0] == misspell[0]]
correct_spell = min(candidates, key=(lambda candidate:
nltk.jaccard_distance(set(nltk.ngrams(candidate, 3)), set(nltk.ngrams(misspell, 3)))))
result.append(correct_spell)
return result
answer_nine()
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel/__main__.py:7: DeprecationWarning: generator 'ngrams' raised StopIteration
###Markdown
Question 10
For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
**[Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index) on the 4-grams of the two words.**
*This function should return a list of length three:`['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
###Code
def answer_ten(entries=['cormulent', 'incendenece', 'validrate']):
result = []
for misspell in entries:
candidates = [w for w in correct_spellings if w[0] == misspell[0]]
correct_spell = min(candidates, key=(lambda candidate:
nltk.jaccard_distance(set(nltk.ngrams(candidate, 4)), set(nltk.ngrams(misspell, 4)))))
result.append(correct_spell)
return result
answer_ten()
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel/__main__.py:7: DeprecationWarning: generator 'ngrams' raised StopIteration
###Markdown
Question 11
For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
**[Edit distance on the two words with transpositions.](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance)**
*This function should return a list of length three:`['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
###Code
def answer_eleven(entries=['cormulent', 'incendenece', 'validrate']):
result = []
for misspell in entries:
candidates = [w for w in correct_spellings if w[0] == misspell[0]]
correct_spell = min(candidates, key=(lambda candidate: nltk.edit_distance(candidate, misspell)))
result.append(correct_spell)
return result
answer_eleven()
###Output
_____no_output_____ |
00_Miscellaneous/tf_transform/tft-02 - Babyweight Estimation with Transformed Data.ipynb | ###Markdown
Babyweight Estimation with Transformed Data Set global flags
###Code
PROJECT = 'ksalama-gcp-playground' # change to your project_Id
BUCKET = 'ksalama-gcs-cloudml' # change to your bucket name
REGION = 'europe-west1' # change to your region
ROOT_DIR = 'babyweight_tft' # directory where the output is stored locally or on GCS
RUN_LOCAL = True
import os
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['ROOT_DIR'] = ROOT_DIR
os.environ['RUN_LOCAL'] = 'true' if RUN_LOCAL else 'false'
###Output
_____no_output_____
###Markdown
Import required packages and modules
###Code
import os
import tensorflow as tf
from tensorflow import data
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import metadata_io
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
!pip list | grep 'tensorflow'
!pip list | grep 'beam'
!pip list | grep 'cloud-dataflow'
OUTPUT_DIR = ROOT_DIR if RUN_LOCAL==True else "gs://{}/{}".format(BUCKET,ROOT_DIR)
TRANSFORM_ARTEFACTS_DIR = os.path.join(OUTPUT_DIR,'transform')
TRANSFORMED_DATA_DIR = os.path.join(OUTPUT_DIR,'transformed')
TEMP_DIR = os.path.join(OUTPUT_DIR, 'tmp')
MODELS_DIR = os.path.join(OUTPUT_DIR,'models')
###Output
_____no_output_____
###Markdown
Transform Metadata
###Code
transformed_metadata = metadata_io.read_metadata(
os.path.join(TRANSFORM_ARTEFACTS_DIR,"transformed_metadata"))
TARGET_FEATURE_NAME = 'weight_pounds'
print transformed_metadata.schema
###Output
_____no_output_____
###Markdown
Input Function
###Code
def tfrecords_input_fn(files_name_pattern, transformed_metadata,
mode=tf.estimator.ModeKeys.EVAL,
num_epochs=1,
batch_size=500):
dataset = tf.contrib.data.make_batched_features_dataset(
file_pattern=files_name_pattern,
batch_size=batch_size,
features=transformed_metadata.schema.as_feature_spec(),
reader=tf.data.TFRecordDataset,
num_epochs=num_epochs,
shuffle=True if mode == tf.estimator.ModeKeys.TRAIN else False,
shuffle_buffer_size=1+(batch_size*2),
prefetch_buffer_size=1
)
iterator = dataset.make_one_shot_iterator()
features = iterator.get_next()
target = features.pop(TARGET_FEATURE_NAME)
return features, target
###Output
_____no_output_____
###Markdown
Feature columns
###Code
def create_wide_and_deep_feature_columns(transformed_metadata, hparams):
deep_feature_columns = []
wide_feature_columns = []
column_schemas = transformed_metadata.schema.column_schemas
for feature_name in column_schemas:
if feature_name == TARGET_FEATURE_NAME:
continue
column_schema = column_schemas[feature_name]
# creating numerical features
if isinstance(column_schema._domain, dataset_schema.FloatDomain):
deep_feature_columns.append(tf.feature_column.numeric_column(feature_name))
# creating categorical features with identity
elif isinstance(column_schema._domain, dataset_schema.IntDomain):
if column_schema._domain._is_categorical==True:
wide_feature_columns.append(
tf.feature_column.categorical_column_with_identity(
feature_name,
num_buckets=column_schema._domain._max_value+1)
)
else:
deep_feature_columns.append(tf.feature_column.numeric_column(feature_name))
if hparams.extend_feature_columns==True:
mother_race_X_mother_age_bucketized = tf.feature_column.crossed_column(
['mother_age_bucketized', 'mother_race_index'], 55)
wide_feature_columns.append(mother_race_X_mother_age_bucketized)
mother_race_X_mother_age_bucketized_embedded = tf.feature_column.embedding_column(
mother_race_X_mother_age_bucketized, hparams.embed_dimension)
deep_feature_columns.append(mother_race_X_mother_age_bucketized_embedded)
print "Wide columns:"
print wide_feature_columns
print ""
print "Deep columns:"
print deep_feature_columns
print ""
return wide_feature_columns, deep_feature_columns
###Output
_____no_output_____
###Markdown
Estimator
###Code
def create_estimator(run_config, hparams):
wide_feature_columns, deep_feature_columns = create_wide_and_deep_feature_columns(transformed_metadata,
hparams)
estimator = tf.estimator.DNNLinearCombinedRegressor(
linear_feature_columns = wide_feature_columns,
dnn_feature_columns = deep_feature_columns,
dnn_hidden_units=hparams.hidden_units,
config = run_config
)
return estimator
###Output
_____no_output_____
###Markdown
Experiment
###Code
hparams = tf.contrib.training.HParams(
num_epochs=10,
batch_size=500,
hidden_units=[32, 16],
max_steps=100,
embed_dimension=5,
extend_feature_columns=False,
evaluate_after_sec=10
)
model_dir = os.path.join(MODELS_DIR,"dnn_estimator")
run_config = tf.estimator.RunConfig(
tf_random_seed=19830610,
model_dir=model_dir
)
train_data_files = os.path.join(TRANSFORMED_DATA_DIR, "train-*.tfrecords")
eval_data_files = os.path.join(TRANSFORMED_DATA_DIR, "eval-*.tfrecords")
# TrainSpec
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: tfrecords_input_fn(train_data_files,transformed_metadata,
mode=tf.estimator.ModeKeys.TRAIN,
num_epochs= hparams.num_epochs,
batch_size = hparams.batch_size
),
max_steps=hparams.max_steps,
)
# EvalSpec
eval_spec = tf.estimator.EvalSpec(
input_fn =lambda: tfrecords_input_fn(eval_data_files,transformed_metadata),
steps = None,
    throttle_secs = hparams.evaluate_after_sec # evaluate after every 10 seconds of training
)
from datetime import datetime
if tf.gfile.Exists(model_dir):
tf.gfile.DeleteRecursively(model_dir)
estimator = create_estimator(run_config, hparams)
tf.logging.set_verbosity(tf.logging.INFO)
time_start = datetime.utcnow()
print("")
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
tf.estimator.train_and_evaluate(
estimator,
train_spec,
eval_spec
)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
###Output
_____no_output_____
###Markdown
Raw data metadata
###Code
CATEGORICAL_FEATURE_NAMES = ['is_male', 'mother_race']
NUMERIC_FEATURE_NAMES = ['mother_age', 'plurality', 'gestation_weeks']
TARGET_FEATURE_NAME = 'weight_pounds'
KEY_COLUMN = 'key'
def create_raw_metadata():
raw_data_schema = {}
    # key feature schema
raw_data_schema[KEY_COLUMN]= dataset_schema.ColumnSchema(
tf.float32, [], dataset_schema.FixedColumnRepresentation())
    # target feature schema
raw_data_schema[TARGET_FEATURE_NAME]= dataset_schema.ColumnSchema(
tf.float32, [], dataset_schema.FixedColumnRepresentation())
    # categorical features schema
raw_data_schema.update({ column_name : dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation())
for column_name in CATEGORICAL_FEATURE_NAMES})
    # numerical features schema
raw_data_schema.update({ column_name : dataset_schema.ColumnSchema(
tf.float32, [], dataset_schema.FixedColumnRepresentation())
for column_name in NUMERIC_FEATURE_NAMES})
# create dataset_metadata given raw_schema
raw_metadata = dataset_metadata.DatasetMetadata(
dataset_schema.Schema(raw_data_schema))
return raw_metadata
import pprint
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(create_raw_metadata().schema.as_feature_spec())
###Output
_____no_output_____
###Markdown
Export Estimator to SavedModel
###Code
def serving_input_receiver_fn():
from tensorflow_transform.saved import saved_transform_io
# get the feature_spec of raw data
raw_metadata = create_raw_metadata()
# create receiver placeholders to the raw input features
raw_input_features = raw_metadata.schema.as_batched_placeholders()
raw_input_features.pop(TARGET_FEATURE_NAME)
raw_input_features.pop(KEY_COLUMN)
    # apply the transform_fn to the raw input features
_, transformed_features = (
saved_transform_io.partially_apply_saved_transform(
os.path.join(TRANSFORM_ARTEFACTS_DIR,transform_fn_io.TRANSFORM_FN_DIR),
raw_input_features)
)
return tf.estimator.export.ServingInputReceiver(
transformed_features, raw_input_features)
export_dir = os.path.join(model_dir, 'export')
if tf.gfile.Exists(export_dir):
tf.gfile.DeleteRecursively(export_dir)
estimator.export_savedmodel(
export_dir_base=export_dir,
serving_input_receiver_fn=serving_input_receiver_fn
)
os.environ['export_dir'] = export_dir
###Output
_____no_output_____
###Markdown
Inspect the Exported Model
###Code
%%bash
if [ ${RUN_LOCAL} ]
then
saved_model_dir=$(gsutil ls ${export_dir} | tail -n 1)
echo $saved_model_dir
else
saved_model_dir=${export_dir}/$(ls ${export_dir} | tail -n 1)
echo ${saved_model_dir}
fi
saved_model_cli show --dir=${saved_model_dir} --all
###Output
_____no_output_____
###Markdown
Use Exported Model for Prediction
###Code
saved_model_dir=os.path.join(export_dir, tf.gfile.ListDirectory(export_dir)[0])
print saved_model_dir
def estimate_local(instance):
predictor_fn = tf.contrib.predictor.from_saved_model(
export_dir=saved_model_dir,
signature_def_key="predict"
)
instance = dict((k, [v]) for k, v in instance.items())
value = predictor_fn(instance)['predictions'][0][0]
return value
instance = {
'is_male': 'True',
'mother_age': 26.0,
'mother_race': 'Asian Indian',
'plurality': 1.0,
'gestation_weeks': 39
}
prediction = estimate_local(instance)
print(prediction)
###Output
_____no_output_____ |
Challenges/ASCIIHistogramChallenge.ipynb | ###Markdown
ASCII Histogram

The goal of this coding challenge is to create a histogram using only ASCII characters. The function ascii_histogram() should print each unique value from the tuple some_numbers in ascending order. Each line will contain a single number followed by a space and + signs representing the frequency of that number in the some_numbers tuple.

```
Example:
ascii_histogram([0, 3, 0, 3, 0, -1, 0, -11, 20, 20])
Output:
-11 +
 -1 +
  0 ++++
  3 ++
 20 ++
```

You will need a precise print format in order to pass the test, which looks at the printed output of the program. Also to pass the test:

- Each successive number should increase in order.
- Give your number 3 digits of space (see output above).
- The number should be followed by 1 blank space.
- (+) signs equivalent to the frequency of that number should follow the blank space.

The code included is just a guide. Don't change the some_numbers tuple, but feel free to do whatever you need to do to get the proper output as described above.

STRETCH GOAL: You blew through this and need something else TODO? Let's make this a FizzBuzZ-ogram! (Copy your code to another python development environment. Leave your working code alone and submit!)

1. For numbers that are multiples of 3 switch the '+' to an 'f'
2. For numbers that are multiples of 5 switch the '+' to a 'b'
3. For numbers that are multiples of 3 and 5 switch the '+' to a 'z'
###Code
# hand histogram function.
def hand_histogram(seq) -> dict:
""" Tally elements form `seq`. """
hstgrm = {}
for i in seq:
hstgrm[i] = hstgrm.get(i, 0) + 1
return hstgrm
# ASCII histogram function here.
def ascii_histogram(seq) -> None:
"""A vertical frequency-table/histogram plot."""
counted = hand_histogram(seq)
for k in sorted(counted):
print('{0:3d} {1}'.format(k, '+' * counted[k]))
# set the data.
some_numbers = (-1, -15, 2, 1, 3, 16, -3, 13, 3, 7, 5, 16, -11,
2, 1, 15, 5, -1, -8, 4, -13, 7, 14, 9, 4, -17,
21, -5, 0, 5, -11, -21, -6, 2, -2, -3, 6, 6, 0,
19, -6, 5, 8, 2, -9, -9, 0, 0, 6, 2, 6, 22,
-5, -4, -4, -15, -26, 5, -1, 4, 1, -5, 20, -11, -22,
12, -5, 12, 16, 10, -9, 6, 0, 9, 5, 3, -14, -4,
1, 4, -4, 15, 16, -3, 10, -3, 22, -12, 9, -8, -3,
-9, -2, -26, 7, 18, -9, -1, 7, -2, -23, 12, 10, 1,
-4, -2, 0, 0, 3, 2, 1, 4, 9, 9, 10, 0, -8,
33, -21, -7, 9, 6, 10, 11, -12, -12, -9, -2, 11, -15,
19, 14, -6, -3, 6, 1, 6, 6, 11, 3, 6, 19, -9,
-11, -2, 3, -14, 9, 8, -13, -18, 4, 13, -17, 11, -15,
22, -8, 14, 11, -4, -4, -6, -22, 2, -1, -18, -1, -5,
-9, 4, 6, 14, 5, 2, 7, 13, 18, -6, -6, 0, -4,
2, -7, -12, -4, -3, -13, 5, 22, -13, -10, 2, 3, 2,
25, 8, 7, 5, -19, -9, -20, 11, -3, -6, -8, -8, -6,
9, 7, 12, -10, 5, -1, 13, -11, -11, 6, -12, 2, 8,
5, 17, -5, -7, -12, -14, -4, -24, -8, 4, 3, -1, 10,
-12, 26, 16, -22, -13, 12, 3, -6, -10, -12, -2, 4, -7,
-3, -13, 8, 6, -13, -5, 10, 2, -16, -7, 4, -26, 3,
-5, -1, 8, -9, 12, 1, 9, -9, -25, 2, -2, 14, 21,
-1, -12, -13, 9, 24, 24, -5, -18, -14, -1, 15, -16, -13,
11, 4, 24, -1, 11, -16, -1, -15, -9, 10, -6, -18, 6,
18, 1, -1, -4, -12, -5, 4, -3, 20, 1, 5, 4, -1,
19, 21, 14, 0, 2, -14, 8, 1, 8, -3, 11, -12, -4,
15, 1, 2, 2, 11, -2, -27, 0)
# show the results.
print(ascii_histogram(some_numbers))
###Output
-27 +
-26 +++
-25 +
-24 +
-23 +
-22 +++
-21 ++
-20 +
-19 +
-18 ++++
-17 ++
-16 +++
-15 +++++
-14 +++++
-13 +++++++++
-12 +++++++++++
-11 ++++++
-10 +++
-9 ++++++++++++
-8 +++++++
-7 +++++
-6 ++++++++++
-5 ++++++++++
-4 ++++++++++++
-3 +++++++++++
-2 +++++++++
-1 +++++++++++++++
0 +++++++++++
1 ++++++++++++
2 +++++++++++++++++
3 ++++++++++
4 +++++++++++++
5 ++++++++++++
6 ++++++++++++++
7 +++++++
8 ++++++++
9 ++++++++++
10 ++++++++
11 ++++++++++
12 ++++++
13 ++++
14 ++++++
15 ++++
16 +++++
17 +
18 +++
19 ++++
20 ++
21 +++
22 ++++
24 +++
25 +
26 +
33 +
None
|
demonstrations/opencv_note_01_intro.ipynb | ###Markdown
Loading Images
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
image = cv2.imread('./images/watch.jpg', cv2.IMREAD_GRAYSCALE)
image  # we get an array of intensity values, one per pixel
###Output
_____no_output_____
###Markdown
*The more colors, the more data, and that means the image is harder to process.*

Flags for ```cv2.imread()```:

```python
cv2.IMREAD_COLOR     (= 1)   Loads a color image. Any transparency of the image will be neglected. It is the default flag.
cv2.IMREAD_GRAYSCALE (= 0)   Loads the image in grayscale mode (used in the cell above).
cv2.IMREAD_UNCHANGED (= -1)  Loads the image as such, including the alpha channel.
```
###Code
plt.imshow(image, cmap='gray', interpolation='bicubic')  # display the loaded image with matplotlib
plt.plot([50, 100], [80, 100], 'c', linewidth=5)  # draw a single 'cyan' line on top of it
plt.show()
###Output
_____no_output_____
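###Markdown
A quick note worth keeping in mind (a minimal sketch added here, not part of the original demo): OpenCV loads color images in BGR channel order, while matplotlib expects RGB, so a color image read with `cv2.IMREAD_COLOR` should be converted before plotting.
###Code
# reuse the same watch.jpg path as above, this time loading in color
color_image = cv2.imread('./images/watch.jpg', cv2.IMREAD_COLOR)
rgb_image = cv2.cvtColor(color_image, cv2.COLOR_BGR2RGB)  # swap BGR -> RGB for matplotlib
plt.imshow(rgb_image)
plt.show()
###Output
_____no_output_____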
###Markdown
Loading Video Source
###Code
import cv2
import numpy as np
capture = cv2.VideoCapture(0);
while True:
returning, frame = capture.read()
cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # close the camera window when 'q' is pressed
break
capture.release()
cv2.destroyAllWindows()
###Output
_____no_output_____
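###Markdown
One more optional sketch (not in the original note): the hard-coded `(640, 480)` frame size used for the `VideoWriter` below can instead be queried from the capture device, which avoids a silent mismatch between the camera resolution and the output file.
###Code
capture = cv2.VideoCapture(0)
width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))    # frame width reported by the camera
height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))  # frame height reported by the camera
print(width, height)
capture.release()
###Output
_____no_output_____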
###Markdown
The same, but now also writing the video from the camera to a file
###Code
import cv2
import numpy as np
capture = cv2.VideoCapture(0)
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('camera_output.avi', fourcc, 20.0, (640, 480))
while True:
    returning, frame = capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out.write(frame)  # actually write the captured frame, otherwise camera_output.avi stays empty
    cv2.imshow('frame', frame)
    cv2.imshow('gray', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # stop recording when 'q' is pressed
        break
capture.release()
out.release()
cv2.destroyAllWindows()
###Output
_____no_output_____ |
6. Reinforcement Learning/2. Thompson Sampling/1_Thompson_Sampling.ipynb | ###Markdown
Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Ads_CTR_Optimisation.csv')
dataset.head()
###Output
_____no_output_____
###Markdown
Implementing Thompson Sampling
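###Markdown
For reference (a short note added here, using the same counters as the code below): each ad $i$ keeps a Beta posterior over its click-through rate. After observing $N_i^1$ rewards of 1 and $N_i^0$ rewards of 0, the algorithm samples$$\theta_i \sim \mathrm{Beta}\big(N_i^1 + 1,\; N_i^0 + 1\big)$$at every round and shows the ad with the largest sampled $\theta_i$, which is exactly what `random.betavariate(numbers_of_rewards_1[i] + 1, numbers_of_rewards_0[i] + 1)` implements.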
###Code
import random
N = 10000
d = 10
ads_selected = []
numbers_of_rewards_1 = [0] * d
numbers_of_rewards_0 = [0] * d
total_reward = 0
for n in range(0, N):
ad = 0
max_random = 0
for i in range(0, d):
random_beta = random.betavariate(numbers_of_rewards_1[i] + 1, numbers_of_rewards_0[i] + 1)
if random_beta > max_random:
max_random = random_beta
ad = i
ads_selected.append(ad)
reward = dataset.values[n, ad]
if reward == 1:
numbers_of_rewards_1[ad] = numbers_of_rewards_1[ad] + 1
else:
numbers_of_rewards_0[ad] = numbers_of_rewards_0[ad] + 1
total_reward = total_reward + reward
###Output
_____no_output_____
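###Markdown
As an optional sanity check (a small sketch built only from the variables defined above), the posterior mean click-through rate of each ad can be read off the two reward counters; the ad selected most often should be the one with the highest estimate:
###Code
# posterior mean of Beta(N1 + 1, N0 + 1) is (N1 + 1) / (N1 + N0 + 2)
posterior_means = [(numbers_of_rewards_1[i] + 1) / (numbers_of_rewards_1[i] + numbers_of_rewards_0[i] + 2)
                   for i in range(0, d)]
best_ad = int(np.argmax(posterior_means))
print('Posterior mean CTR per ad:', [round(p, 3) for p in posterior_means])
print('Ad with the highest posterior mean:', best_ad)
print('Total reward collected:', total_reward)
###Output
_____no_output_____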
###Markdown
Visualising the results - Histogram
###Code
plt.hist(ads_selected)
plt.title('Histogram of ads selections')
plt.xlabel('Ads')
plt.ylabel('Number of times each ad was selected')
plt.show()
###Output
_____no_output_____ |
chapter02_supervised-learning/linear-regression-gluon.ipynb | ###Markdown
Linear regression with ``gluon``

In the previous tutorial we implemented a whole neural network from scratch, using nothing but ``mx.ndarray`` and ``mxnet.autograd``. Here we show how to build the same model with a lot less work. Again, let's import some packages, this time adding ``mxnet.gluon`` to the list of dependencies.
###Code
from __future__ import print_function
import mxnet as mx
from mxnet import nd, autograd, gluon
###Output
_____no_output_____
###Markdown
Set the context

We'll also want to set a context to tell gluon where to do most of the computation.
###Code
data_ctx = mx.cpu()
model_ctx = mx.cpu()
###Output
_____no_output_____
###Markdown
Build the dataset

Again we'll look at the problem of linear regression and stick with the same synthetic data.
###Code
num_inputs = 2
num_outputs = 1
num_examples = 10000
def real_fn(X):
return 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2
X = nd.random_normal(shape=(num_examples, num_inputs))
noise = 0.01 * nd.random_normal(shape=(num_examples,))
y = real_fn(X) + noise
###Output
_____no_output_____
###Markdown
Load the data iterator

We'll stick with the ``DataLoader`` for handling our data batching.
###Code
batch_size = 4
train_data = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y),
batch_size=batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Define the model

When we implemented things from scratch, we had to individually allocate parameters and then compose them together as a model. While it's good to know how to do things from scratch, with `gluon`, we can just compose a network from predefined layers. For a linear model, the appropriate layer is called `Dense`. It's called a *dense* layer because every node in the input is connected to every node in the subsequent layer. That description seems excessive because we only have one (non-input) layer here, and that layer only contains one node! But in subsequent chapters we'll typically work with networks that have multiple outputs, so we might as well start thinking in terms of layers of nodes. Because a linear model consists of just a single `Dense` layer, we can instantiate it with one line.

As in [the previous notebook](linear-regression-scratch.ipynb), we have an input dimension of 2 and an output dimension of 1. The most direct way to instantiate a ``Dense`` layer with these dimensions is to specify the number of inputs and the number of outputs.
###Code
net = gluon.nn.Dense(1, in_units=2)
###Output
_____no_output_____
###Markdown
That's it! We've already got a neural network. Like our hand-crafted model in the previous notebook, this model has a weight matrix and bias vector.
###Code
print(net.weight)
print(net.bias)
###Output
_____no_output_____
###Markdown
Here, `net.weight` and `net.bias` are not actually NDArrays. They are instances of the `Parameter` class. We use `Parameter` instead of directly accessing NDArrays for several reasons. For example, they provide convenient abstractions for initializing values. Unlike NDArrays, Parameters can be associated with multiple contexts simultaneously. This will come in handy in future chapters when we start thinking about distributed learning across multiple GPUs.

In `gluon`, all neural networks are made out of Blocks (`gluon.Block`). Blocks are just units that take inputs and generate outputs. Blocks also contain parameters that we can update. Here, our network consists of only one layer, so it's convenient to access our parameters directly. When our networks consist of 10s of layers, this won't be so fun. No matter how complex our network, we can grab all its parameters by calling `collect_params()` as follows:
###Code
net.collect_params()
###Output
_____no_output_____
###Markdown
The returned object is a `gluon.parameter.ParameterDict`. This is a convenient abstraction for retrieving and manipulating groups of Parameter objects. Most often, we'll want to retrieve all of the parameters in a neural network:
###Code
type(net.collect_params())
###Output
_____no_output_____
###Markdown
Initialize parameters

Once we initialize our Parameters, we can access their underlying data and context(s), and we can also feed data through the neural network to generate output. However, we can't get going just yet. If we try invoking the model by calling ``net(nd.array([[0,1]]))``, we'll confront the following hideous error message:

```RuntimeError: Parameter dense1_weight has not been initialized...```

That's because we haven't yet told ``gluon`` what the *initial values* for our parameters should be! We initialize parameters by calling the `.initialize()` method of a ParameterDict. We'll need to pass in two arguments.

* An initializer, many of which live in the `mx.init` module.
* A context where the parameters should live. In this case we'll pass in the `model_ctx`. Most often this will either be a GPU or a list of GPUs.

*MXNet* provides a variety of common initializers in ``mxnet.init``. To keep things consistent with the model we built by hand, we'll initialize each parameter by sampling from a standard normal distribution, using `mx.init.Normal(sigma=1.)`.
###Code
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)
###Output
_____no_output_____
###Markdown
Deferred InitializationWhen we call ``initialize``, ``gluon`` associates each parameter with an initializer.However, the *actual initialization* is deferred until we make a first forward pass. In other words, the parameters are only initialized when they're needed. If we try to call `net.weight.data()` we'll get the following error:``DeferredInitializationError: Parameter dense2_weight has not been initialized yet because initialization was deferred. Actual initialization happens during the first forward pass. Please pass one batch of data through the network before accessing Parameters.``Passing data through a `gluon` model is easy. We just sample a batch of the appropriate shape and call `net` just as if it were a function. This will invoke `net`'s `forward()` method.
###Code
example_data = nd.array([[4,7]])
net(example_data)
###Output
_____no_output_____
###Markdown
Now that `net` is initialized, we can access each of its parameters.
###Code
print(net.weight.data())
print(net.bias.data())
###Output
[[-0.25217363 -0.04621419]]
<NDArray 1x2 @cpu(0)>
[ 0.]
<NDArray 1 @cpu(0)>
###Markdown
Shape inferenceRecall that previously, we instantiated our network with `gluon.nn.Dense(1, in_units=2)`. One slick feature that we can take advantage of in ``gluon`` is shape inference on parameters. Because our parameters never come into action until we pass data through the network,we don't actually have to declare the input dimension (`in_units`). Let's try this again, but letting `gluon` do more of the work:
###Code
net = gluon.nn.Dense(1)
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)
###Output
_____no_output_____
###Markdown
We'll elaborate on this and more of ``gluon``'s internal workings in subsequent chapters. Define lossInstead of writing our own loss function we're just going to access squared error by instantiating ``gluon.loss.L2Loss``. Just like layers, and whole networks, a loss in gluon is just a `Block`.
###Code
square_loss = gluon.loss.L2Loss()
###Output
_____no_output_____
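###Markdown
For reference (a short added note, not in the original tutorial text): `gluon.loss.L2Loss` computes half the squared error per example,$$\ell(\hat{y}, y) = \frac{1}{2}\,(\hat{y} - y)^2,$$so the average loss reported during training is half of the mean squared error.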
###Markdown
OptimizerInstead of writing stochastic gradient descent from scratch every time, we can instantiate a ``gluon.Trainer``, passing it a dictionary of parameters. Note that the ``SGD`` optimizer in ``gluon`` also has a few bells and whistles that you can turn on at will, including *momentum* and *clipping* (both are switched off by default). These modifications can help to converge faster and we'll discuss them later when we go over a variety of optimization algorithms in detail.
###Code
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.0001})
###Output
_____no_output_____
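###Markdown
As a reminder (added note; the exact gradient scaling described here is an assumption based on how `trainer.step(batch_size)` is called below), plain SGD updates every parameter $w$ using the gradient averaged over the mini-batch $\mathcal{B}$:$$w \leftarrow w - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \nabla_w \ell_i(w),$$with learning rate $\eta = 0.0001$ here; passing `batch_size` to `trainer.step` supplies the $1/|\mathcal{B}|$ normalization.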
###Markdown
Execute training loopYou might have noticed that it was a bit more concise to express our model in ``gluon``. For example, we didn't have to individually allocate parameters, define our loss function, or implement stochastic gradient descent. The benefits of relying on ``gluon``'s abstractions will grow substantially once we start working with much more complex models. But once we have all the basic pieces in place, the training loop itself is quite similar to what we would do if implementing everything from scratch. To refresh your memory. For some number of ``epochs``, we'll make a complete pass over the dataset (``train_data``), grabbing one mini-batch of inputs and the corresponding ground-truth labels at a time. Then, for each batch, we'll go through the following ritual. So that this process becomes maximally ritualistic, we'll repeat it verbatim:* Generate predictions (``yhat``) and the loss (``loss``) by executing a forward pass through the network.* Calculate gradients by making a backwards pass through the network via ``loss.backward()``. * Update the model parameters by invoking our SGD optimizer (note that we need not tell ``trainer.step`` about which parameters but rather just the amount of data, since we already performed that in the initialization of ``trainer``).
###Code
epochs = 10
loss_sequence = []
num_batches = num_examples / batch_size
for e in range(epochs):
cumulative_loss = 0
# inner loop
for i, (data, label) in enumerate(train_data):
data = data.as_in_context(model_ctx)
label = label.as_in_context(model_ctx)
with autograd.record():
output = net(data)
loss = square_loss(output, label)
loss.backward()
trainer.step(batch_size)
cumulative_loss += nd.mean(loss).asscalar()
print("Epoch %s, loss: %s" % (e, cumulative_loss / num_examples))
loss_sequence.append(cumulative_loss)
###Output
Epoch 0, loss: 3.44980202263
Epoch 1, loss: 2.10364257665
Epoch 2, loss: 1.28279426137
Epoch 3, loss: 0.782256319318
Epoch 4, loss: 0.477034088909
Epoch 5, loss: 0.290909814427
Epoch 6, loss: 0.177411796283
Epoch 7, loss: 0.108197494675
Epoch 8, loss: 0.0659899789031
Epoch 9, loss: 0.040249745576
###Markdown
Visualizing the learning curveNow let's check how quickly SGD learns the linear regression model by plotting the learning curve.
###Code
# plot the convergence of the estimated loss function
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.figure(num=None,figsize=(8, 6))
plt.plot(loss_sequence)
# Adding some bells and whistles to the plot
plt.grid(True, which="both")
plt.xlabel('epoch',fontsize=14)
plt.ylabel('average loss',fontsize=14)
###Output
_____no_output_____
###Markdown
As we can see, the loss function converges quickly to the optimal solution. Getting the learned model parametersAs an additional sanity check, since we generated the data from a Gaussian linear regression model, we want to make sure that the learner managed to recover the model parameters, which were set to weight $2,-3.4$ with an offset of $4.2$.
###Code
params = net.collect_params() # this returns a ParameterDict
print('The type of "params" is a ',type(params))
# A ParameterDict is a dictionary of Parameter class objects
# therefore, here is how we can read off the parameters from it.
for param in params.values():
print(param.name,param.data())
###Output
The type of "params" is a <class 'mxnet.gluon.parameter.ParameterDict'>
dense5_weight
[[ 1.7913872 -3.10427046]]
<NDArray 1x2 @cpu(0)>
dense5_bias
[ 3.85259581]
<NDArray 1 @cpu(0)>
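###Markdown
As one more optional check (a small sketch using only objects already defined above), the recovered weights and bias can be compared directly against the generating values `[2, -3.4]` and `4.2` from `real_fn`:
###Code
# absolute error between the learned parameters and the ones used in real_fn
learned_w = net.weight.data()
learned_b = net.bias.data()
true_w = nd.array([[2, -3.4]])
true_b = nd.array([4.2])
print('weight error:', nd.abs(learned_w - true_w))
print('bias error:', nd.abs(learned_b - true_b))
###Output
_____no_output_____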
###Markdown
Linear regression with ``gluon``Now that we've implemented a whole neural network from scratch, using nothing but ``mx.ndarray`` and ``mxnet.autograd``, let's see how we can make the same model while doing a lot less work. Again, let's import some packages, this time adding ``mxnet.gluon`` to the list of dependencies.
###Code
from __future__ import print_function
import mxnet as mx
from mxnet import nd, autograd, gluon
###Output
_____no_output_____
###Markdown
Set the contextWe'll also want to set a context to tell gluon where to do most of the computation.
###Code
data_ctx = mx.cpu()
model_ctx = mx.cpu()
###Output
_____no_output_____
###Markdown
Build the datasetAgain we'll look at the problem of linear regression and stick with the same synthetic data.
###Code
num_inputs = 2
num_outputs = 1
num_examples = 10000
def real_fn(X):
return 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2
X = nd.random_normal(shape=(num_examples, num_inputs))
noise = 0.01 * nd.random_normal(shape=(num_examples,))
y = real_fn(X) + noise
###Output
_____no_output_____
###Markdown
Load the data iteratorWe'll stick with the ``DataLoader`` for handling our data batching.
###Code
batch_size = 4
train_data = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y),
batch_size=batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Define the modelWhen we implemented things from scratch, we had to individually allocate parameters and then compose them together as a model. While it's good to know how to do things from scratch, with `gluon`, we can just compose a network from predefined layers. For a linear model, the appropriate layer is called `Dense`. It's called a *dense* layer because every node in the input is connected to every node in the subsequent layer. That description seems excessive because we only have one (non-input) layer here, and that layer only contains one node!But in subsequent chapters we'll typically work with networks that have multiple outputs, so we might as well start thinking in terms of layers of nodes. Because a linear model consists of just a single `Dense` layer, we can instantiate it with one line.As in [the previous notebook](linear-regression-scratch.ipynb), we have an input dimension of 2 and an output dimension of 1. the most direct way to instantiate a ``Dense`` layer with these dimensionsis to specify the number of inputs and the number of outputs.
###Code
net = gluon.nn.Dense(1, in_units=2)
###Output
_____no_output_____
###Markdown
That's it! We've already got a neural network. Like our hand-crafted model in the previous notebook, this model has a weight matrix and bias vector.
###Code
print(net.weight)
print(net.bias)
###Output
_____no_output_____
###Markdown
Here, `net.weight` and `net.bias` are not actually NDArrays.They are instances of the `Parameter` class.We use `Parameter` instead of directly accessing NDAarrays for several reasons. For example, they provide convenient abstractions for initializing values.Unlike NDArrays, Parameters can be associated with multiple contexts simultaneously.This will come in handy in future chapters when we start thinking about distributed learning across multiple GPUs.In `gluon`, all neural networks are made out of Blocks (`gluon.Block`).Blocks are just units that take inputs and generate outputs.Blocks also contain parameters that we can update. Here, our network consists of only one layer, so it's convenient to access our parameters directly. When our networks consist of 10s of layers, this won't be so fun.No matter how complex our network, we can grab all its parameters by calling `collect_params()` as follows:
###Code
net.collect_params()
###Output
_____no_output_____
###Markdown
The returned object is a `gluon.parameter.ParameterDict`. This is a convenient abstraction for retrieving and manipulating groups of Parameter objects.Most often, we'll want to retrieve all of the parameters in a neural network:
###Code
type(net.collect_params())
###Output
_____no_output_____
###Markdown
Initialize parametersOnce we initialize our Parameters, we can access their underlying data and context(s),and we can also feed data through the neural network to generate output.However, we can't get going just yet. If we try invoking your model by calling ``net(nd.array([[0,1]]))``, we'll confront the following hideous error message:```RuntimeError: Parameter dense1_weight has not been initialized...```That's because we haven't yet told ``gluon`` what the *initial values* for our parameters should be!We initialize parameters by calling the `.initialize()` method of a ParameterDict. We'll need to pass in two arguments. * An initializer, many of which live in the `mx.init` module. * A context where the parameters should live. In this case we'll pass in the `model_ctx`. Most often this will either be a GPU or a list of GPUs. *MXNet* provides a variety of common initializers in ``mxnet.init``.To keep things consistent with the model we built by hand, we'll initialize each parameter by sampling from a standard normal distribution, using `mx.init.Normal(sigma=1.)`.
###Code
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)
###Output
_____no_output_____
###Markdown
Deferred InitializationWhen we call ``initialize``, ``gluon`` associates each parameter with an initializer.However, the *actual initialization* is deferred until we make a first forward pass. In other words, the parameters are only initialized when they're needed. If we try to call `net.weight.data()` we'll get the following error:``DeferredInitializationError: Parameter dense2_weight has not been initialized yet because initialization was deferred. Actual initialization happens during the first forward pass. Please pass one batch of data through the network before accessing Parameters.``Passing data through a `gluon` model is easy. We just sample a batch of the appropriate shape and call `net` just as if it were a function. This will invoke `net`'s `forward()` method.
###Code
example_data = nd.array([[4,7]])
net(example_data)
###Output
_____no_output_____
###Markdown
Now that `net` is initialized, we can access each of its parameters.
###Code
print(net.weight.data())
print(net.bias.data())
###Output
[[-0.25217363 -0.04621419]]
<NDArray 1x2 @cpu(0)>
[ 0.]
<NDArray 1 @cpu(0)>
###Markdown
Shape inferenceRecall that previously, we instantiated our network with `gluon.nn.Dense(1, in_units=2)`. One slick feature that we can take advantage of in ``gluon`` is shape inference on parameters. Because our parameters never come into action until we pass data through the network,we don't actually have to declare the input dimension (`in_units`). Let's try this again, but letting `gluon` do more of the work:
###Code
net = gluon.nn.Dense(1)
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)
###Output
_____no_output_____
###Markdown
We'll elaborate on this and more of ``gluon``'s internal workings in subsequent chapters. Define lossInstead of writing our own loss function we're just going to access squared error by instantiating ``gluon.loss.L2Loss``. Just like layers, and whole networks, a loss in gluon is just a `Block`.
###Code
square_loss = gluon.loss.L2Loss()
###Output
_____no_output_____
###Markdown
OptimizerInstead of writing stochastic gradient descent from scratch every time, we can instantiate a ``gluon.Trainer``, passing it a dictionary of parameters. Note that the ``SGD`` optimizer in ``gluon`` also has a few bells and whistles that you can turn on at will, including *momentum* and *clipping* (both are switched off by default). These modifications can help to converge faster and we'll discuss them later when we go over a variety of optimization algorithms in detail.
###Code
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.0001})
###Output
_____no_output_____
###Markdown
Execute training loopYou might have noticed that it was a bit more concise to express our model in ``gluon``. For example, we didn't have to individually allocate parameters, define our loss function, or implement stochastic gradient descent. The benefits of relying on ``gluon``'s abstractions will grow substantially once we start working with much more complex models. But once we have all the basic pieces in place, the training loop itself is quite similar to what we would do if implementing everything from scratch. To refresh your memory. For some number of ``epochs``, we'll make a complete pass over the dataset (``train_data``), grabbing one mini-batch of inputs and the corresponding ground-truth labels at a time. Then, for each batch, we'll go through the following ritual. So that this process becomes maximally ritualistic, we'll repeat it verbatim:* Generate predictions (``yhat``) and the loss (``loss``) by executing a forward pass through the network.* Calculate gradients by making a backwards pass through the network via ``loss.backward()``. * Update the model parameters by invoking our SGD optimizer (note that we need not tell ``trainer.step`` about which parameters but rather just the amount of data, since we already performed that in the initialization of ``trainer``).
###Code
epochs = 10
loss_sequence = []
num_batches = num_examples / batch_size
for e in range(epochs):
cumulative_loss = 0
# inner loop
for i, (data, label) in enumerate(train_data):
data = data.as_in_context(model_ctx)
label = label.as_in_context(model_ctx)
with autograd.record():
output = net(data)
loss = square_loss(output, label)
loss.backward()
trainer.step(batch_size)
cumulative_loss += nd.mean(loss).asscalar()
print("Epoch %s, loss: %s" % (e, cumulative_loss / num_examples))
loss_sequence.append(cumulative_loss)
###Output
Epoch 0, loss: 3.44980202263
Epoch 1, loss: 2.10364257665
Epoch 2, loss: 1.28279426137
Epoch 3, loss: 0.782256319318
Epoch 4, loss: 0.477034088909
Epoch 5, loss: 0.290909814427
Epoch 6, loss: 0.177411796283
Epoch 7, loss: 0.108197494675
Epoch 8, loss: 0.0659899789031
Epoch 9, loss: 0.040249745576
###Markdown
Visualizing the learning curveNow let's check how quickly SGD learns the linear regression model by plotting the learning curve.
###Code
# plot the convergence of the estimated loss function
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.figure(num=None,figsize=(8, 6))
plt.plot(loss_sequence)
# Adding some bells and whistles to the plot
plt.grid(True, which="both")
plt.xlabel('epoch',fontsize=14)
plt.ylabel('average loss',fontsize=14)
###Output
_____no_output_____
###Markdown
As we can see, the loss function converges quickly to the optimal solution. Getting the learned model parametersAs an additional sanity check, since we generated the data from a Gaussian linear regression model, we want to make sure that the learner managed to recover the model parameters, which were set to weight $2,-3.4$ with an offset of $4.2$.
###Code
params = net.collect_params() # this returns a ParameterDict
print('The type of "params" is a ',type(params))
# A ParameterDict is a dictionary of Parameter class objects
# therefore, here is how we can read off the parameters from it.
for param in params.values():
print(param.name,param.data())
###Output
The type of "params" is a <class 'mxnet.gluon.parameter.ParameterDict'>
dense5_weight
[[ 1.7913872 -3.10427046]]
<NDArray 1x2 @cpu(0)>
dense5_bias
[ 3.85259581]
<NDArray 1 @cpu(0)>
###Markdown
Linear regression with ``gluon``Now that we've implemented a whole neural network from scratch, using nothing but ``mx.ndarray`` and ``mxnet.autograd``, let's see how we can make the same model while doing a lot less work. Again, let's import some packages, this time adding ``mxnet.gluon`` to the list of dependencies.
###Code
from __future__ import print_function
import mxnet as mx
from mxnet import nd, autograd, gluon
###Output
_____no_output_____
###Markdown
Set the contextWe'll also want to set a context to tell gluon where to do most of the computation.
###Code
data_ctx = mx.cpu()
model_ctx = mx.cpu()
###Output
_____no_output_____
###Markdown
Build the datasetAgain we'll look at the problem of linear regression and stick with the same synthetic data.
###Code
num_inputs = 2
num_outputs = 1
num_examples = 10000
def real_fn(X):
return 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2
X = nd.random_normal(shape=(num_examples, num_inputs))
noise = 0.01 * nd.random_normal(shape=(num_examples,))
y = real_fn(X) + noise
###Output
_____no_output_____
###Markdown
Load the data iteratorWe'll stick with the ``DataLoader`` for handling our data batching.
###Code
batch_size = 4
train_data = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y),
batch_size=batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Define the modelWhen we implemented things from scratch, we had to individually allocate parameters and then compose them together as a model. While it's good to know how to do things from scratch, with `gluon`, we can just compose a network from predefined layers. For a linear model, the appropriate layer is called `Dense`. It's called a *dense* layer because every node in the input is connected to every node in the subsequent layer. That description seems excessive because we only have one (non-input) layer here, and that layer only contains one node!But in subsequent chapters we'll typically work with networks that have multiple outputs, so we might as well start thinking in terms of layers of nodes. Because a linear model consists of just a single `Dense` layer, we can instantiate it with one line.As in [the previous notebook](linear-regression-scratch.ipynb), we have an inputdimension of 2 and an output dimension of 1. the most direct way to instantiate a ``Dense`` layer with these dimensionsis to specify the number of inputs and the number of outputs.
###Code
net = gluon.nn.Dense(1, in_units=2)
###Output
_____no_output_____
###Markdown
That's it! We've already got a neural network. Like our hand-crafted model in the previous notebook, this model has a weight matrix and bias vector.
###Code
print(net.weight)
print(net.bias)
###Output
_____no_output_____
###Markdown
Here, `net.weight` and `net.bias` are not actually NDArrays.They are instances of the `Parameter` class.We use `Parameter` instead of directly accessing NDAarrays for several reasons. For example, they provide convenient abstractions for initializing values.Unlike NDArrays, Parameters can be associated with multiple contexts simultaneously.This will come in handy in future chapters when we start thinking about distributed learning across multiple GPUs.In `gluon`, all neural networks are made out of Blocks (`gluon.Block`).Blocks are just units that take inputs and generate outputs.Blocks also contain parameters that we can update. Here, our network consists of only one layer, so it's convenient to access our parameters directly. When our networks consist of 10s of layers, this won't be so fun.No matter how complex our network, we can grab all its parameters by calling `collect_params()` as follows:
###Code
net.collect_params()
###Output
_____no_output_____
###Markdown
The returned object is a `gluon.parameter.ParameterDict`. This is a convenient abstraction for retrieving and manipulating groups of Parameter objects.Most often, we'll want to retrieve all of the parameters in a neural network:
###Code
type(net.collect_params())
###Output
_____no_output_____
###Markdown
Initialize parametersOnce we initialize our Parameters, we can access their underlying data and context(s),and we can also feed data through the neural network to generate output.However, we can't get going just yet. If we try invoking your model by calling ``net(nd.array([[0,1]]))``, we'll confront the following hideous error message:```RuntimeError: Parameter dense1_weight has not been initialized...```That's because we haven't yet told ``gluon`` what the *initial values* for our parameters should be!We initialize parameters by calling the `.initialize()` method of a ParameterDict. We'll need to pass in two arguments. * An initializer, many of which live in the `mx.init` module. * A context where the parameters should live. In this case we'll pass in the `model_ctx`. Most often this will either be a GPU or a list of GPUs. *MXNet* provides a variety of common initializers in ``mxnet.init``.To keep things consistent with the model we built by hand, we'll initialize each parameter by sampling from a standard normal distribution, using `mx.init.Normal(sigma=1.)`.
###Code
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)
###Output
_____no_output_____
###Markdown
Deferred InitializationWhen we call ``initialize``, ``gluon`` associates each parameter with an initializer.However, the *actual initialization* is deferred until we make a first forward pass. In other words, the parameters are only initialized when they're needed. If we try to call `net.weight.data()` we'll get the following error:``DeferredInitializationError: Parameter dense2_weight has not been initialized yet because initialization was deferred. Actual initialization happens during the first forward pass. Please pass one batch of data through the network before accessing Parameters.``Passing data through a `gluon` model is easy. We just sample a batch of the appropriate shape and call `net` just as if it were a function. This will invoke net's `forward()` method.
###Code
example_data = nd.array([[4,7]])
net(example_data)
###Output
_____no_output_____
###Markdown
Now that `net` is initialized, we can access each of its parameters.
###Code
print(net.weight.data())
print(net.bias.data())
###Output
[[-0.25217363 -0.04621419]]
<NDArray 1x2 @cpu(0)>
[ 0.]
<NDArray 1 @cpu(0)>
###Markdown
Shape inferenceRecall that previously, we instantiated our network with `gluon.nn.Dense(1, in_units=2)`. One slick feature that we can take advantage of in ``gluon`` is shape inference on parameters. Because our parameters never come into action until we pass data through the network,we don't actually have to declare the input dimension (`in_units`). Let's try this again, but letting `gluon` do more of the work:
###Code
net = gluon.nn.Dense(1)
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)
###Output
_____no_output_____
###Markdown
We'll elaborate on this and more of ``gluon``'s internal workings in subsequent chapters. Define loss Instead of writing our own loss function, we're just going to access squared error by instantiating ``gluon.loss.L2Loss``. Just like layers and whole networks, a loss in gluon is just a `Block`.
###Code
square_loss = gluon.loss.L2Loss()
###Output
_____no_output_____
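###Markdown
 One detail worth knowing (a small illustrative check, not part of the original notebook): gluon's ``L2Loss`` follows the convention of including a factor of $1/2$, i.e. it computes $\frac{1}{2}(\hat{y} - y)^2$ averaged over the batch, so its values are half of a plain mean-squared error.
###Code
# sketch: evaluate the loss on a tiny example; with prediction 1.0 and label 0.0,
# the 1/2 convention means the result should be about 0.5 rather than 1.0
print(square_loss(nd.array([1.0]), nd.array([0.0])))
###Output
_____no_output_____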
###Markdown
Optimizer Instead of writing stochastic gradient descent from scratch every time, we can instantiate a ``gluon.Trainer``, passing it a dictionary of parameters. Note that the ``sgd`` optimizer in ``gluon`` also supports *momentum* and gradient *clipping* (both are switched off by default); these modifications can help the model converge faster. We will discuss this later when we go over a range of optimization algorithms in detail.
###Code
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.0001})
###Output
_____no_output_____
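###Markdown
 If we did want momentum or gradient clipping, we could request them explicitly through the optimizer parameters. The cell below is just a sketch of what that would look like; the extra trainer is not used anywhere else in this notebook.
###Code
# sketch: the same Trainer, but with momentum and gradient clipping switched on
trainer_with_momentum = gluon.Trainer(net.collect_params(), 'sgd',
                                      {'learning_rate': 0.0001,
                                       'momentum': 0.9,
                                       'clip_gradient': 10.0})
###Output
_____no_output_____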
###Markdown
Execute training loop
You might have noticed that it was a bit more concise to express our model in ``gluon``. For example, we didn't have to individually allocate parameters, define our loss function, or implement stochastic gradient descent. The benefits of relying on ``gluon``'s abstractions will grow substantially once we start working with much more complex models. But once we have all the basic pieces in place, the training loop itself is quite similar to what we would do if implementing everything from scratch. To refresh your memory: for some number of ``epochs``, we'll make a complete pass over the dataset (``train_data``), grabbing one mini-batch of inputs and the corresponding ground-truth labels at a time. Then, for each batch, we'll go through the following ritual. So that this process becomes maximally ritualistic, we'll repeat it verbatim:
* Generate predictions (``yhat``) and the loss (``loss``) by executing a forward pass through the network.
* Calculate gradients by making a backwards pass through the network via ``loss.backward()``.
* Update the model parameters by invoking our SGD optimizer (note that we need not tell ``trainer.step`` which parameters to update, only the amount of data in the batch, since we already passed the parameters when constructing ``trainer``).
###Code
epochs = 10
loss_sequence = []
num_batches = num_examples / batch_size
for e in range(epochs):
cumulative_loss = 0
# inner loop
for i, (data, label) in enumerate(train_data):
data = data.as_in_context(model_ctx)
label = label.as_in_context(model_ctx)
with autograd.record():
output = net(data)
loss = square_loss(output, label)
loss.backward()
trainer.step(batch_size)
cumulative_loss += nd.mean(loss).asscalar()
print("Epoch %s, loss: %s" % (e, cumulative_loss / num_examples))
loss_sequence.append(cumulative_loss)
###Output
Epoch 0, loss: 3.44980202263
Epoch 1, loss: 2.10364257665
Epoch 2, loss: 1.28279426137
Epoch 3, loss: 0.782256319318
Epoch 4, loss: 0.477034088909
Epoch 5, loss: 0.290909814427
Epoch 6, loss: 0.177411796283
Epoch 7, loss: 0.108197494675
Epoch 8, loss: 0.0659899789031
Epoch 9, loss: 0.040249745576
###Markdown
Visualizing the learning curve Now let's check how quickly SGD learns the linear regression model by plotting the learning curve.
###Code
# plot the convergence of the estimated loss function
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.figure(num=None,figsize=(8, 6))
plt.plot(loss_sequence)
# Adding some bells and whistles to the plot
plt.grid(True, which="both")
plt.xlabel('epoch',fontsize=14)
plt.ylabel('average loss',fontsize=14)
###Output
_____no_output_____
###Markdown
As we can see, the loss function converges quickly to the optimal solution. Getting the learned model parameters As an additional sanity check, since we generated the data from a Gaussian linear regression model, we want to make sure that the learner managed to recover the model parameters, which were set to weight $2,-3.4$ with an offset of $4.2$.
###Code
params = net.collect_params() # this returns a ParameterDict
print('The type of "params" is a ',type(params))
# A ParameterDict is a dictionary of Parameter class objects
# therefore, here is how we can read off the parameters from it.
for param in params.values():
print(param.name,param.data())
###Output
The type of "params" is a <class 'mxnet.gluon.parameter.ParameterDict'>
dense5_weight
[[ 1.7913872 -3.10427046]]
<NDArray 1x2 @cpu(0)>
dense5_bias
[ 3.85259581]
<NDArray 1 @cpu(0)>
###Markdown
Linear regression with ``gluon`` Now that we've implemented a whole neural network from scratch, using nothing but ``mx.ndarray`` and ``mxnet.autograd``, let's see how we can make the same model while doing a lot less work. Again, let's import some packages, this time adding ``mxnet.gluon`` to the list of dependencies.
###Code
from __future__ import print_function
import mxnet as mx
from mxnet import nd, autograd, gluon
###Output
_____no_output_____
###Markdown
Set the context We'll also want to set a context to tell gluon where to do most of the computation.
###Code
data_ctx = mx.cpu()
model_ctx = mx.cpu()
###Output
_____no_output_____
###Markdown
Build the dataset Again we'll look at the problem of linear regression and stick with the same synthetic data.
###Code
num_inputs = 2
num_outputs = 1
num_examples = 10000
def real_fn(X):
return 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2
X = nd.random_normal(shape=(num_examples, num_inputs))
noise = 0.01 * nd.random_normal(shape=(num_examples,))
y = real_fn(X) + noise
###Output
_____no_output_____
###Markdown
Load the data iterator We'll stick with the ``DataLoader`` for handling our data batching.
###Code
batch_size = 4
train_data = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y),
batch_size=batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Define the model When we implemented things from scratch, we had to individually allocate parameters and then compose them together as a model. While it's good to know how to do things from scratch, with `gluon`, we can just compose a network from predefined layers. For a linear model, the appropriate layer is called `Dense`. It's called a *dense* layer because every node in the input is connected to every node in the subsequent layer. That description seems excessive because we only have one (non-input) layer here, and that layer only contains one node! But in subsequent chapters we'll typically work with networks that have multiple outputs, so we might as well start thinking in terms of layers of nodes. Because a linear model consists of just a single `Dense` layer, we can instantiate it with one line. As in [the previous notebook](linear-regression-scratch.ipynb), we have an input dimension of 2 and an output dimension of 1. The most direct way to instantiate a ``Dense`` layer with these dimensions is to specify the number of inputs and the number of outputs.
###Code
net = gluon.nn.Dense(1, in_units=2)
###Output
_____no_output_____
###Markdown
That's it! We've already got a neural network. Like our hand-crafted model in the previous notebook, this model has a weight matrix and bias vector.
###Code
print(net.weight)
print(net.bias)
###Output
_____no_output_____
###Markdown
Here, `net.weight` and `net.bias` are not actually NDArrays. They are instances of the `Parameter` class. We use `Parameter` instead of directly accessing NDArrays for several reasons. For example, they provide convenient abstractions for initializing values. Unlike NDArrays, Parameters can be associated with multiple contexts simultaneously. This will come in handy in future chapters when we start thinking about distributed learning across multiple GPUs. In `gluon`, all neural networks are made out of Blocks (`gluon.Block`). Blocks are just units that take inputs and generate outputs. Blocks also contain parameters that we can update. Here, our network consists of only one layer, so it's convenient to access our parameters directly. When our networks consist of tens of layers, this won't be so fun. No matter how complex our network, we can grab all its parameters by calling `collect_params()` as follows:
###Code
net.collect_params()
###Output
_____no_output_____
###Markdown
The returned object is a `gluon.parameter.ParameterDict`. This is a convenient abstraction for retrieving and manipulating groups of Parameter objects. Most often, we'll want to retrieve all of the parameters in a neural network:
###Code
type(net.collect_params())
###Output
_____no_output_____
###Markdown
Initialize parameters
Once we initialize our Parameters, we can access their underlying data and context(s), and we can also feed data through the neural network to generate output. However, we can't get going just yet. If we try invoking our model by calling ``net(nd.array([[0,1]]))``, we'll confront the following hideous error message: ```RuntimeError: Parameter dense1_weight has not been initialized...``` That's because we haven't yet told ``gluon`` what the *initial values* for our parameters should be! We initialize parameters by calling the `.initialize()` method of a ParameterDict. We'll need to pass in two arguments.
* An initializer, many of which live in the `mx.init` module.
* A context where the parameters should live. In this case we'll pass in the `model_ctx`. Most often this will either be a GPU or a list of GPUs.
*MXNet* provides a variety of common initializers in ``mxnet.init``. To keep things consistent with the model we built by hand, we'll initialize each parameter by sampling from a standard normal distribution, using `mx.init.Normal(sigma=1.)`.
###Code
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)
###Output
_____no_output_____
###Markdown
Deferred Initialization When we call ``initialize``, ``gluon`` associates each parameter with an initializer. However, the *actual initialization* is deferred until we make a first forward pass. In other words, the parameters are only initialized when they're needed. If we try to call `net.weight.data()` we'll get the following error: ``DeferredInitializationError: Parameter dense2_weight has not been initialized yet because initialization was deferred. Actual initialization happens during the first forward pass. Please pass one batch of data through the network before accessing Parameters.`` Passing data through a `gluon` model is easy. We just sample a batch of the appropriate shape and call `net` just as if it were a function. This will invoke `net`'s `forward()` method.
###Code
example_data = nd.array([[4,7]])
net(example_data)
###Output
_____no_output_____
###Markdown
Now that `net` is initialized, we can access each of its parameters.
###Code
print(net.weight.data())
print(net.bias.data())
###Output
[[-0.25217363 -0.04621419]]
<NDArray 1x2 @cpu(0)>
[ 0.]
<NDArray 1 @cpu(0)>
###Markdown
Shape inference Recall that previously, we instantiated our network with `gluon.nn.Dense(1, in_units=2)`. One slick feature that we can take advantage of in ``gluon`` is shape inference on parameters. Because our parameters never come into action until we pass data through the network, we don't actually have to declare the input dimension (`in_units`). Let's try this again, but letting `gluon` do more of the work:
###Code
net = gluon.nn.Dense(1)
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)
###Output
_____no_output_____
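###Markdown
 To watch the shape inference actually happen, we can pass a single batch through this re-built network and then look at the shape that ``gluon`` inferred for the weight matrix (a small illustrative check; these weights are freshly initialized, so their values differ from the earlier run).
###Code
# sketch: one forward pass triggers the deferred initialization,
# after which the weight shape has been inferred from the data
net(nd.array([[4, 7]]))
print(net.weight.data().shape)
###Output
_____no_output_____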
###Markdown
We'll elaborate on this and more of ``gluon``'s internal workings in subsequent chapters. Define loss Instead of writing our own loss function, we're just going to access squared error by instantiating ``gluon.loss.L2Loss``. Just like layers and whole networks, a loss in gluon is just a `Block`.
###Code
square_loss = gluon.loss.L2Loss()
###Output
_____no_output_____
###Markdown
Optimizer Instead of writing stochastic gradient descent from scratch every time, we can instantiate a ``gluon.Trainer``, passing it a dictionary of parameters. Note that the ``SGD`` optimizer in ``gluon`` also has a few bells and whistles that you can turn on at will, including *momentum* and *clipping* (both are switched off by default). These modifications can help the model converge faster, and we'll discuss them later when we go over a variety of optimization algorithms in detail.
###Code
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.0001})
###Output
_____no_output_____
###Markdown
Execute training loop
You might have noticed that it was a bit more concise to express our model in ``gluon``. For example, we didn't have to individually allocate parameters, define our loss function, or implement stochastic gradient descent. The benefits of relying on ``gluon``'s abstractions will grow substantially once we start working with much more complex models. But once we have all the basic pieces in place, the training loop itself is quite similar to what we would do if implementing everything from scratch. To refresh your memory: for some number of ``epochs``, we'll make a complete pass over the dataset (``train_data``), grabbing one mini-batch of inputs and the corresponding ground-truth labels at a time. Then, for each batch, we'll go through the following ritual. So that this process becomes maximally ritualistic, we'll repeat it verbatim:
* Generate predictions (``yhat``) and the loss (``loss``) by executing a forward pass through the network.
* Calculate gradients by making a backwards pass through the network via ``loss.backward()``.
* Update the model parameters by invoking our SGD optimizer (note that we need not tell ``trainer.step`` which parameters to update, only the amount of data in the batch, since we already passed the parameters when constructing ``trainer``).
###Code
epochs = 10
loss_sequence = []
num_batches = num_examples / batch_size
for e in range(epochs):
cumulative_loss = 0
# inner loop
for i, (data, label) in enumerate(train_data):
data = data.as_in_context(model_ctx)
label = label.as_in_context(model_ctx)
with autograd.record():
output = net(data)
loss = square_loss(output, label)
loss.backward()
trainer.step(batch_size)
cumulative_loss += nd.mean(loss).asscalar()
print("Epoch %s, loss: %s" % (e, cumulative_loss / num_examples))
loss_sequence.append(cumulative_loss)
###Output
Epoch 0, loss: 3.44980202263
Epoch 1, loss: 2.10364257665
Epoch 2, loss: 1.28279426137
Epoch 3, loss: 0.782256319318
Epoch 4, loss: 0.477034088909
Epoch 5, loss: 0.290909814427
Epoch 6, loss: 0.177411796283
Epoch 7, loss: 0.108197494675
Epoch 8, loss: 0.0659899789031
Epoch 9, loss: 0.040249745576
###Markdown
Visualizing the learning curve Now let's check how quickly SGD learns the linear regression model by plotting the learning curve.
###Code
# plot the convergence of the estimated loss function
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.figure(num=None,figsize=(8, 6))
plt.plot(loss_sequence)
# Adding some bells and whistles to the plot
plt.grid(True, which="both")
plt.xlabel('epoch',fontsize=14)
plt.ylabel('average loss',fontsize=14)
###Output
_____no_output_____
###Markdown
As we can see, the loss function converges quickly to the optimal solution. Getting the learned model parameters As an additional sanity check, since we generated the data from a Gaussian linear regression model, we want to make sure that the learner managed to recover the model parameters, which were set to weight $2,-3.4$ with an offset of $4.2$.
###Code
params = net.collect_params() # this returns a ParameterDict
print('The type of "params" is a ',type(params))
# A ParameterDict is a dictionary of Parameter class objects
# therefore, here is how we can read off the parameters from it.
for param in params.values():
print(param.name,param.data())
###Output
The type of "params" is a <class 'mxnet.gluon.parameter.ParameterDict'>
dense5_weight
[[ 1.7913872 -3.10427046]]
<NDArray 1x2 @cpu(0)>
dense5_bias
[ 3.85259581]
<NDArray 1 @cpu(0)>
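###Markdown
 As one last rough check (a sketch added here, not part of the original notebook), we can measure how far the learned parameters still are from the values used to generate the data; with only 10 epochs and a small learning rate they are close, but not fully converged.
###Code
# sketch: absolute gap between learned and generating parameters
true_w = nd.array([[2, -3.4]])
true_b = nd.array([4.2])
print('weight gap:', (net.weight.data() - true_w).abs())
print('bias gap:', (net.bias.data() - true_b).abs())
###Output
_____no_output_____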
|
ipynb/Germany-Brandenburg-LK-Uckermark.ipynb | ###Markdown
Germany: LK Uckermark (Brandenburg)
* Homepage of project: https://oscovida.github.io
* Plots are explained at http://oscovida.github.io/plots.html
* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Brandenburg-LK-Uckermark.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Uckermark", weeks=5);
overview(country="Germany", subregion="LK Uckermark");
compare_plot(country="Germany", subregion="LK Uckermark", dates="2020-03-15:");
# load the data
cases, deaths = germany_get_region(landkreis="LK Uckermark")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser
- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Brandenburg-LK-Uckermark.ipynb)
- and wait (~1 to 2 minutes)
- Then press SHIFT+RETURN to advance code cell to code cell
- See http://jupyter.org for more details on how to use Jupyter Notebook
Acknowledgements:
- Johns Hopkins University provides data for countries
- Robert Koch Institute provides data for within Germany
- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)
- Open source and scientific computing community for the data tools
- Github for hosting repository and html files
- Project Jupyter for the Notebook and binder service
- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))
--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Germany: LK Uckermark (Brandenburg)
* Homepage of project: https://oscovida.github.io
* Plots are explained at http://oscovida.github.io/plots.html
* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Brandenburg-LK-Uckermark.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Uckermark", weeks=5);
overview(country="Germany", subregion="LK Uckermark");
compare_plot(country="Germany", subregion="LK Uckermark", dates="2020-03-15:");
# load the data
cases, deaths = germany_get_region(landkreis="LK Uckermark")
# get population of the region for future normalisation:
inhabitants = population(country="Germany", subregion="LK Uckermark")
print(f'Population of country="Germany", subregion="LK Uckermark": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser
- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Brandenburg-LK-Uckermark.ipynb)
- and wait (~1 to 2 minutes)
- Then press SHIFT+RETURN to advance code cell to code cell
- See http://jupyter.org for more details on how to use Jupyter Notebook
Acknowledgements:
- Johns Hopkins University provides data for countries
- Robert Koch Institute provides data for within Germany
- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)
- Open source and scientific computing community for the data tools
- Github for hosting repository and html files
- Project Jupyter for the Notebook and binder service
- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))
--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Germany: LK Uckermark (Brandenburg)
* Homepage of project: https://oscovida.github.io
* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Brandenburg-LK-Uckermark.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Uckermark");
# load the data
cases, deaths, region_label = germany_get_region(landkreis="LK Uckermark")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser
- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Brandenburg-LK-Uckermark.ipynb)
- and wait (~1 to 2 minutes)
- Then press SHIFT+RETURN to advance code cell to code cell
- See http://jupyter.org for more details on how to use Jupyter Notebook
Acknowledgements:
- Johns Hopkins University provides data for countries
- Robert Koch Institute provides data for within Germany
- Open source and scientific computing community for the data tools
- Github for hosting repository and html files
- Project Jupyter for the Notebook and binder service
- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))
--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____ |
notebooks/Episode 6 Visualisation.ipynb | ###Markdown
Graphing with Pandas
2016-11-18
https://data-lessons.github.io/library-python/06a-plotting-with-pandas/
###Code
%pylab inline
import pandas as pd
articles_df = pd.read_csv("articles.csv")
small_dataset = articles_df[:50]
ax = small_dataset.Author_Count.plot(title='Number of Authors')
fig = ax.get_figure()
fig.savefig('myplot.pdf')
fig.savefig('myplot.png')
ax = small_dataset.Author_Count.plot(title='', figsize=(6, 3))
plt.xlabel('Index')
plt.ylabel('Author Count')
plt.savefig('myplot2.pdf', dpi=200, bbox_inches='tight')
ax = small_dataset.Author_Count.plot.bar(title='', figsize=(10, 3), color='#aa5599')
plt.xlabel('Index')
plt.ylabel('Author Count')
plt.savefig('myplot_bar.pdf', dpi=200, bbox_inches='tight')
ax = small_dataset.Author_Count.plot(title='', figsize=(6, 3), style='o', marker='+')
plt.xlabel('Index')
plt.ylabel('Author Count')
ax = small_dataset.Author_Count.plot(title='',
figsize=(10, 3),
color='#aa5599',
xlim=(10, 20), legend=True)
plt.xlabel('Index')
plt.ylabel('Author Count')
# plot author counts and citation counts against the same x-axis
ax1 = small_dataset.Author_Count.plot(color='g')
ax2 = ax1.twinx()  # second y-axis on the right, sharing the same x-axis
small_dataset.Citation_Count.plot(color='r', ax=ax2)
ax1.set_ylabel('Author count')
ax2.set_ylabel('Citation count')
by_month = articles_df.groupby('Month')
ax = by_month.Title.count().plot(kind='bar',
color='green',
title='Article count per month')
ax.set_ylabel('Number of articles')
ax = articles_df.boxplot(column=['Author_Count', 'Citation_Count'],
by='LanguageId')
articles_df.plot()
###Output
_____no_output_____ |
notebooks/Learning Units/Getting Started/Regression.ipynb | ###Markdown
Regression

Regression is a supervised task where a model maps input to a continuous output. More formally, a regression problem can be defined as learning a function $f$ that will map input variables $X = x_0, x_1,\dots, x_{m-1}, x_{m}$ to a continuous target variable $y$ such that $f(x) = y$. So for instance, let's say that we have the following data:

| Variable 1 | Variable 2 | Variable 3 | Variable 4 | Target variable |
|------------|------------|------------|------------|:---------------:|
| 1 | 2 | 3 | 4 | 10 |
| 2 | 3 | 4 | 5 | 14 |
| 3 | 4 | 5 | 6 | 18 |
| ... | ... | ... | ... | ... |
| 2000 | 2001 | 2002 | 2003 | 8006 |

We would want to learn some function such that $f(1,2,3,4) = 10$ and $f(2,3,4,5) = 14$ and so on. Regression is often compared to curve fitting, since it is trying to fit some function $f$ that will follow a similar curve as the data.

Evaluating regression

Machine learning is all about getting better and better at a task. Therefore, we need to define what it means to be _good_. For instance, given the output of different models compared to the target variable, which model would you say is better, and why?

| Target | 0.55 | 0.72 | 0.6 | 0.54 | 0.42 | 0.65 | 0.44 | 0.89 | 0.96 | 0.38 |
|:-------:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| Model A | 0.69 | 2.17 | 1.36 | 0.66 | 0.86 | 0.98 | 1.93 | 0.68 | 1.27 | -0.47 |
| Model B | -1.36 | 1.21 | 1.25 | -0.02 | 2.12 | -0.44 | 0.47 | 0.75 | 2.11 | 1.48 |
| Model C | 0.59 | 0.81 | 0.38 | 0.04 | 0.33 | 0.69 | 0.75 | 1.19 | 0.86 | 0.3 |
| Model D | 0.03 | 0.01 | -0.25 | 1.52 | 0.17 | 0.43 | -0.19 | 1.28 | 0.15 | 0.27 |
| Model E | 0.1 | 0.91 | 0.34 | -0.05 | 0.41 | 0.86 | 0.47 | 1.04 | 0.64 | 0.2 |

This might be difficult to tell, especially if there are more models and predictions. Thankfully, there exist several commonly-used metrics to tackle this problem. Let's use the data from the table as an example.
###Code
import numpy as np
target = np.array([0.55, 0.72, 0.6, 0.54, 0.42, 0.65, 0.44, 0.89, 0.96, 0.38])
predictions = {"A": np.array([0.69, 2.17, 1.36, 0.66, 0.86, 0.98, 1.93, 0.68, 1.27, -0.47]),
"B": np.array([-1.36, 1.21, 1.25, -0.02, 2.12, -0.44, 0.47, 0.75, 2.11, 1.48]),
"C": np.array([0.59, 0.81, 0.38, 0.04, 0.33, 0.69, 0.75, 1.19, 0.86, 0.3]),
"D": np.array([0.03, 0.01, -0.25, 1.52, 0.17, 0.43, -0.19, 1.28, 0.15, 0.27]),
"E": np.array([0.1, 0.91, 0.34, -0.05, 0.41, 0.86, 0.47, 1.04, 0.64, 0.2])}
###Output
_____no_output_____
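###Markdown
 As a toy illustration of what "learning $f$" can look like for the table above (a sketch with made-up rows following the same pattern, not part of the metrics below), we can fit a linear model with ordinary least squares using `np.linalg.lstsq`.
###Code
# toy rows follow the table's pattern: four consecutive integers,
# and the target happens to equal the sum of the four variables
X_toy = np.array([[n, n + 1, n + 2, n + 3] for n in range(1, 7)], dtype=float)
y_toy = X_toy.sum(axis=1)
# least-squares fit of y ~ X w (the columns are collinear, so lstsq
# returns a minimum-norm solution; predictions on these rows are still exact)
w, residuals, rank, sv = np.linalg.lstsq(X_toy, y_toy, rcond=None)
print("learned weights:", w)
print("predictions:", X_toy @ w)
###Output
_____no_output_____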
###Markdown
Mean-Squared Error $MSE = \frac{1}{n} \sum_{i = 1}^{n} (\hat{Y_i} - Y_i)^2$ The mean-squared error is probably the most commonly used metric for regression. It is often set as the default metric in many machine learning packages. It is defined as the average of the squared errors, which means that large errors are penalized proportionally more than small ones.
###Code
def MSE(predicted_target, target):
errors = predicted_target - target
return np.mean(errors**2)
for model_name, predicted_target in predictions.items():
print(f"{model_name}: {MSE(predicted_target, target):.4f}")
###Output
A: 0.6099
B: 1.1255
C: 0.0520
D: 0.3785
E: 0.0857
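###Markdown
 To make the "large errors hurt more" point concrete, here is a small illustrative sketch: two error patterns with the same total absolute error can have very different MSE values.
###Code
# both error patterns have a total absolute error of 2.0 (and the same MAE of 0.5)
errors_spread = np.array([0.5, 0.5, 0.5, 0.5])  # several small errors
errors_single = np.array([2.0, 0.0, 0.0, 0.0])  # one large error
print("MSE with several small errors:", np.mean(errors_spread**2))  # 0.25
print("MSE with one large error:", np.mean(errors_single**2))       # 1.0
###Output
_____no_output_____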
###Markdown
Root Mean-Squared Error $RMSE = \sqrt{\frac{1}{n} \sum_{i = 1}^{n} (\hat{Y_i} - Y_i)^2}$ The root mean-squared error is simply the square root of the mean-squared error. It has the advantage of being expressed in the same units as the target variable, so it can be loosely interpreted as the typical distance between the output and the target.
###Code
def RMSE(predicted_target, target):
return np.sqrt(MSE(predicted_target, target))
for model_name, predicted_target in predictions.items():
print(f"{model_name}: {RMSE(predicted_target, target):.4f}")
###Output
A: 0.7810
B: 1.0609
C: 0.2281
D: 0.6153
E: 0.2927
###Markdown
Mean Absolute Error $MAE = \frac{1}{n} \sum_{i = 1}^{n} |\hat{Y_i} - Y_i|$ As opposed to the mean-squared error, the mean absolute error treats every unit of error as equally bad, so large errors are not penalized disproportionately.
###Code
def MAE(output, target):
errors = output - target
return np.mean(np.abs(errors))
for model_name, predicted_target in predictions.items():
print(f"{model_name}: {MAE(predicted_target, target):.4f}")
###Output
A: 0.6100
B: 0.8820
C: 0.1770
D: 0.5470
E: 0.2390
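###Markdown
 If scikit-learn happens to be installed, these hand-rolled metrics can be cross-checked against its implementations (a sketch; scikit-learn is not otherwise used in this notebook, and note that its functions take arguments in the order `(y_true, y_pred)`).
###Code
# sketch: cross-check MSE and MAE against scikit-learn
from sklearn.metrics import mean_squared_error, mean_absolute_error
for model_name, predicted_target in predictions.items():
    print(f"{model_name}: sklearn MSE={mean_squared_error(target, predicted_target):.4f}, "
          f"MAE={mean_absolute_error(target, predicted_target):.4f}")
###Output
_____no_output_____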
###Markdown
R Squared $R^{2} = 1 - \frac{\sum_{i=1}^{n} (Y_i - \hat{Y_i})^2}{\sum_{i=1}^{n} (Y_i - \bar{Y})^2}$, where $\bar{Y} = \frac{1}{n}\sum_{i=1}^{n} Y_i$ is the mean of the target. R squared is also often referred to as the coefficient of determination, or the explained variance. It represents how much of the target's variance can be explained by the model. A value of 1 is best; lower values are worse, and negative values mean the model performs worse than simply predicting the mean of the target.
###Code
def RSquared(predicted_target, target):
numerator = np.sum((target - predicted_target)**2)
denominator = np.sum((target - np.mean(target))**2)
return 1.0 - (numerator / denominator)
for model_name, predicted_target in predictions.items():
print(f"{model_name}: {RSquared(predicted_target, target):.4f}")
###Output
A: -16.8947
B: -32.0216
C: -0.5265
D: -10.1061
E: -1.5134
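###Markdown
 Note that all five models score a negative $R^2$ on this data. A useful reference point (sketched below, not part of the original notebook): a trivial baseline that always predicts the mean of the target gets exactly $R^2 = 0$, so negative values mean a model is doing worse than that baseline.
###Code
# a constant prediction equal to the target mean scores exactly 0
baseline_prediction = np.full_like(target, np.mean(target))
print(f"Baseline (always predict the mean): {RSquared(baseline_prediction, target):.4f}")
###Output
_____no_output_____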
|
tutorials/W2D2_LinearSystems/W2D2_Tutorial2.ipynb | ###Markdown
Tutorial 2: Markov Processes
**Week 2, Day 2: Linear Systems**
**By Neuromatch Academy**
**Content Creators**: Bing Wen Brunton, Ellie Stradquist
**Content Reviewers**: Norma Kuhn, Karolina Stosio, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom, Ethan Cheng
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
--- Tutorial Objectives
*Estimated timing of tutorial: 45 minutes*
In this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process, elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For some probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time. This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**. The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:
* Understand Markov processes and history dependence.
* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters.
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/snv4m/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Plotting Functions
def plot_switch_simulation(t, x):
fig = plt.figure()
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure()
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure()
plt.plot(time, states[:,0], label='Closed to open')
plt.plot(time, states[:,1], label='Open to closed')
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
--- Section 1: Telegraph Process
###Code
# @title Video 1: Markov Process
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV11C4y1h7Eu", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="xZO6GbU48ns", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This video covers a definition of Markov processes and an introduction to ion channels opening/closing as an example of a telegraph process. Let's consider a Markov process with two states, where switches between the two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$. We simulate the process of changing states as a **Poisson process**. You have seen the Poisson process in the [pre-reqs statistics day](https://compneuro.neuromatch.io/tutorials/W0D5_Statistics/student/W0D5_Tutorial1.html). The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of some event is not known. Importantly, the Poisson process dictates the following points:
1. The probability of some event occurring is _independent from all other events_.
2. The average rate of events within a given time period is constant.
3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously.
In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state change process, we also track at which times throughout the simulation the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches. You briefly saw a Markov process in the [pre-reqs statistics day](https://compneuro.neuromatch.io/tutorials/W0D5_Statistics/student/W0D5_Tutorial2.html#section-1-2-markov-chains). Run the cell below to show the state-change simulation process. Note that a random seed was set in the code block, so re-running the code will produce the same plot. Commenting out that line will produce a different simulation each run.
###Code
# @markdown Execute to simulate state changes
# parameters
T = 5000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
np.random.seed(0) # set random seed
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
plot_switch_simulation(t,x)
###Output
_____no_output_____
###Markdown
Coding Exercise 1: Computing intervals between switches *Referred to in video as exercise 2A* We now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`. We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
##############################################################################
## TODO: Insert your code here to calculate between-state-switch intervals
raise NotImplementedError("Student exercise: need to calculate switch intervals")
##############################################################################
# hint: see np.diff()
inter_switch_intervals = ...
# plot inter-switch intervals
plot_interswitch_interval_histogram(inter_switch_intervals)
# to_remove solution
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
# plot inter-switch intervals
with plt.xkcd():
plot_interswitch_interval_histogram(inter_switch_intervals)
###Output
_____no_output_____
###Markdown
In the next cell, we generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation.
###Code
# @markdown Execute cell to visualize distribution of time spent in each state.
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel');
###Output
_____no_output_____
###Markdown
<!-- Though the system started initially in the Closed ($x=0$) state, over time, it settles into a equilibrium distribution where we can predict on what fraction of time it is Open as a function of the $\mu$ parameters. Before we continue exploring these distributions further, let's first take a look at the this fraction of Open states as a cumulative mean of the state $x$: -->Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of **fraction of time channel is Open**. Let's also take a look at the fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us, at each point in time, the fraction of time the channel has spent in the Open state up to that point.
###Code
# @markdown Execute to visualize cumulative mean of state
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopted some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo 1: Varying transition probability values & T
Using the interactive demo below, explore the state-switch simulation for different transition probability values of states $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for total simulation time length *T*.
1. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same?
2. How does the bar graph of system states change based on these values?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove explanation
"""
1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shifts the distribution towards shorter intervals.
2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Distributional Perspective *Estimated timing to here from start of tutorial: 18 min*
###Code
# @title Video 2: State Transitions
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1uk4y1B7ru", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="U6YRhLuRhHg", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This video serves as an introduction to the telegraph process of ion channels opening/closing with an alternative formulation as a matrix/vector representation of probabilistic state transitions.
Click here for text recap of video
We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state (see diagram in lecture). The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*: $\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$. Each transition probability shown in the matrix is as follows:
1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed.
2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state.
3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state.
4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open.
_Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above where $\mathbf{A}$ had described the function from the state to the time derivative of the state. Coding Exercise 2: Probability Propagation *Referred to in video as exercise 2B* Complete the code below to simulate the propagation of probabilities of closed/open of the ion channel through time. A variable called `x_kp1` (short for $x$ at timestep $k$ plus 1) should be calculated at each step *k* in the loop. However, you should plot $x$.
###Code
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
###################################################################
## TODO: Insert your code here to compute x_kp1 (x at k plus 1)
raise NotImplementedError("Student exercise: need to implement simulation")
## hint: use np.dot(a, b) function to compute the dot product
## of the transition matrix A and the last state in x
## hint 2: use np.vstack to append the latest state to x
###################################################################
# Compute the state of x at time k+1
x_kp1 = ...
# Stack (append) this new state onto x to keep track of x through time steps
x = ...
return x, t
# Set parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# Initial condition: start as Closed
x0 = np.array([[1, 0]])
# Simulate probabilities propagation
x, t = simulate_prob_prop(A, x0, dt, T)
# Visualize
plot_state_probabilities(t,x)
# to_remove solution
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
# Compute the state of x at time k+1
x_kp1 = np.dot(A, x[-1,:])
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
return x, t
# Set parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# Initial condition: start as Closed
x0 = np.array([[1, 0]])
# Simulate probabilities propagation
x, t = simulate_prob_prop(A, x0, dt, T)
# Visualize
with plt.xkcd():
plot_state_probabilities(t,x)
###Output
_____no_output_____
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. This method is useful because we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_. Re-calculating the equilibrium probability of being Open, $\mu_{c2o} / (\mu_{c2o} + \mu_{o2c})$, with this method, we see that it matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
_____no_output_____
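###Markdown
 Before moving on, here is a quick sanity check (a small sketch, not part of the original tutorial) that the matrix formulation conserves probability: every column of $\mathbf{A}$ sums to 1, so the propagated state stays a valid probability distribution at every step.
###Code
# each column of A sums to 1, and the final state probabilities still sum to 1
print("column sums of A:", A.sum(axis=0))
print("sum of final state probabilities:", x[-1, :].sum())
###Output
_____no_output_____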
###Markdown
--- Section 3: Equilibrium of the telegraph process *Estimated timing to here from start of tutorial: 30 min*
###Code
# @title Video 3: Continous vs. Discrete Time Formulation
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1di4y1g7Yc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="csetTTauIh8", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Section 2, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$. As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigenvalues:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
_____no_output_____
###Markdown
Think! 3: Finding a stable state
1. Which of these eigenvalues corresponds to the **stable** (equilibrium) solution?
2. What is the eigenvector of this eigenvalue?
3. How does that explain the equilibrium solutions in the simulation in Section 2 of this tutorial?
_hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
# to_remove explanation
"""
1) Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
2) The eigenvector corresponding to this eigenvalue is the stable solution.
3) To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(open), P(closed)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Section 2.
""";
# normalize each eigenvector so its elements sum to 1; the one whose eigenvalue is 1
# gives the equilibrium probabilities (the other is not a valid probability distribution)
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
###Output
_____no_output_____
###Markdown
Tutorial 2: Markov Processes
**Week 2, Day 2: Linear Systems**
**By Neuromatch Academy**
**Content Creators**: Bing Wen Brunton, Ellie Stradquist
**Content Reviewers**: Norma Kuhn, Karolina Stosio, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom
--- Tutorial Objectives
In this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process, elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For some probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time. This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**. The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:
* Understand Markov processes and history dependence.
* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters.
--- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_switch_simulation(t, x):
fig = plt.figure()
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure()
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure()
plt.plot(time, states[:,0], label='Closed to open')
plt.plot(time, states[:,1], label='Open to closed')
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
--- Section 1: Telegraph Process
###Code
#@title Video 1: Markov Process
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="xZO6GbU48ns", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Let's consider a Markov process with two states, where switches between the two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$. We simulate the process of changing states as a **Poisson process**. The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of some event is not known. Importantly, the Poisson process dictates the following points:
1. The probability of some event occurring is _independent from all other events_.
2. The average rate of events within a given time period is constant.
3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously.
In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state change process, we also track at which times throughout the simulation the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches. **Run the cell below** to show the state-change simulation process. Note that a random seed was set in the code block, so re-running the code will produce the same plot. Commenting out that line will produce a different simulation each run.
###Code
# @title State-change simulation process
# parameters
T = 5000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
np.random.seed(0) # set random seed
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
plot_switch_simulation(t,x)
###Output
_____no_output_____
###Markdown
Exercise 1 (2A): Computing intervals between switchesWe now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`.We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
##############################################################################
## TODO: Insert your code here to calculate between-state-switch intervals,
## and uncomment the last line to plot the histogram
##############################################################################
# hint: see np.diff()
# inter_switch_intervals = ...
# plot_interswitch_interval_histogram(inter_switch_intervals)
# to_remove solution
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
# plot inter-switch intervals
with plt.xkcd():
plot_interswitch_interval_histogram(inter_switch_intervals)
###Output
_____no_output_____
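###Markdown
The histogram above looks roughly exponential. As a small extra check, here is a minimal sketch (assuming `switch_times`, `c2o` and `o2c` from the cells above, and that the simulation started in the Closed state): because the trajectory starts Closed, the intervals alternate between Open dwell times and Closed dwell times, whose means should come out close to $1/\mu_{o2c}$ and $1/\mu_{c2o}$ respectively.
###Code
# Sketch: split the intervals into Open and Closed dwell times.
# The first recorded switch is Closed -> Open, so even-indexed intervals are
# time spent Open and odd-indexed intervals are time spent Closed.
import numpy as np
intervals = np.diff(switch_times)
open_dwells = intervals[0::2]
closed_dwells = intervals[1::2]
print("mean Open dwell  : %.1f (compare 1/o2c = %.1f)" % (open_dwells.mean(), 1/o2c))
print("mean Closed dwell: %.1f (compare 1/c2o = %.1f)" % (closed_dwells.mean(), 1/c2o))
###Output
_____no_output_____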
###Markdown
We can also generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation. **Run the cell below** to visualize the distribution.
###Code
# @title Distribution of time spent in each state.
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel');
###Output
_____no_output_____
###Markdown
Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of **fraction of time the channel is Open**.Let's also take a look at the fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us what fraction of the time the channel has been Open, on average, from the start of the simulation up to each point in time. **Run the cell below**.
###Code
# @title Cumulative mean of state
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
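###Markdown
As a quick cross-check, here is a sketch (assuming `x`, `c2o` and `o2c` from the cells above) comparing the long-run mean of the state with the analytic equilibrium fraction of time Open, $\mu_{c2o} / (\mu_{c2o} + \mu_{o2c})$.
###Code
# Sketch: empirical fraction of time Open vs. the analytic equilibrium value.
import numpy as np
print("empirical fraction Open: %.3f" % np.mean(x))
print("analytic c2o/(c2o+o2c) : %.3f" % (c2o / (c2o + o2c)))
###Output
_____no_output_____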
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopted some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo: Varying transition probability values & TUsing the interactive demo below, explore the state-switch simulation for different values of the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for the total simulation time length *T*. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same? How does the bar graph of system states change based on these values?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove explanation
"""
Discussion:
(1) Does the general shape of the inter-switch interval distribution
change or does it stay relatively the same?
(2) How does the bar graph of system states change based on these values?
Answers:
(1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shift the distribution towards shorter intervals.
(2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Distributional Perspective
###Code
#@title Video 2: State Transitions
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="U6YRhLuRhHg", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state.(see diagram in lecture)The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*:$\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$.Each transition probability shown in the matrix is as follows:1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed. 2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state.3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state. 4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open. _Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above, where $\mathbf{A}$ described the function from the state to the time derivative of the state. Exercise 2 (2B): Probability PropagationComplete the code below to simulate the propagation of the closed/open probabilities of the ion channel through time. A variable called `x_kp1` (short for $x$ at timestep $k$ plus 1) should be calculated at each step *k* in the loop. However, you should plot $x$.
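As a quick sanity check before the exercise, here is one propagation step worked out numerically. This is only a sketch: the `_demo` names are introduced here to avoid clobbering the notebook's variables, and the parameter values simply mirror the ones used in the exercise cell below ($\mu_{c2o}=0.02$, $\mu_{o2c}=0.1$, $dt=0.1$).
###Code
# Sketch: one hand-checked step of the probability propagation.
import numpy as np
c2o_demo, o2c_demo, dt_demo = 0.02, 0.1, 0.1
A_demo = np.array([[1 - c2o_demo*dt_demo, o2c_demo*dt_demo],
                   [c2o_demo*dt_demo,     1 - o2c_demo*dt_demo]])
p0 = np.array([1.0, 0.0])   # start fully Closed: [P(Closed), P(Open)]
p1 = A_demo @ p0            # one discrete step: [0.998, 0.002]
print(p1)
print(p1.sum())             # columns of A_demo sum to 1, so probabilities still sum to 1
###Output
_____no_output_____
###Markdown
Now complete the exercise in the cell below to propagate these probabilities over many steps.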
###Code
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
###################################################################
## TODO: Insert your code here to compute x_kp1 (x at k plus 1)
raise NotImplementedError("Student exercise: need to implement simulation")
## hint: use np.dot(a, b) function to compute the dot product
## of the transition matrix A and the last state in x
## hint 2: use np.vstack to append the latest state to x
###################################################################
# Compute the state of x at time k+1
x_kp1 = ...
# Stack (append) this new state onto x to keep track of x through time steps
x = ...
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
# x, t = simulate_prob_prop(A, x0, dt, T)
# plot_state_probabilities(t,x)
# to_remove solution
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
# Compute the state of x at time k+1
x_kp1 = np.dot(A, x[-1,:])
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
x, t = simulate_prob_prop(A, x0, dt, T)
with plt.xkcd():
plot_state_probabilities(t,x)
###Output
_____no_output_____
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. This method is useful in that we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_.Re-calculating the equilibrium probability of being Open, $\mu_{c2o} / (\mu_{c2o} + \mu_{o2c})$, with this method, we see that it matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
_____no_output_____
###Markdown
--- Section 3: Equilibrium of the telegraph process
###Code
#@title Video 3: Continuous vs. Discrete Time Formulation
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="csetTTauIh8", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Section 2, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$.As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigenvalues:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
_____no_output_____
###Markdown
Exercise 3 (2C): Finding a stable stateWhich of these eigenvalues corresponds to the **stable** (equilibrium) solution? What is the eigenvector of this eigenvalue? How does that explain the equilibrium solutions in the simulation in Section 2 of this tutorial?_hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
###################################################################
## Insert your thoughts here
###################################################################
# to_remove explanation
"""
Discussion:
Which of the eigenvalues corresponds to the stable solution?
What is the eigenvector of this eigenvalue?
How does that explain the equilibrium solutions in Section 2?
Recommendation:
Ask the students to work in small groups (of 2 or 3) to discuss these questions.
Answers:
Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
The eigenvector corresponding to this eigenvalue is the stable solution.
To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(closed), P(open)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Section 2.
""";
# the eigenvector whose eigenvalue is 1 gives the equilibrium; the other does not describe a probability distribution
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
###Output
_____no_output_____
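###Markdown
As a further check, here is a sketch (assuming `lam`, `v`, `c2o` and `o2c` from the cells above) that picks out the eigenvector whose eigenvalue is numerically closest to 1, rescales it to sum to 1, and compares it with the analytic stationary distribution $[\mu_{o2c}, \mu_{c2o}] / (\mu_{c2o} + \mu_{o2c})$.
###Code
# Sketch: eigenvalue-1 eigenvector vs. the analytic equilibrium distribution.
import numpy as np
idx = np.argmin(np.abs(lam - 1))               # eigenvalue closest to 1
p_eq = v[:, idx] / v[:, idx].sum()             # rescale so the entries sum to 1
analytic = np.array([o2c, c2o]) / (c2o + o2c)  # [P(Closed), P(Open)] at equilibrium
print("from eigenvector:", p_eq)
print("analytic        :", analytic)
###Output
_____no_output_____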
###Markdown
Neuromatch Academy 2020, Week 2, Day 2, Tutorial 2 Markov Processes**Content Creators**: Bing Wen Brunton, Ellie Stradquist**Content Reviewers**: Norma Kuhn, Karolina Stosio, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom --- Tutorial ObjectivesIn this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process, elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For some probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time.This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**.The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:* Understand Markov processes and history dependence.* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters. --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_switch_simulation(t, x):
fig = plt.figure()
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure()
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure()
plt.plot(time, states[:,0], label='Closed to open')
plt.plot(time, states[:,1], label='Open to closed')
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
--- Section 1: Telegraph Process
###Code
#@title Video 1: Markov Process
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV11C4y1h7Eu', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV11C4y1h7Eu
###Markdown
Let's consider a Markov process with two states, where switches between the two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$.We simulate the process of changing states as a **Poisson process**. The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of any particular event is not. Importantly, the Poisson process dictates the following points: 1. The probability of some event occurring is _independent of all other events_.2. The average rate of events within a given time period is constant.3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously. In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state-change process, we also track the times throughout the simulation at which the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches. **Run the cell below** to show the state-change simulation process. Note that a random seed was set in the code block, so re-running the code will produce the same plot. Commenting out that line will produce a different simulation each run.
###Code
# @title State-change simulation process
# parameters
T = 5000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
np.random.seed(0) # set random seed
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
plot_switch_simulation(t,x)
###Output
_____no_output_____
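###Markdown
One way to see the Markov property directly in the simulated trajectory is to estimate the one-step transition probabilities from the data. The sketch below assumes `x`, `c2o` and `o2c` from the cell above, and notes that the simulation call above passes a timestep of 0.1; the estimates should land close to the generating per-step probabilities $\mu_{c2o}\,dt$ and $\mu_{o2c}\,dt$.
###Code
# Sketch: empirical one-step transition probabilities from the trajectory.
import numpy as np
dt_sim = 0.1                                # timestep passed to ion_channel_opening above
prev, curr = x[:-1], x[1:]
p_c2o_hat = np.mean(curr[prev == 0] == 1)   # P(Open at k+1 | Closed at k)
p_o2c_hat = np.mean(curr[prev == 1] == 0)   # P(Closed at k+1 | Open at k)
print("estimated c2o*dt: %.4f (generating value %.4f)" % (p_c2o_hat, c2o*dt_sim))
print("estimated o2c*dt: %.4f (generating value %.4f)" % (p_o2c_hat, o2c*dt_sim))
###Output
_____no_output_____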
###Markdown
Exercise 1 (2A): Computing intervals between switchesWe now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`.We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
##############################################################################
## TODO: Insert your code here to calculate between-state-switch intervals,
## and uncomment the last line to plot the histogram
##############################################################################
# hint: see np.diff()
# inter_switch_intervals = ...
# plot_interswitch_interval_histogram(inter_switch_intervals)
# to_remove solution
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
# plot inter-switch intervals
with plt.xkcd():
plot_interswitch_interval_histogram(inter_switch_intervals)
###Output
_____no_output_____
###Markdown
We can also generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation. **Run the cell below** to visualize the distribution.
###Code
# @title Distribution of time spent in each state.
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel');
###Output
_____no_output_____
###Markdown
Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of **fraction of time the channel is Open**.Let's also take a look at the fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us what fraction of the time the channel has been Open, on average, from the start of the simulation up to each point in time. **Run the cell below**.
###Code
# @title Cumulative mean of state
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
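###Markdown
To get a feel for run-to-run variability, the sketch below (assuming `ion_channel_opening`, `c2o`, `o2c` and `T` from the cells above) repeats the simulation with a few different seeds and compares the long-run fraction of time Open with the analytic value.
###Code
# Sketch: re-run the telegraph simulation with different seeds.
import numpy as np
fractions_open = []
for seed in range(5):
    np.random.seed(seed)
    _, x_run, _ = ion_channel_opening(c2o, o2c, T, .1)  # same timestep as above
    fractions_open.append(np.mean(x_run))
print("fractions open:", np.round(fractions_open, 3))
print("analytic value:", round(c2o / (c2o + o2c), 3))
###Output
_____no_output_____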
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopted some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo: Varying transition probability values & TUsing the interactive demo below, explore the state-switch simulation for different values of the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for the total simulation time length *T*. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same? How does the bar graph of system states change based on these values?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove explanation
"""
Discussion:
(1) Does the general shape of the inter-switch interval distribution
change or does it stay relatively the same?
(2) How does the bar graph of system states change based on these values?
Answers:
(1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shift the distribution towards shorter intervals.
(2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Distributional Perspective
###Code
#@title Video 2: State Transitions
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1uk4y1B7ru', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1uk4y1B7ru
###Markdown
We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state.(see diagram in lecture)The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*:$\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$.Each transition probability shown in the matrix is as follows:1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed. 2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state.3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state. 4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open. _Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above, where $\mathbf{A}$ described the function from the state to the time derivative of the state. Exercise 2 (2B): Probability PropagationComplete the code below to simulate the propagation of the closed/open probabilities of the ion channel through time. A variable called `x_kp1` (short for $x$ at timestep $k$ plus 1) should be calculated at each step *k* in the loop. However, you should plot $x$.
###Code
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
###################################################################
## TODO: Insert your code here to compute x_kp1 (x at k plus 1)
raise NotImplementedError("Student exercise: need to implement simulation")
## hint: use np.dot(a, b) function to compute the dot product
## of the transition matrix A and the last state in x
## hint 2: use np.vstack to append the latest state to x
###################################################################
# Compute the state of x at time k+1
x_kp1 = ...
# Stack (append) this new state onto x to keep track of x through time steps
x = ...
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
# x, t = simulate_prob_prop(A, x0, dt, T)
# plot_state_probabilities(t,x)
# to_remove solution
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
# Compute the state of x at time k+1
x_kp1 = np.dot(A, x[-1,:])
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
x, t = simulate_prob_prop(A, x0, dt, T)
with plt.xkcd():
plot_state_probabilities(t,x)
###Output
_____no_output_____
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. This method is useful in that we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_.Re-calculating the equilibrium probability of being Open, $\mu_{c2o} / (\mu_{c2o} + \mu_{o2c})$, with this method, we see that it matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
Probability of state c2o: 0.167
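###Markdown
The full equilibrium vector can also be written down analytically: $[P(\text{Closed}), P(\text{Open})] = [\mu_{o2c}, \mu_{c2o}] / (\mu_{c2o} + \mu_{o2c})$. The sketch below (assuming `x`, `c2o` and `o2c` from the cells above) compares it with the last row of the propagated probabilities.
###Code
# Sketch: final propagated probabilities vs. the analytic equilibrium vector.
import numpy as np
print("propagated x[-1,:]:", x[-1, :])
print("analytic          :", np.array([o2c, c2o]) / (c2o + o2c))
###Output
_____no_output_____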
###Markdown
--- Section 3: Equilibrium of the telegraph process
###Code
#@title Video 3: Continuous vs. Discrete Time Formulation
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1di4y1g7Yc', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1di4y1g7Yc
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Section 2, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$.As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigenvalues:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
Eigenvalues: [1. 0.988]
Eigenvector 1: [0.98058068 0.19611614]
Eigenvector 2: [-0.70710678 0.70710678]
###Markdown
Exercise 3 (2C): Finding a stable stateWhich of these eigenvalues corresponds to the **stable** (equilibrium) solution? What is the eigenvector of this eigenvalue? How does that explain the equilibrium solutions in the simulation in Section 2 of this tutorial?_hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
###################################################################
## Insert your thoughts here
###################################################################
# to_remove explanation
"""
Discussion:
Which of the eigenvalues corresponds to the stable solution?
What is the eigenvector of this eigenvalue?
How does that explain the equilibrium solutions in Section 2?
Recommendation:
Ask the students to work in small groups (of 2 or 3) to discuss these questions.
Answers:
Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
The eigenvector corresponding to this eigenvalue is the stable solution.
To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(closed), P(open)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Section 2.
""";
# the eigenvector whose eigenvalue is 1 gives the equilibrium; the other does not describe a probability distribution
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
###Output
[0.83333333 0.16666667]
[-1.06150861e+15 1.06150861e+15]
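###Markdown
A compact way to see why this eigenvector is the equilibrium (a sketch, assuming `A` and `eigenvector1` from the cells above): multiplying the rescaled eigenvector by $\mathbf{A}$ should leave it unchanged, which is exactly the fixed-point property of an equilibrium distribution.
###Code
# Sketch: the eigenvalue-1 eigenvector, rescaled to sum to 1, is a fixed point of A.
import numpy as np
p_eq = eigenvector1 / eigenvector1.sum()
print("p_eq        :", p_eq)
print("A @ p_eq    :", A @ p_eq)
print("fixed point?:", np.allclose(A @ p_eq, p_eq))
###Output
_____no_output_____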
###Markdown
Tutorial 2: Markov Processes**Week 2, Day 2: Linear Systems****By Neuromatch Academy****Content Creators**: Bing Wen Brunton, Ellie Stradquist**Content Reviewers**: Norma Kuhn, Karolina Stosio, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesIn this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process, elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For some probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time.This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**.The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:* Understand Markov processes and history dependence.* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters. --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_switch_simulation(t, x):
fig = plt.figure()
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure()
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure()
plt.plot(time, states[:,0], label='Closed to open')
plt.plot(time, states[:,1], label='Open to closed')
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
--- Section 1: Telegraph Process
###Code
# @title Video 1: Markov Process
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV11C4y1h7Eu", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="xZO6GbU48ns", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Let's consider a Markov process with two states, where switches between the two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$.We simulate the process of changing states as a **Poisson process**. The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of any particular event is not. Importantly, the Poisson process dictates the following points: 1. The probability of some event occurring is _independent of all other events_.2. The average rate of events within a given time period is constant.3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously. In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state-change process, we also track the times throughout the simulation at which the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches. **Run the cell below** to show the state-change simulation process. Note that a random seed was set in the code block, so re-running the code will produce the same plot. Commenting out that line will produce a different simulation each run.
###Code
# @title State-change simulation process
# parameters
T = 5000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
np.random.seed(0) # set random seed
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
plot_switch_simulation(t,x)
###Output
_____no_output_____
###Markdown
Exercise 1 (2A): Computing intervals between switchesWe now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`.We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
##############################################################################
## TODO: Insert your code here to calculate between-state-switch intervals,
## and uncomment the last line to plot the histogram
##############################################################################
# hint: see np.diff()
# inter_switch_intervals = ...
# plot_interswitch_interval_histogram(inter_switch_intervals)
# to_remove solution
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
# plot inter-switch intervals
with plt.xkcd():
plot_interswitch_interval_histogram(inter_switch_intervals)
###Output
_____no_output_____
###Markdown
We can also generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation. **Run the cell below** to visualize the distribution.
###Code
# @title Distribution of time spent in each state.
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel');
###Output
_____no_output_____
###Markdown
Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of **fraction of time the channel is Open**.Let's also take a look at the fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us what fraction of the time the channel has been Open, on average, from the start of the simulation up to each point in time. **Run the cell below**.
###Code
# @title Cumulative mean of state
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
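###Markdown
The "mean state over some window of time" mentioned above can also be computed explicitly with a moving average. The sketch below assumes `t` and `x` from the cells above; the window length of 5000 simulation steps is an arbitrary illustrative choice.
###Code
# Sketch: fraction of time spent Open within a sliding window (moving average).
import numpy as np
import matplotlib.pyplot as plt
window = 5000                                   # number of simulation steps per window
kernel = np.ones(window) / window
fraction_open = np.convolve(x, kernel, mode='valid')
plt.plot(t[:len(fraction_open)], fraction_open)
plt.xlabel('time')
plt.ylabel('fraction of window spent Open');
###Output
_____no_output_____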
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopted some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo: Varying transition probability values & TUsing the interactive demo below, explore the state-switch simulation for different values of the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for the total simulation time length *T*. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same? How does the bar graph of system states change based on these values?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove explanation
"""
Discussion:
(1) Does the general shape of the inter-switch interval distribution
change or does it stay relatively the same?
(2) How does the bar graph of system states change based on these values?
Answers:
(1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shift the distribution towards shorter intervals.
(2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Distributional Perspective
###Code
# @title Video 2: State Transitions
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1uk4y1B7ru", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="U6YRhLuRhHg", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state.(see diagram in lecture)The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*:$\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$.Each transition probability shown in the matrix is as follows:1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed. 2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state.3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state. 4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open. _Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above, where $\mathbf{A}$ described the function from the state to the time derivative of the state. Exercise 2 (2B): Probability PropagationComplete the code below to simulate the propagation of the closed/open probabilities of the ion channel through time. A variable called `x_kp1` (short for $x$ at timestep $k$ plus 1) should be calculated at each step *k* in the loop. However, you should plot $x$.
###Code
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
###################################################################
## TODO: Insert your code here to compute x_kp1 (x at k plus 1)
raise NotImplementedError("Student exercise: need to implement simulation")
## hint: use np.dot(a, b) function to compute the dot product
## of the transition matrix A and the last state in x
## hint 2: use np.vstack to append the latest state to x
###################################################################
# Compute the state of x at time k+1
x_kp1 = ...
# Stack (append) this new state onto x to keep track of x through time steps
x = ...
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
# x, t = simulate_prob_prop(A, x0, dt, T)
# plot_state_probabilities(t,x)
# to_remove solution
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
# Compute the state of x at time k+1
x_kp1 = np.dot(A, x[-1,:])
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
x, t = simulate_prob_prop(A, x0, dt, T)
with plt.xkcd():
plot_state_probabilities(t,x)
###Output
_____no_output_____
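###Markdown
Repeatedly applying $\mathbf{A}$ is the same as taking a matrix power, so the equilibrium can also be read off in one line. This is a sketch (assuming `A` and `x0` from the cells above); the number of steps is simply chosen large enough for the transient to die out.
###Code
# Sketch: many propagation steps at once via a matrix power.
import numpy as np
A_many = np.linalg.matrix_power(A, 5000)  # 5000 steps of duration dt
print(A_many @ x0.T)                      # ~ [[0.833], [0.167]], independent of x0
###Output
_____no_output_____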
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. This method is useful in that we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_.Re-calculating the equilibrium probability of being Open, $\mu_{c2o} / (\mu_{c2o} + \mu_{o2c})$, with this method, we see that it matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
_____no_output_____
###Markdown
--- Section 3: Equilibrium of the telegraph process
###Code
# @title Video 3: Continuous vs. Discrete Time Formulation
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1di4y1g7Yc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="csetTTauIh8", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Section 2, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$.As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigenvalues:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
_____no_output_____
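###Markdown
To connect with the continuous-time formulation in the video, here is a sketch (assuming `A`, `c2o`, `o2c` and `dt` from the cells above): the discrete-time matrix can be written as $\mathbf{A} = \mathbf{I} + \mathbf{Q}\,dt$, where $\mathbf{Q}$ is the continuous-time rate matrix, and the equilibrium distribution lies in the null space of $\mathbf{Q}$.
###Code
# Sketch: relate the discrete-time matrix A to a continuous-time rate matrix Q.
import numpy as np
Q = np.array([[-c2o,  o2c],
              [ c2o, -o2c]])
print("A == I + Q*dt ?", np.allclose(A, np.eye(2) + Q*dt))
p_eq = np.array([o2c, c2o]) / (c2o + o2c)  # analytic equilibrium [P(Closed), P(Open)]
print("Q @ p_eq =", Q @ p_eq)              # ~ [0, 0]: equilibrium is in Q's null space
###Output
_____no_output_____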
###Markdown
Exercise 3 (2C): Finding a stable stateWhich of these eigenvalues corresponds to the **stable** (equilibrium) solution? What is the eigenvector of this eigenvalue? How does that explain the equilibrium solutions in the simulation in Section 2 of this tutorial?_hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
###################################################################
## Insert your thoughts here
###################################################################
# to_remove explanation
"""
Discussion:
Which of the eigenvalues corresponds to the stable solution?
What is the eigenvector of this eigenvalue?
How does that explain the equilibrium solutions in Section 2?
Recommendation:
Ask the students to work in small groups (of 2 or 3) to discuss these questions.
Answers:
Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
The eigenvector corresponding to this eigenvalue is the stable solution.
To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(closed), P(open)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Section 2.
""";
# the eigenvector whose eigenvalue is 1 gives the equilibrium; the other does not describe a probability distribution
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
###Output
_____no_output_____
###Markdown
Neuromatch Academy 2020, Week 2, Day 2, Tutorial 2 Markov Processes**Content Creators**: Bing Wen Brunton, Ellie Stradquist**Content Reviewers**: Norma Kuhn, Karolina Stosio, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom --- Tutorial ObjectivesIn this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process, elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For some probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time.This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**.The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:* Understand Markov processes and history dependence.* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters. --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_switch_simulation(t, x):
fig = plt.figure()
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure()
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure()
  plt.plot(time, states[:,0], label='Closed')  # states[:,0] is P(closed)
  plt.plot(time, states[:,1], label='Open')    # states[:,1] is P(open)
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
--- Section 1: Telegraph Process
###Code
#@title Video 1: Markov Process
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="xZO6GbU48ns", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Let's consider a Markov process with two states, where switches between the two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$. We simulate the process of changing states as a **Poisson process**. The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of some event is not known. Importantly, the Poisson process dictates the following points: 1. The probability of some event occurring is _independent from all other events_. 2. The average rate of events within a given time period is constant. 3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously. In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state change process, we also track at which times throughout the simulation the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches. **Run the cell below** to show the state-change simulation process. Note that a random seed was set in the code block, so re-running the code will produce the same plot. Commenting out that line will produce a different simulation each run.
###Code
# @title State-change simulation process
# parameters
T = 5000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
np.random.seed(0) # set random seed
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)  # note: a timestep of 0.1 is passed here, not the dt = 0.001 defined above
plot_switch_simulation(t,x)
###Output
_____no_output_____
###Markdown
Exercise 1 (2A): Computing intervals between switchesWe now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`.We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
##############################################################################
## TODO: Insert your code here to calculate between-state-switch intervals,
## and uncomment the last line to plot the histogram
##############################################################################
# hint: see np.diff()
# inter_switch_intervals = ...
# plot_interswitch_interval_histogram(inter_switch_intervals)
# to_remove solution
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
# plot inter-switch intervals
with plt.xkcd():
plot_interswitch_interval_histogram(inter_switch_intervals)
###Output
_____no_output_____
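###Markdown
 As an optional extension (a sketch, not part of the original exercise): because the channel starts in the Closed state, the switch times alternate Closed-to-Open, Open-to-Closed, and so on. The even-indexed intervals are therefore Open dwell times and the odd-indexed ones are Closed dwell times, and for Poisson switching their means should be roughly $1/\mu_{o2c}$ and $1/\mu_{c2o}$, respectively.
###Code
# a sketch using c2o, o2c and inter_switch_intervals defined above
open_dwells = inter_switch_intervals[0::2]    # time spent Open before closing
closed_dwells = inter_switch_intervals[1::2]  # time spent Closed before opening
print("mean Open dwell:   %.1f (roughly 1/o2c = %.1f)" % (open_dwells.mean(), 1/o2c))
print("mean Closed dwell: %.1f (roughly 1/c2o = %.1f)" % (closed_dwells.mean(), 1/c2o))
###Output
_____no_output_____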
###Markdown
We can also generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation. **Run the cell below** to visualize the distribution.
###Code
# @title Distribution of time spent in each state.
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel');
###Output
_____no_output_____
###Markdown
Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of the **fraction of time the channel is Open**. Let's also take a look at the fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us what fraction of the time steps so far the channel has spent Open, at each point in the simulation. **Run the cell below**.
###Code
# @title Cumulative mean of state
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopted some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo: Varying transition probability values & T. Using the interactive demo below, explore the state-switch simulation for different transition probability values of states $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for the total simulation time length *T*. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same? How does the bar graph of system states change based on these values?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove explanation
"""
Discussion:
(1) Does the general shape of the inter-switch interval distribution
change or does it stay relatively the same?
(2) How does the bar graph of system states change based on these values?
Answers:
(1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shifts the distribution towards shorter intervals.
(2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Distributional Perspective
###Code
#@title Video 2: State Transitions
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="U6YRhLuRhHg", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state (see diagram in lecture). The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*: $\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$. Each transition probability shown in the matrix is as follows: 1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed. 2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state. 3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state. 4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open. _Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above where $\mathbf{A}$ had described the function from the state to the time derivative of the state. Exercise 2 (2B): Probability Propagation. Complete the code below to simulate the propagation of probabilities of the ion channel being closed/open through time. A variable called `x_kp1` (short for $x$ at timestep $k$ plus 1) should be calculated at each step *k* in the loop. However, you should plot $x$.
###Code
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
###################################################################
## TODO: Insert your code here to compute x_kp1 (x at k plus 1)
raise NotImplementedError("Student exercise: need to implement simulation")
## hint: use np.dot(a, b) function to compute the dot product
## of the transition matrix A and the last state in x
## hint 2: use np.vstack to append the latest state to x
###################################################################
# Compute the state of x at time k+1
x_kp1 = ...
# Stack (append) this new state onto x to keep track of x through time steps
x = ...
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
# x, t = simulate_prob_prop(A, x0, dt, T)
# plot_state_probabilities(t,x)
# to_remove solution
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
# Compute the state of x at time k+1
x_kp1 = np.dot(A, x[-1,:])
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
x, t = simulate_prob_prop(A, x0, dt, T)
with plt.xkcd():
plot_state_probabilities(t,x)
###Output
_____no_output_____
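###Markdown
 One property worth checking (a sketch, not part of the original exercise): each column of $\mathbf{A}$ sums to 1, so multiplying by $\mathbf{A}$ conserves total probability and every row of `x` stays normalized throughout the simulation.
###Code
# a sketch using A and x from the solution above
print("column sums of A:", A.sum(axis=0))  # both should equal 1
row_sums = x.sum(axis=1)
print("row sums of x: min = %.12f, max = %.12f" % (row_sums.min(), row_sums.max()))
###Output
_____no_output_____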
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. This method is useful in that we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_. Re-calculating the equilibrium probability of being Open, $\mu_{c2o} / (\mu_{c2o} + \mu_{o2c})$, we see that this matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
_____no_output_____
###Markdown
--- Section 3: Equilibrium of the telegraph process
###Code
#@title Video 3: Continuous vs. Discrete Time Formulation
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="csetTTauIh8", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Section 2, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$.As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigenvalues:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
_____no_output_____
###Markdown
Exercise 3 (2C): Finding a stable state. Which of these eigenvalues corresponds to the **stable** (equilibrium) solution? What is the eigenvector of this eigenvalue? How does that explain the equilibrium solutions in the simulation in Section 2 of this tutorial? _Hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
###################################################################
## Insert your thoughts here
###################################################################
# to_remove explanation
"""
Discussion:
Which of the eigenvalues corresponds to the stable solution?
What is the eigenvector of this eigenvalue?
How does that explain the equilibrium solutions in Section 2?
Recommendation:
Ask the students to work in small groups (of 2 or 3) to discuss these questions.
Answers:
Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
The eigenvector corresponding to this eigenvalue is the stable solution.
To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(closed), P(open)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Section 2.
""";
# the eigenvector whose eigenvalue is 1 gives the equilibrium; the other does not describe a probability distribution
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
###Output
_____no_output_____
###Markdown
Neuromatch Academy 2020, Week 2, Day 2, Tutorial 2 Markov Processes**Content Creators**: Bing Wen Brunton, Ellie Stradquist**Content Reviewers**: Norma Kuhn, Karolina Stosio, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom --- Tutorial ObjectivesIn this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process, elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For some probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time.This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**.The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:* Understand Markov processes and history dependence.* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters. --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_switch_simulation(t, x):
fig = plt.figure()
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure()
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure()
  plt.plot(time, states[:,0], label='Closed')  # states[:,0] is P(closed)
  plt.plot(time, states[:,1], label='Open')    # states[:,1] is P(open)
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
--- Section 1: Telegraph Process
###Code
#@title Video 1: Markov Process
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="xZO6GbU48ns", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/xZO6GbU48ns
###Markdown
Let's consider a Markov process with two states, where switches between the two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$. We simulate the process of changing states as a **Poisson process**. The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of some event is not known. Importantly, the Poisson process dictates the following points: 1. The probability of some event occurring is _independent from all other events_. 2. The average rate of events within a given time period is constant. 3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously. In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state change process, we also track at which times throughout the simulation the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches. **Run the cell below** to show the state-change simulation process. Note that a random seed was set in the code block, so re-running the code will produce the same plot. Commenting out that line will produce a different simulation each run.
###Code
# @title State-change simulation process
# parameters
T = 5000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
np.random.seed(0) # set random seed
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)  # note: a timestep of 0.1 is passed here, not the dt = 0.001 defined above
plot_switch_simulation(t,x)
###Output
_____no_output_____
###Markdown
Exercise 1 (2A): Computing intervals between switchesWe now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`.We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
##############################################################################
## TODO: Insert your code here to calculate between-state-switch intervals,
## and uncomment the last line to plot the histogram
##############################################################################
# hint: see np.diff()
# inter_switch_intervals = ...
# plot_interswitch_interval_histogram(inter_switch_intervals)
# to_remove solution
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
# plot inter-switch intervals
with plt.xkcd():
plot_interswitch_interval_histogram(inter_switch_intervals)
###Output
_____no_output_____
###Markdown
We can also generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation. **Run the cell below** to visualize the distribution.
###Code
# @title Distribution of time spent in each state.
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel');
###Output
_____no_output_____
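###Markdown
 As a quick numerical check (a sketch, not part of the original tutorial): the fraction of time steps spent Open should be roughly $\mu_{c2o} / (\mu_{c2o} + \mu_{o2c})$ once the initial transient has passed.
###Code
# a sketch using `counts` from the bar-graph cell above
# (np.unique sorts the state values, so counts is ordered [Closed, Open])
print("fraction of time Open (simulation): %.3f" % (counts[1] / counts.sum()))
print("fraction of time Open (predicted):  %.3f" % (c2o / (c2o + o2c)))
###Output
_____no_output_____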
###Markdown
Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of the **fraction of time the channel is Open**. Let's also take a look at the fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us what fraction of the time steps so far the channel has spent Open, at each point in the simulation. **Run the cell below**.
###Code
# @title Cumulative mean of state
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopted some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo: Varying transition probability values & T. Using the interactive demo below, explore the state-switch simulation for different transition probability values of states $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for the total simulation time length *T*. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same? How does the bar graph of system states change based on these values?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove explanation
"""
Discussion:
(1) Does the general shape of the inter-switch interval distribution
change or does it stay relatively the same?
(2) How does the bar graph of system states change based on these values?
Answers:
(1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shifts the distribution towards shorter intervals.
(2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Distributional Perspective
###Code
#@title Video 2: State Transitions
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="U6YRhLuRhHg", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/U6YRhLuRhHg
###Markdown
We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state (see diagram in lecture). The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*: $\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$. Each transition probability shown in the matrix is as follows: 1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed. 2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state. 3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state. 4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open. _Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above where $\mathbf{A}$ had described the function from the state to the time derivative of the state. Exercise 2 (2B): Probability Propagation. Complete the code below to simulate the propagation of probabilities of the ion channel being closed/open through time. A variable called `x_kp1` (short for $x$ at timestep $k$ plus 1) should be calculated at each step *k* in the loop. However, you should plot $x$.
###Code
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
###################################################################
## TODO: Insert your code here to compute x_kp1 (x at k plus 1)
raise NotImplementedError("Student exercise: need to implement simulation")
## hint: use np.dot(a, b) function to compute the dot product
## of the transition matrix A and the last state in x
## hint 2: use np.vstack to append the latest state to x
###################################################################
# Compute the state of x at time k+1
x_kp1 = ...
# Stack (append) this new state onto x to keep track of x through time steps
x = ...
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
# x, t = simulate_prob_prop(A, x0, dt, T)
# plot_state_probabilities(t,x)
# to_remove solution
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
# Compute the state of x at time k+1
x_kp1 = np.dot(A, x[-1,:])
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
x, t = simulate_prob_prop(A, x0, dt, T)
with plt.xkcd():
plot_state_probabilities(t,x)
###Output
_____no_output_____
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. This method is useful in that we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_. Re-calculating the equilibrium probability of being Open, $\mu_{c2o} / (\mu_{c2o} + \mu_{o2c})$, we see that this matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
Probability of state c2o: 0.167
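###Markdown
 Another way to reach the same equilibrium (a sketch, not part of the original exercise): instead of stepping through the loop, we can apply the transition matrix many times at once with `np.linalg.matrix_power`; $\mathbf{A}^n x_0$ converges to the same equilibrium distribution [P(closed), P(open)].
###Code
# a sketch using A and x0 from the cells above; x0[0] is the length-2
# probability vector [P(closed), P(open)] at time 0
A_n = np.linalg.matrix_power(A, 5000)
print("A^5000 @ x0:         ", A_n @ x0[0])
print("analytic equilibrium:", np.array([o2c, c2o]) / (c2o + o2c))
###Output
_____no_output_____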
###Markdown
--- Section 3: Equilibrium of the telegraph process
###Code
#@title Video 3: Continuous vs. Discrete Time Formulation
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="csetTTauIh8", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/csetTTauIh8
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Section 2, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$.As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigenvalues:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
Eigenvalues: [1. 0.988]
Eigenvector 1: [0.98058068 0.19611614]
Eigenvector 2: [-0.70710678 0.70710678]
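###Markdown
 For this particular $2 \times 2$ matrix the eigendecomposition can also be written down by hand (a sketch, not part of the original tutorial): the eigenvalues are $1$ and $1 - (\mu_{c2o} + \mu_{o2c})\,dt$, and the eigenvector with eigenvalue $1$ is proportional to $[\mu_{o2c}, \mu_{c2o}]$, i.e. [P(closed), P(open)] once normalized to sum to 1.
###Code
# a sketch using c2o, o2c and dt from the cells above
print("analytic eigenvalues: ", np.array([1.0, 1 - (c2o + o2c)*dt]))
print("analytic equilibrium [P(closed), P(open)]:", np.array([o2c, c2o]) / (c2o + o2c))
###Output
_____no_output_____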
###Markdown
Exercise 3 (2C): Finding a stable state. Which of these eigenvalues corresponds to the **stable** (equilibrium) solution? What is the eigenvector of this eigenvalue? How does that explain the equilibrium solutions in the simulation in Section 2 of this tutorial? _Hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
###################################################################
## Insert your thoughts here
###################################################################
# to_remove explanation
"""
Discussion:
Which of the eigenvalues corresponds to the stable solution?
What is the eigenvector of this eigenvalue?
How does that explain the equilibrium solutions in Section 2?
Recommendation:
Ask the students to work in small groups (of 2 or 3) to discuss these questions.
Answers:
Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
The eigenvector corresponding to this eigenvalue is the stable solution.
To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(closed), P(open)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Section 2.
""";
# the eigenvector whose eigenvalue is 1 gives the equilibrium; the other does not describe a probability distribution
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
###Output
[0.83333333 0.16666667]
[-1.06150861e+15 1.06150861e+15]
###Markdown
[](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D2_LinearSystems/W2D2_Tutorial2.ipynb) Tutorial 2: Markov Processes**Week 2, Day 2: Linear Systems****By Neuromatch Academy****Content Creators**: Bing Wen Brunton, Ellie Stradquist**Content Reviewers**: Norma Kuhn, Karolina Stosio, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesIn this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process, elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For some probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time.This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**.The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:* Understand Markov processes and history dependence.* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters. --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_switch_simulation(t, x):
fig = plt.figure()
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure()
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure()
  plt.plot(time, states[:,0], label='Closed')  # states[:,0] is P(closed)
  plt.plot(time, states[:,1], label='Open')    # states[:,1] is P(open)
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
--- Section 1: Telegraph Process
###Code
# @title Video 1: Markov Process
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV11C4y1h7Eu", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="xZO6GbU48ns", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Let's consider a Markov process with two states, where switches between the two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$. We simulate the process of changing states as a **Poisson process**. The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of some event is not known. Importantly, the Poisson process dictates the following points: 1. The probability of some event occurring is _independent from all other events_. 2. The average rate of events within a given time period is constant. 3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously. In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state change process, we also track at which times throughout the simulation the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches. **Run the cell below** to show the state-change simulation process. Note that a random seed was set in the code block, so re-running the code will produce the same plot. Commenting out that line will produce a different simulation each run.
###Code
# @title State-change simulation process
# parameters
T = 5000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
np.random.seed(0) # set random seed
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)  # note: a timestep of 0.1 is passed here, not the dt = 0.001 defined above
plot_switch_simulation(t,x)
###Output
_____no_output_____
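###Markdown
 As an optional aside (a sketch, not part of the original tutorial): with a per-step opening probability of $\mu_{c2o} \cdot dt$, the number of steps the channel stays Closed is geometrically distributed, so the Closed dwell time is approximately exponential with mean $1/\mu_{c2o}$. We can draw such dwell times directly instead of stepping through the whole simulation.
###Code
# a sketch: draw Closed dwell times directly from a geometric distribution
# (the simulation above was run with a timestep of 0.1, so we use that here)
dt_sim = 0.1
n_steps_closed = np.random.geometric(p=c2o * dt_sim, size=10000)
closed_dwell_times = n_steps_closed * dt_sim
print("mean Closed dwell time: %.1f (roughly 1/c2o = %.1f)"
      % (closed_dwell_times.mean(), 1 / c2o))
###Output
_____no_output_____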
###Markdown
Exercise 1 (2A): Computing intervals between switchesWe now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`.We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
##############################################################################
## TODO: Insert your code here to calculate between-state-switch intervals,
## and uncomment the last line to plot the histogram
##############################################################################
# hint: see np.diff()
# inter_switch_intervals = ...
# plot_interswitch_interval_histogram(inter_switch_intervals)
# to_remove solution
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
# plot inter-switch intervals
with plt.xkcd():
plot_interswitch_interval_histogram(inter_switch_intervals)
###Output
_____no_output_____
###Markdown
We can also generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation. **Run the cell below** to visualize the distribution.
###Code
# @title Distribution of time spent in each state.
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel');
###Output
_____no_output_____
###Markdown
Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of the **fraction of time the channel is Open**. Let's also take a look at the fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us what fraction of the time steps so far the channel has spent Open, at each point in the simulation. **Run the cell below**.
###Code
# @title Cumulative mean of state
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopted some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo: Varying transition probability values & T. Using the interactive demo below, explore the state-switch simulation for different transition probability values of states $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for the total simulation time length *T*. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same? How does the bar graph of system states change based on these values?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove explanation
"""
Discussion:
(1) Does the general shape of the inter-switch interval distribution
change or does it stay relatively the same?
(2) How does the bar graph of system states change based on these values?
Answers:
(1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shifts the distribution towards shorter intervals.
(2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Distributional Perspective
###Code
# @title Video 2: State Transitions
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1uk4y1B7ru", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="U6YRhLuRhHg", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state (see diagram in lecture). The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*: $\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$. Each transition probability shown in the matrix is as follows: 1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed. 2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state. 3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state. 4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open. _Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above where $\mathbf{A}$ had described the function from the state to the time derivative of the state. Exercise 2 (2B): Probability Propagation. Complete the code below to simulate the propagation of probabilities of the ion channel being closed/open through time. A variable called `x_kp1` (short for $x$ at timestep $k$ plus 1) should be calculated at each step *k* in the loop. However, you should plot $x$.
###Code
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
###################################################################
## TODO: Insert your code here to compute x_kp1 (x at k plus 1)
raise NotImplementedError("Student exercise: need to implement simulation")
## hint: use np.dot(a, b) function to compute the dot product
## of the transition matrix A and the last state in x
## hint 2: use np.vstack to append the latest state to x
###################################################################
# Compute the state of x at time k+1
x_kp1 = ...
# Stack (append) this new state onto x to keep track of x through time steps
x = ...
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
# x, t = simulate_prob_prop(A, x0, dt, T)
# plot_state_probabilities(t,x)
# to_remove solution
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
# Compute the state of x at time k+1
x_kp1 = np.dot(A, x[-1,:])
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
x, t = simulate_prob_prop(A, x0, dt, T)
with plt.xkcd():
plot_state_probabilities(t,x)
###Output
_____no_output_____
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. This method is useful in that we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_. Re-calculating the equilibrium probability of being Open, $\mu_{c2o} / (\mu_{c2o} + \mu_{o2c})$, we see that this matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
_____no_output_____
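###Markdown
 For completeness (a sketch, not part of the original tutorial): starting from the all-Closed initial condition, the probability of being Open relaxes approximately as $P_{open}(t) = \frac{\mu_{c2o}}{\mu_{c2o} + \mu_{o2c}} \left(1 - e^{-(\mu_{c2o} + \mu_{o2c}) t}\right)$, and the discrete propagation above follows this curve closely.
###Code
# a sketch comparing the simulated P(open) (second column of x) with the
# continuous-time solution; uses x, t, c2o, o2c from the solution above
p_open_analytic = (c2o / (c2o + o2c)) * (1 - np.exp(-(c2o + o2c) * t))
print("max |simulated - analytic| P(open): %.4f" % np.max(np.abs(x[:, 1] - p_open_analytic)))
###Output
_____no_output_____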
###Markdown
--- Section 3: Equilibrium of the telegraph process
###Code
# @title Video 3: Continuous vs. Discrete Time Formulation
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1di4y1g7Yc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="csetTTauIh8", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
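###Markdown
 To connect the discrete-time matrix to the continuous-time picture discussed in the video (a sketch; the notation in the lecture may differ): for small $dt$, $\mathbf{A} = \mathbf{I} + \mathbf{Q}\,dt$ with rate matrix $\mathbf{Q} = \left[ \begin{array}{cc} -\mu_{c2o} & \mu_{o2c} \\ \mu_{c2o} & -\mu_{o2c} \end{array} \right]$, and the equilibrium distribution $p$ satisfies $\mathbf{Q} p = 0$.
###Code
# a sketch using c2o, o2c, dt and A from the cells above
Q = np.array([[-c2o,  o2c],
              [ c2o, -o2c]])
print("A - (I + Q*dt):\n", A - (np.eye(2) + Q * dt))  # should be ~0
p_eq = np.array([o2c, c2o]) / (c2o + o2c)
print("Q @ p_eq:", Q @ p_eq)  # should be ~[0, 0]
###Output
_____no_output_____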
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Section 2, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$.As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigenvalues:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
_____no_output_____
###Markdown
Exercise 3 (2C): Finding a stable state. Which of these eigenvalues corresponds to the **stable** (equilibrium) solution? What is the eigenvector of this eigenvalue? How does that explain the equilibrium solutions in the simulation in Section 2 of this tutorial? _Hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
###################################################################
## Insert your thoughts here
###################################################################
# to_remove explanation
"""
Discussion:
Which of the eigenvalues corresponds to the stable solution?
What is the eigenvector of this eigenvalue?
How does that explain the equilibrium solutions in Section 2?
Recommendation:
Ask the students to work in small groups (of 2 or 3) to discuss these questions.
Answers:
Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
The eigenvector corresponding to this eigenvalue is the stable solution.
To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(closed), P(open)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Section 2.
""";
# The eigenvector whose eigenvalue is 1 gives the equilibrium distribution once
# rescaled; the other eigenvector does not rescale to valid probabilities.
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
###Output
_____no_output_____
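###Markdown
As a follow-up sketch (hypothetical, not part of the original exercise), the equilibrium eigenvector can also be selected programmatically: pick the eigenvalue closest to 1 and rescale its eigenvector so that the entries sum to 1.
###Code
# Hypothetical helper, assuming `lam` and `v` from np.linalg.eig(A) above are in scope
import numpy as np
idx_stable = np.argmin(np.abs(lam - 1))   # index of the eigenvalue closest to 1
p_stable = v[:, idx_stable]               # corresponding eigenvector
p_stable = p_stable / p_stable.sum()      # rescale so the entries sum to 1
print("Equilibrium distribution [P(Closed), P(Open)]:", p_stable)
###Output
_____no_output_____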
###Markdown
Tutorial 2: Markov Processes**Week 2, Day 2: Linear Systems****By Neuromatch Academy****Content Creators**: Bing Wen Brunton, Ellie Stradquist**Content Reviewers**: Norma Kuhn, Karolina Stosio, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom, Ethan Cheng **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial Objectives*Estimated timing of tutorial: 45 minutes*In this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process, elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For some probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time.This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**.The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:* Understand Markov processes and history dependence.* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters.
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/snv4m/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Plotting Functions
def plot_switch_simulation(t, x):
fig = plt.figure()
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure()
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure()
plt.plot(time, states[:,0], label='Closed to open')
plt.plot(time, states[:,1], label='Open to closed')
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
--- Section 1: Telegraph Process
###Code
# @title Video 1: Markov Process
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV11C4y1h7Eu", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="xZO6GbU48ns", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This video covers a definition of Markov processes and an introduction to ion channels opening/closing as an example of a telegraph process.Let's consider a Markov process with two states, where switches between the two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$.We simulate the process of changing states as a **Poisson process**. You have seen the Poisson process in the [pre-reqs statistics day](https://compneuro.neuromatch.io/tutorials/W0D5_Statistics/student/W0D5_Tutorial1.html). The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of some event is not known. Importantly, the Poisson process dictates the following points: 1. The probability of some event occurring is _independent of all other events_.2. The average rate of events within a given time period is constant.3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously. In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state change process, we also track at which times throughout the simulation the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches.You briefly saw a Markov process in the [pre-reqs statistics day](https://compneuro.neuromatch.io/tutorials/W0D5_Statistics/student/W0D5_Tutorial2.html#section-1-2-markov-chains). Run the cell below to show the state-change simulation process. Note that a random seed was set in the code block, so re-running the code will produce the same plot. Commenting out that line will produce a different simulation each run.
###Code
# @markdown Execute to simulate state changes
# parameters
T = 5000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
np.random.seed(0) # set random seed
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
plot_switch_simulation(t,x)
###Output
_____no_output_____
###Markdown
Coding Exercise 1: Computing intervals between switches*Referred to in video as exercise 2A*We now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`.We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
##############################################################################
## TODO: Insert your code here to calculate between-state-switch intervals
raise NotImplementedError("Student exercise: need to calculate switch intervals")
##############################################################################
# hint: see np.diff()
inter_switch_intervals = ...
# plot inter-switch intervals
plot_interswitch_interval_histogram(inter_switch_intervals)
# to_remove solution
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
# plot inter-switch intervals
with plt.xkcd():
plot_interswitch_interval_histogram(inter_switch_intervals)
###Output
_____no_output_____
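###Markdown
Before moving on, here is a small check (a hypothetical sketch, assuming `inter_switch_intervals`, `c2o`, and `o2c` from above are in scope): the waiting time in each state of this telegraph process is approximately exponentially distributed with mean $1/\mu$, and switches alternate between Closed and Open, so the average inter-switch interval should be roughly the average of $1/\mu_{c2o}$ and $1/\mu_{o2c}$.
###Code
# Hypothetical comparison of the empirical and theoretical mean inter-switch interval
import numpy as np
mean_interval_empirical = np.mean(inter_switch_intervals)
mean_interval_theory = 0.5 * (1 / c2o + 1 / o2c)  # alternating Closed/Open dwell times
print("Empirical mean inter-switch interval:   %.1f" % mean_interval_empirical)
print("Theoretical mean inter-switch interval: %.1f" % mean_interval_theory)
###Output
_____no_output_____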
###Markdown
In the next cell, we generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation.
###Code
# @markdown Execute cell to visualize distribution of time spent in each state.
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel');
###Output
_____no_output_____
###Markdown
<!-- Though the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. Before we continue exploring these distributions further, let's first take a look at this fraction of Open states as a cumulative mean of the state $x$: -->Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of the **fraction of time the channel is Open**.Let's also take a look at the fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us the fraction of time the channel has spent in the Open state up to each point in the simulation.
###Code
# @markdown Execute to visualize cumulative mean of state
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
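###Markdown
As a rough check (a sketch with assumptions: the telegraph state array `x` and the rates `c2o` and `o2c` from the cells above are in scope, and the first half of the simulation is discarded as a burn-in period), the long-run mean of the state should approach the analytic fraction of time spent Open, $\mu_{c2o} / (\mu_{c2o} + \mu_{o2c})$.
###Code
# Hypothetical comparison of the empirical fraction of time Open with the analytic value
import numpy as np
fraction_open_empirical = np.mean(x[len(x) // 2:])  # mean state over the second half
fraction_open_analytic = c2o / (c2o + o2c)
print("Empirical fraction of time Open: %.3f" % fraction_open_empirical)
print("Analytic fraction of time Open:  %.3f" % fraction_open_analytic)
###Output
_____no_output_____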
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopted some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo 1: Varying transition probability values & TUsing the interactive demo below, explore the state-switch simulation for different transition probability values of states $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for total simulation time length *T*. 1. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same? 2. How does the bar graph of system states change based on these values?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove explanation
"""
1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shifts the distribution towards shorter intervals.
2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
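###Markdown
To quantify the second observation above (a hypothetical sketch, not part of the original demo), we can tabulate the equilibrium fraction of time Open, $\mu_{c2o} / (\mu_{c2o} + \mu_{o2c})$, for a few example rate pairs.
###Code
# Hypothetical table of equilibrium open fractions for a few illustrative rate pairs
for c2o_demo, o2c_demo in [(0.02, 0.1), (0.1, 0.1), (0.2, 0.05)]:
    frac_open = c2o_demo / (c2o_demo + o2c_demo)
    print("c2o=%.2f, o2c=%.2f -> fraction of time Open at equilibrium: %.2f"
          % (c2o_demo, o2c_demo, frac_open))
###Output
_____no_output_____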
###Markdown
--- Section 2: Distributional Perspective*Estimated timing to here from start of tutorial: 18 min*
###Code
# @title Video 2: State Transitions
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1uk4y1B7ru", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="U6YRhLuRhHg", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This video serves as an introduction to the telegraph process of ion channels opening/closing with an alternative formulation as a matrix/vector representation of probabilistic state transitions. Click here for text recap of video We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state.(see diagram in lecture)The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*:$\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$.Each transition probability shown in the matrix is as follows:1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed. 2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state.3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state. 4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open. _Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above where $\mathbf{A}$ had described the function from the state to the time derivative of the state. Coding Exercise 2: Probability Propagation*Referred to in video as exercise 2B*Complete the code below to simulate the propagation of probabilities of closed/open of the ion channel through time. A variable called `x_kp1` (short for $x$ at timestep $k$ plus 1) should be calculated at each step *k* in the loop. However, you should plot $x$.
###Code
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
###################################################################
## TODO: Insert your code here to compute x_kp1 (x at k plus 1)
raise NotImplementedError("Student exercise: need to implement simulation")
## hint: use np.dot(a, b) function to compute the dot product
## of the transition matrix A and the last state in x
## hint 2: use np.vstack to append the latest state to x
###################################################################
# Compute the state of x at time k+1
x_kp1 = ...
# Stack (append) this new state onto x to keep track of x through time steps
x = ...
return x, t
# Set parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# Initial condition: start as Closed
x0 = np.array([[1, 0]])
# Simulate probabilities propagation
x, t = simulate_prob_prop(A, x0, dt, T)
# Visualize
plot_state_probabilities(t,x)
# to_remove solution
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
# Compute the state of x at time k+1
x_kp1 = np.dot(A, x[-1,:])
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
return x, t
# Set parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# Initial condition: start as Closed
x0 = np.array([[1, 0]])
# Simulate probabilities propagation
x, t = simulate_prob_prop(A, x0, dt, T)
# Visualize
with plt.xkcd():
plot_state_probabilities(t,x)
###Output
_____no_output_____
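###Markdown
One property worth checking (a minimal sketch, assuming `A` and `x` from the cells above are in scope): each column of the transition matrix $\mathbf{A}$ sums to 1, so total probability is conserved and every row of the simulated `x` should also sum to 1.
###Code
# Hypothetical conservation-of-probability check (not part of the original exercise)
import numpy as np
print("Column sums of A:", A.sum(axis=0))                      # both should equal 1
print("All rows of x sum to 1:", np.allclose(x.sum(axis=1), 1))
###Output
_____no_output_____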
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. This method is useful in that we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_.Re-calculating the equilibrium probability of being Open, $c2o / (c2o + o2c)$, with this method, we see that this matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
_____no_output_____
###Markdown
--- Section 3: Equilibrium of the telegraph process*Estimated timing to here from start of tutorial: 30 min*
###Code
# @title Video 3: Continuous vs. Discrete Time Formulation
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1di4y1g7Yc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="csetTTauIh8", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Section 2, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$.As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigenvalues:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
_____no_output_____
###Markdown
Think! 3: Finding a stable state1. Which of these eigenvalues corresponds to the **stable** (equilibrium) solution? 2. What is the eigenvector of this eigenvalue? 3. How does that explain the equilibrium solutions in the simulation in Section 2 of this tutorial?_hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
# to_remove explanation
"""
1) Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
2) The eigenvector corresponding to this eigenvalue is the stable solution.
3) To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(closed), P(open)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Section 2.
""";
# The eigenvector whose eigenvalue is 1 gives the equilibrium distribution once
# rescaled; the other eigenvector does not rescale to valid probabilities.
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
###Output
_____no_output_____
###Markdown
Neuromatch Academy 2020, Week 2, Day 2, Tutorial 2 Markov Processes**Content Creators**: Bing Wen Brunton, Ellie Stradquist**Content Reviewers**: Norma Kuhn, Karolina Stosio, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom --- Tutorial ObjectivesIn this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process, elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For some probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time.This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**.The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:* Understand Markov processes and history dependence.* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters. --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_switch_simulation(t, x):
fig = plt.figure()
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure()
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure()
plt.plot(time, states[:,0], label='Closed to open')
plt.plot(time, states[:,1], label='Open to closed')
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
--- Section 1: Telegraph Process
###Code
#@title Video 1: Markov Process
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="xZO6GbU48ns", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/xZO6GbU48ns
###Markdown
Let's consider a Markov process with two states, where switches between the two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$.We simulate the process of changing states as a **Poisson process**. The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of some event is not known. Importantly, the Poisson process dictates the following points: 1. The probability of some event occurring is _independent of all other events_.2. The average rate of events within a given time period is constant.3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously. In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state change process, we also track at which times throughout the simulation the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches. **Run the cell below** to show the state-change simulation process. Note that a random seed was set in the code block, so re-running the code will produce the same plot. Commenting out that line will produce a different simulation each run.
###Code
# @title State-change simulation process
# parameters
T = 50000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
# np.random.seed(0) # set random seed
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
plot_switch_simulation(t,x)
###Output
_____no_output_____
###Markdown
Exercise 1 (2A): Computing intervals between switchesWe now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`.We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
plot_interswitch_interval_histogram(inter_switch_intervals)
###Output
_____no_output_____
###Markdown
We can also generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation. **Run the cell below** to visualize the distribution.
###Code
# @title Distribution of time spent in each state.
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel');
###Output
_____no_output_____
###Markdown
<!-- Though the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. Before we continue exploring these distributions further, let's first take a look at this fraction of Open states as a cumulative mean of the state $x$: -->Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of the **fraction of time the channel is Open**.Let's also take a look at the fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us the fraction of time the channel has spent in the Open state up to each point in the simulation. **Run the cell below**.
###Code
# @title Cumulative mean of state
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopted some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo: Varying transition probability values & TUsing the interactive demo below, explore the state-switch simulation for different transition probability values of states $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for total simulation time length *T*. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same? How does the bar graph of system states change based on these values?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove explanation
"""
Discussion:
(1) Does the general shape of the inter-switch interval distribution
change or does it stay relatively the same?
(2) How does the bar graph of system states change based on these values?
Answers:
(1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shifts the distribution towards shorter intervals.
(2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Distributional Perspective
###Code
#@title Video 2: State Transitions
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="U6YRhLuRhHg", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/U6YRhLuRhHg
###Markdown
We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state.(see diagram in lecture)The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*:$\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$.Each transition probability shown in the matrix is as follows:1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed. 2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state.3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state. 4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open. _Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above where $\mathbf{A}$ had described the function from the state to the time derivative of the state. Exercise 2 (2B): Probability PropagationComplete the code below to simulate the propagation of probabilities of closed/open of the ion channel through time. A variable called `x_kp1` (short for $x$ at timestep $k$ plus 1) should be calculated at each step *k* in the loop. However, you should plot $x$.
###Code
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
# Compute the state of x at time k+1
x_kp1 = A @ x[k, :].T
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack([x, x_kp1.T])
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
x, t = simulate_prob_prop(A, x0, dt, T)
plot_state_probabilities(t,x)
###Output
_____no_output_____
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. This method is useful in that we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_.Re-calculating the equilibrium probability of being Open, $c2o / (c2o + o2c)$, with this method, we see that this matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
Probability of state c2o: 0.167
###Markdown
--- Section 3: Equilibrium of the telegraph process
###Code
#@title Video 3: Continuous vs. Discrete Time Formulation
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="csetTTauIh8", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/csetTTauIh8
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Section 2, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$.As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigenvalues:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
Eigenvalues: [1. 0.988]
Eigenvector 1: [0.98058068 0.19611614]
Eigenvector 2: [-0.70710678 0.70710678]
###Markdown
Exercise 3 (2C): Finding a stable stateWhich of these eigenvalues corresponds to the **stable** (equilibrium) solution? What is the eigenvector of this eigenvalue? How does that explain the equilibrium solutions in the simulation in Section 2 of this tutorial?_hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
# to_remove explanation
"""
Discussion:
Which of the eigenvalues corresponds to the stable solution?
What is the eigenvector of this eigenvalue?
How does that explain the equilibrium solutions in Section 2?
Recommendation:
Ask the students to work in small groups (of 2 or 3) to discuss these questions.
Answers:
Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
The eigenvector corresponding to this eigenvalue is the stable solution.
To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(closed), P(open)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Section 2.
""";
# The eigenvector whose eigenvalue is 1 gives the equilibrium distribution once
# rescaled; the other eigenvector does not rescale to valid probabilities.
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
###Output
[0.83333333 0.16666667]
[-1.06150861e+15 1.06150861e+15]
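###Markdown
The second eigenvalue also carries information (a hypothetical sketch, assuming `lam`, `c2o`, `o2c`, and `dt` from the cells above are in scope): the decaying mode shrinks by a factor of $\lambda_2 = 1 - (\mu_{c2o} + \mu_{o2c})\,dt$ per step, so the relaxation towards equilibrium has a characteristic timescale of roughly $1 / (\mu_{c2o} + \mu_{o2c})$.
###Code
# Hypothetical estimate of the relaxation timescale from the decaying eigenvalue
import numpy as np
lam_decay = np.min(lam)                   # the eigenvalue smaller than 1
tau_from_eig = -dt / np.log(lam_decay)    # per-step decay converted to a time constant
tau_analytic = 1 / (c2o + o2c)
print("Relaxation time from eigenvalue: %.2f" % tau_from_eig)
print("Analytic relaxation time:        %.2f" % tau_analytic)
###Output
_____no_output_____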
###Markdown
Tutorial 2: Markov Processes**Week 2, Day 2: Linear Systems****By Neuromatch Academy****Content Creators**: Bing Wen Brunton, Ellie Stradquist**Content Reviewers**: Norma Kuhn, Karolina Stosio, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom --- Tutorial ObjectivesIn this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process, elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For some probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time.This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**.The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:* Understand Markov processes and history dependence.* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters. --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_switch_simulation(t, x):
fig = plt.figure()
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure()
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure()
plt.plot(time, states[:,0], label='Closed to open')
plt.plot(time, states[:,1], label='Open to closed')
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
--- Section 1: Telegraph Process
###Code
#@title Video 1: Markov Process
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="xZO6GbU48ns", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Let's consider a Markov process with two states, where switches between the two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$.We simulate the process of changing states as a **Poisson process**. The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of some event is not known. Importantly, the Poisson process dictates the following points: 1. The probability of some event occurring is _independent of all other events_.2. The average rate of events within a given time period is constant.3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously. In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state change process, we also track at which times throughout the simulation the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches. **Run the cell below** to show the state-change simulation process. Note that a random seed was set in the code block, so re-running the code will produce the same plot. Commenting out that line will produce a different simulation each run.
###Code
# @title State-change simulation process
# parameters
T = 5000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
np.random.seed(0) # set random seed
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
plot_switch_simulation(t,x)
###Output
_____no_output_____
###Markdown
Exercise 1 (2A): Computing intervals between switchesWe now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`.We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
##############################################################################
## TODO: Insert your code here to calculate between-state-switch intervals,
## and uncomment the last line to plot the histogram
##############################################################################
# hint: see np.diff()
# inter_switch_intervals = ...
# plot_interswitch_interval_histogram(inter_switch_intervals)
# to_remove solution
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
# plot inter-switch intervals
with plt.xkcd():
plot_interswitch_interval_histogram(inter_switch_intervals)
###Output
_____no_output_____
###Markdown
We can also generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation. **Run the cell below** to visualize the distribution.
###Code
# @title Distribution of time spent in each state.
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel');
###Output
_____no_output_____
###Markdown
<!-- Though the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. Before we continue exploring these distributions further, let's first take a look at this fraction of Open states as a cumulative mean of the state $x$: -->Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of the **fraction of time the channel is Open**.Let's also take a look at the fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us the fraction of time the channel has spent in the Open state up to each point in the simulation. **Run the cell below**.
###Code
# @title Cumulative mean of state
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopted some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo: Varying transition probability values & TUsing the interactive demo below, explore the state-switch simulation for different transition probability values of states $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for total simulation time length *T*. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same? How does the bar graph of system states change based on these values?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove explanation
"""
Discussion:
(1) Does the general shape of the inter-switch interval distribution
change or does it stay relatively the same?
(2) How does the bar graph of system states change based on these values?
Answers:
(1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shifts the distribution towards shorter intervals.
(2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Distributional Perspective
###Code
#@title Video 2: State Transitions
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="U6YRhLuRhHg", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state.(see diagram in lecture)The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*:$\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$.Each transition probability shown in the matrix is as follows:1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed. 2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state.3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state. 4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open. _Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above where $\mathbf{A}$ had described the function from the state to the time derivative of the state. Exercise 2 (2B): Probability PropagationComplete the code below to simulate the propagation of probabilities of closed/open of the ion channel through time. A variable called `x_kp1` (short for $x$ at timestep $k$ plus 1) should be calculated at each step *k* in the loop. However, you should plot $x$.
###Code
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
###################################################################
## TODO: Insert your code here to compute x_kp1 (x at k plus 1)
raise NotImplementedError("Student exercise: need to implement simulation")
## hint: use np.dot(a, b) function to compute the dot product
## of the transition matrix A and the last state in x
## hint 2: use np.vstack to append the latest state to x
###################################################################
# Compute the state of x at time k+1
x_kp1 = ...
# Stack (append) this new state onto x to keep track of x through time steps
x = ...
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
# x, t = simulate_prob_prop(A, x0, dt, T)
# plot_state_probabilities(t,x)
# to_remove solution
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
# Compute the state of x at time k+1
x_kp1 = np.dot(A, x[-1,:])
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
x, t = simulate_prob_prop(A, x0, dt, T)
with plt.xkcd():
plot_state_probabilities(t,x)
###Output
_____no_output_____
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. This method is useful in that we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_.Re-calculating the equilibrium probability of being Open, $c2o / (c2o + o2c)$, with this method, we see that this matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
_____no_output_____
###Markdown
--- Section 3: Equilibrium of the telegraph process
###Code
#@title Video 3: Continuous vs. Discrete Time Formulation
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="csetTTauIh8", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Section 2, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$.As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigenvalues:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
_____no_output_____
###Markdown
Exercise 3 (2C): Finding a stable stateWhich of these eigenvalues corresponds to the **stable** (equilibrium) solution? What is the eigenvector of this eigenvalue? How does that explain the equilibrium solutions in simulation in Section 2 of this tutorial?_hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
###################################################################
## Insert your thoughts here
###################################################################
# to_remove explanation
"""
Discussion:
Which of the eigenvalues corresponds to the stable solution?
What is the eigenvector of this eigenvalue?
How does that explain the equilibrium solutions in Section 2?
Recommendation:
Ask the students to work in small groups (of 2 or 3) to discuss these questions.
Answers:
Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
The eigenvector corresponding to this eigenvalue is the stable solution.
To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(open), P(closed)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Section 2.
""";
# only the eigenvector whose eigenvalue is 1 gives a meaningful (equilibrium) distribution when normalized
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
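# Sketch (not part of the original solution): programmatically pick the
# eigenvector whose eigenvalue is numerically closest to 1, normalize it, and
# compare it with the final state of the probability propagation in Section 2.
stable = v[:, np.argmin(np.abs(lam - 1))]
print("Equilibrium from eigenvector:", stable / stable.sum())
print("Final state of simulation:   ", x[-1, :])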
###Output
_____no_output_____
###Markdown
Neuromatch Academy 2020, Week 2, Day 2, Tutorial 2 Markov Processes**Content Creators**: Bing Wen Brunton, Ellie Stradquist**Content Reviewers**: Norma Kuhn, Karolina Stosio, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom --- Tutorial ObjectivesIn this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process, elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For some probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time.This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**.The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:* Understand Markov processes and history dependence.* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters. --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_switch_simulation(t, x):
fig = plt.figure()
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure()
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure()
plt.plot(time, states[:,0], label='Closed to open')
plt.plot(time, states[:,1], label='Open to closed')
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
--- Section 1: Telegraph Process
###Code
#@title Video 1: Markov Process
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="xZO6GbU48ns", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/xZO6GbU48ns
###Markdown
Let's consider a Markov process with two states, where switches between each two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$.We simulate the process of changing states as a **Poisson process**. The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of some event is not known. Importantly, the Poisson process dictates the following points: 1. The probability of some event occurring is _independent from all other events_.2. The average rate of events within a given time period is constant.3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously. In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state change process, we also track at which times throughout the simulation the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches. **Run the cell below** to show the state-change simulation process. Note that a random seed was set in the code block, so re-running the code will produce the same plot. Commenting out that line will produce a different simulation each run.
###Code
# @title State-change simulation process
# parameters
T = 5000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
np.random.seed(0) # set random seed
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
plot_switch_simulation(t,x)
###Output
_____no_output_____
###Markdown
Exercise 1 (2A): Computing intervals between switchesWe now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`.We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
##############################################################################
## TODO: Insert your code here to calculate between-state-switch intervals,
## and uncomment the last line to plot the histogram
##############################################################################
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
plot_interswitch_interval_histogram(inter_switch_intervals)
# to_remove solution
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
# plot inter-switch intervals
with plt.xkcd():
plot_interswitch_interval_histogram(inter_switch_intervals)
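# Extra sanity check (a sketch, not part of the original solution): the dwell
# time in each state is roughly exponential, with mean 1/c2o for Closed and
# 1/o2c for Open, so the mean inter-switch interval should be close to the
# average of those two values.
print("Empirical mean interval:   %.1f" % inter_switch_intervals.mean())
print("Theoretical mean interval: %.1f" % (0.5 * (1 / c2o + 1 / o2c)))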
###Output
_____no_output_____
###Markdown
We can also generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation. **Run the cell below** to visualize the distribution.
###Code
# @title Distribution of time spent in each state.
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel');
###Output
_____no_output_____
###Markdown
<!-- Though the system started initially in the Closed ($x=0$) state, over time, it settles into a equilibrium distribution where we can predict on what fraction of time it is Open as a function of the $\mu$ parameters. Before we continue exploring these distributions further, let's first take a look at the this fraction of Open states as a cumulative mean of the state $x$: -->Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of **fraction of time channel is Open**.Let's also take a look at the fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us the fraction of time the channel has spent in the Open state up to each point in time. **Run the cell below**.
###Code
# @title Cumulative mean of state
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopted some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo: Varying transition probability values & TUsing the interactive demo below, explore the state-switch simulation for different transition probability values of states $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for total simulation time length *T*. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same? How does the bar graph of system states change based on these values?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove explanation
"""
Discussion:
(1) Does the general shape of the inter-switch interval distribution
change or does it stay relatively the same?
(2) How does the bar graph of system states change based on these values?
Answers:
(1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shifts the distribution towards shorter intervals.
(2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Distributional Perspective
###Code
#@title Video 2: State Transitions
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="U6YRhLuRhHg", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state.(see diagram in lecture)The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*:$\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$.Each transition probability shown in the matrix is as follows:1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed. 2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state.3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state. 4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open. _Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above where $\mathbf{A}$ had described the function from the state to the time derivative of the state. Exercise 2 (2B): Probability PropagationComplete the code below to simulate the propagation of probabilities of closed/open of the ion channel through time. A variable called `x_kp1` (short for, $x$ at timestep $k$ plus 1) should be calculated per each step *k* in the loop. However, you should plot $x$.
###Code
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
###################################################################
## TODO: Insert your code here to compute x_kp1 (x at k plus 1)
# raise NotImplementedError("Student exercise: need to implement simulation")
## hint: use np.dot(a, b) function to compute the dot product
## of the transition matrix A and the last state in x
## hint 2: use np.vstack to append the latest state to x
###################################################################
# Compute the state of x at time k+1
x_kp1 = A @ x.T[:, -1]
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
x, t = simulate_prob_prop(A, x0, dt, T)
plot_state_probabilities(t,x)
x0.T[:,-1]
# to_remove solution
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
# Compute the state of x at time k+1
x_kp1 = np.dot(A, x[-1,:])
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
x, t = simulate_prob_prop(A, x0, dt, T)
with plt.xkcd():
plot_state_probabilities(t,x)
###Output
_____no_output_____
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. Using this method is useful in that we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_. Re-calculating our value of the probability of $c2o$ again with this method, we see that this matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
Probability of state c2o: 0.167
###Markdown
--- Section 3: Equilibrium of the telegraph process
###Code
#@title Video 3: Continuous vs. Discrete Time Formulation
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="csetTTauIh8", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Section 2, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$.As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigenvalues:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
Eigenvalues: [1. 0.988]
Eigenvector 1: [0.98058068 0.19611614]
Eigenvector 2: [-0.70710678 0.70710678]
###Markdown
Exercise 3 (2C): Finding a stable stateWhich of these eigenvalues corresponds to the **stable** (equilibrium) solution? What is the eigenvector of this eigenvalue? How does that explain the equilibrium solutions in simulation in Section 2 of this tutorial?_hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
###################################################################
## Insert your thoughts here
###################################################################
# to_remove explanation
"""
Discussion:
Which of the eigenvalues corresponds to the stable solution?
What is the eigenvector of this eigenvalue?
How does that explain the equilibrium solutions in Section 2?
Recommendation:
Ask the students to work in small groups (of 2 or 3) to discuss these questions.
Answers:
Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
The eigenvector corresponding to this eigenvalue is the stable solution.
To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(open), P(closed)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Section 2.
""";
# only the eigenvector whose eigenvalue is 1 gives a meaningful (equilibrium) distribution when normalized
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
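# Sketch (not part of the original solution): programmatically pick the
# eigenvector whose eigenvalue is numerically closest to 1, normalize it, and
# compare it with the final state of the probability propagation in Section 2.
stable = v[:, np.argmin(np.abs(lam - 1))]
print("Equilibrium from eigenvector:", stable / stable.sum())
print("Final state of simulation:   ", x[-1, :])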
###Output
[0.83333333 0.16666667]
[-1.06150861e+15 1.06150861e+15]
###Markdown
Neuromatch Academy 2020, Week 2, Day 2, Tutorial 2 Markov Processes**Content Creators**: Bing Wen Brunton, Ellie Stradquist**Content Reviewers**: Norma Kuhn, Karolina Stosio, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom --- Tutorial ObjectivesIn this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process, elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For some probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time.This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**.The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:* Understand Markov processes and history dependence.* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters. --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("/share/dataset/COMMON/nma.mplstyle.txt")
#@title Helper Functions
def plot_switch_simulation(t, x):
fig = plt.figure()
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure()
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure()
plt.plot(time, states[:,0], label='Closed to open')
plt.plot(time, states[:,1], label='Open to closed')
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
--- Section 1: Telegraph Process
###Code
#@title Video 1: Markov Process
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV11C4y1h7Eu', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV11C4y1h7Eu
###Markdown
Let's consider a Markov process with two states, where switches between each two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$.We simulate the process of changing states as a **Poisson process**. The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of some event is not known. Importantly, the Poisson process dictates the following points: 1. The probability of some event occurring is _independent from all other events_.2. The average rate of events within a given time period is constant.3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously. In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state change process, we also track at which times throughout the simulation the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches. **Run the cell below** to show the state-change simulation process. Note that a random seed was set in the code block, so re-running the code will produce the same plot. Commenting out that line will produce a different simulation each run.
###Code
# @title State-change simulation process
# parameters
T = 5000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
np.random.seed(0) # set random seed
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
plot_switch_simulation(t,x)
###Output
_____no_output_____
###Markdown
Exercise 1 (2A): Computing intervals between switchesWe now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`.We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
##############################################################################
## TODO: Insert your code here to calculate between-state-switch intervals,
## and uncomment the last line to plot the histogram
##############################################################################
# hint: see np.diff()
# inter_switch_intervals = ...
# plot_interswitch_interval_histogram(inter_switch_intervals)
# to_remove solution
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
# plot inter-switch intervals
with plt.xkcd():
plot_interswitch_interval_histogram(inter_switch_intervals)
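# Extra sanity check (a sketch, not part of the original solution): the dwell
# time in each state is roughly exponential, with mean 1/c2o for Closed and
# 1/o2c for Open, so the mean inter-switch interval should be close to the
# average of those two values.
print("Empirical mean interval:   %.1f" % inter_switch_intervals.mean())
print("Theoretical mean interval: %.1f" % (0.5 * (1 / c2o + 1 / o2c)))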
###Output
_____no_output_____
###Markdown
We can also generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation. **Run the cell below** to visualize the distribution.
###Code
# @title Distribution of time spent in each state.
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel');
###Output
_____no_output_____
###Markdown
<!-- Though the system started initially in the Closed ($x=0$) state, over time, it settles into a equilibrium distribution where we can predict on what fraction of time it is Open as a function of the $\mu$ parameters. Before we continue exploring these distributions further, let's first take a look at the this fraction of Open states as a cumulative mean of the state $x$: -->Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of **fraction of time channel is Open**.Let's also take a look at the fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us the fraction of time the channel has spent in the Open state up to each point in time. **Run the cell below**.
###Code
# @title Cumulative mean of state
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopted some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo: Varying transition probability values & TUsing the interactive demo below, explore the state-switch simulation for different transition probability values of states $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for total simulation time length *T*. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same? How does the bar graph of system states change based on these values?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove explanation
"""
Discussion:
(1) Does the general shape of the inter-switch interval distribution
change or does it stay relatively the same?
(2) How does the bar graph of system states change based on these values?
Answers:
(1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shifts the distribution towards shorter intervals.
(2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Distributional Perspective
###Code
#@title Video 2: State Transitions
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1uk4y1B7ru', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1uk4y1B7ru
###Markdown
We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state.(see diagram in lecture)The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*:$\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$.Each transition probability shown in the matrix is as follows:1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed. 2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state.3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state. 4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open. _Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above where $\mathbf{A}$ had described the function from the state to the time derivative of the state. Exercise 2 (2B): Probability PropagationComplete the code below to simulate the propagation of probabilities of closed/open of the ion channel through time. A variable called `x_kp1` (short for, $x$ at timestep $k$ plus 1) should be calculated per each step *k* in the loop. However, you should plot $x$.
###Code
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
###################################################################
## TODO: Insert your code here to compute x_kp1 (x at k plus 1)
raise NotImplementedError("Student exercise: need to implement simulation")
## hint: use np.dot(a, b) function to compute the dot product
## of the transition matrix A and the last state in x
## hint 2: use np.vstack to append the latest state to x
###################################################################
# Compute the state of x at time k+1
x_kp1 = ...
# Stack (append) this new state onto x to keep track of x through time steps
x = ...
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
# x, t = simulate_prob_prop(A, x0, dt, T)
# plot_state_probabilities(t,x)
# to_remove solution
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
# Compute the state of x at time k+1
x_kp1 = np.dot(A, x[-1,:])
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
x, t = simulate_prob_prop(A, x0, dt, T)
with plt.xkcd():
plot_state_probabilities(t,x)
###Output
_____no_output_____
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. Using this method is useful in that we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_. Re-calculating our value of the probability of $c2o$ again with this method, we see that this matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
Probability of state c2o: 0.167
###Markdown
--- Section 3: Equilibrium of the telegraph process
###Code
#@title Video 3: Continuous vs. Discrete Time Formulation
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1di4y1g7Yc', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1di4y1g7Yc
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Section 2, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$.As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigenvalues:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
Eigenvalues: [1. 0.988]
Eigenvector 1: [0.98058068 0.19611614]
Eigenvector 2: [-0.70710678 0.70710678]
###Markdown
Exercise 3 (2C): Finding a stable stateWhich of these eigenvalues corresponds to the **stable** (equilibrium) solution? What is the eigenvector of this eigenvalue? How does that explain the equilibrium solutions in simulation in Section 2 of this tutorial?_hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
###################################################################
## Insert your thoughts here
###################################################################
# to_remove explanation
"""
Discussion:
Which of the eigenvalues corresponds to the stable solution?
What is the eigenvector of this eigenvalue?
How does that explain the equilibrium solutions in Section 2?
Recommendation:
Ask the students to work in small groups (of 2 or 3) to discuss these questions.
Answers:
Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
The eigenvector corresponding to this eigenvalue is the stable solution.
To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(open), P(closed)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Section 2.
""";
# only the eigenvector whose eigenvalue is 1 gives a meaningful (equilibrium) distribution when normalized
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
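# Sketch (not part of the original solution): programmatically pick the
# eigenvector whose eigenvalue is numerically closest to 1, normalize it, and
# compare it with the final state of the probability propagation in Section 2.
stable = v[:, np.argmin(np.abs(lam - 1))]
print("Equilibrium from eigenvector:", stable / stable.sum())
print("Final state of simulation:   ", x[-1, :])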
###Output
[0.83333333 0.16666667]
[-1.06150861e+15 1.06150861e+15]
###Markdown
Neuromatch Academy 2020, W2D2 Tutorial 2 Markov Processes**DRAFT: 2020-06-29**, Bing Wen Bruntonwith contributions by Ellie Strandquist Tutorial ObjectivesIn this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process,elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time.This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**.The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:* Understand Markov processes and history dependence.* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters. Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
#@title Figure Settings
%matplotlib inline
fig_w, fig_h = (8, 6)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h),'font.size': 16})
%config InlineBackend.figure_format = 'retina'
#@title Helper Functions
def plot_switch_simulation(t, x):
fig = plt.figure(figsize=(fig_w, fig_h))
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure(figsize=(fig_w, fig_h))
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure(figsize=(fig_w, fig_h))
plt.plot(time, states[:,0], label='Closed to open')
plt.plot(time, states[:,1], label='Open to closed')
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
Part A: Telegraph Process
###Code
#@title Video 1
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d0FHbuNf23k", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/d0FHbuNf23k
###Markdown
Let's consider a Markov process with two states, where switches between each two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$.We simulate the process of changing states as a **Poisson process**. The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of some event is not known. Importantly, the Poisson process dictates the following points: 1. The probability of some event occurring is _independent from all other events_.2. The average rate of events within a given time period is constant.3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously. In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state change process, we also track at which times throughout the simulation the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches. **Run the cell below** to show the state-change simulation process.
###Code
# @title State-change simulation process
# parameters
T = 5000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
plot_switch_simulation(t,x)
###Output
_____no_output_____
###Markdown
Exercise 2A: Computing intervals between switchesWe now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`.We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
##############################################################################
## TODO: Insert your code here to calculate between-state-switch intervals,
## and uncomment the last line to plot the histogram
##############################################################################
# inter_switch_intervals = ...
# plot_interswitch_interval_histogram(inter_switch_intervals)
# to_remove solution
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
with plt.xkcd():
plot_interswitch_interval_histogram(inter_switch_intervals)
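# Extra sanity check (a sketch, not part of the original solution): the dwell
# time in each state is roughly exponential, with mean 1/c2o for Closed and
# 1/o2c for Open, so the mean inter-switch interval should be close to the
# average of those two values.
print("Empirical mean interval:   %.1f" % inter_switch_intervals.mean())
print("Theoretical mean interval: %.1f" % (0.5 * (1 / c2o + 1 / o2c)))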
###Output
_____no_output_____
###Markdown
We can also generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation.
###Code
fig = plt.figure(figsize=(fig_w, fig_h))
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel')
###Output
_____no_output_____
###Markdown
<!-- Though the system started initially in the Closed ($x=0$) state, over time, it settles into a equilibrium distribution where we can predict on what fraction of time it is Open as a function of the $\mu$ parameters. Before we continue exploring these distributions further, let's first take a look at the this fraction of Open states as a cumulative mean of the state $x$: -->Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of **fraction of time channel is Open**.Let's also take a look at this fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us the fraction of time the channel has spent in the Open state up to each point in time.
###Code
fig = plt.figure(figsize=(fig_w, fig_h))
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopted some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo: Varying transition probability values & TUsing the interactive demo below, explore the state-switch simulation for different transition probability values of states $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for total simulation time length *T*. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same? How does the bar graph of system states change based on these values?
###Code
#@title
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
fig = plt.figure(figsize=(fig_w, fig_h))
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove solution
"""
Discussion:
(1) Does the general shape of the inter-switch interval distribution
change or does it stay relatively the same?
(2) How does the bar graph of system states change based on these values?
Answers:
(1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shifts the distribution towards shorter intervals.
(2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
###Markdown
Part B: Distributional Perspective
###Code
#@title Video 2
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="B3_v8M44RfQ", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/B3_v8M44RfQ
###Markdown
We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state.(see diagram in lecture)The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*:$\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$.Each transition probability shown in the matrix is as follows:1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed. 2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state.3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state. 4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open. _Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above where $\mathbf{A}$ had described the function from the state to the time derivative of the state. Exercise 2B: Probability PropagationComplete the code below to simulate the propagation of probabilities of closed/open of the ion channel through time. A variable called `x_kp1` (short for, $x$ at timestep $k$ plus 1) should be calculated per each step *k* in the loop. However, you should plot $x$.
###Code
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
t = np.arange(0, T, dt)
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# x will be our array to keep track of x through time
x = x0
for k in range(len(t)-1):
###################################################################
## TODO: Insert your code here to compute x_kp1 (x at k plus 1)
##
## hint: use np.dot(a, b) function to compute the dot product
## of the transition matrix A and the last state in x
###################################################################
# x_kp1 = ...
# Stack this new state onto x to keep track of x through time steps
# x = ...
# Remove the line below when you are done
pass
# Uncomment this to plot the probabilities
# plot_state_probabilities(t, x)
# to_remove solution
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
t = np.arange(0, T, dt)
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# x will be our array to keep track of x through time
x = x0
for k in range(len(t)-1):
x_kp1 = np.dot(A, x[-1,:]) # remove later
# stack this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
print(x.shape, t.shape)
with plt.xkcd():
plot_state_probabilities(t,x)
###Output
(5000, 2) (5000,)
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. Using this method is useful in that we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_. Re-calculating our value of the probability of $c2o$ again with this method, we see that this matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
Probability of state c2o: 0.167
###Markdown
Part C: Equilibrium of the telegraph process
###Code
#@title Video 3
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="EQWXZ40_C-k", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/EQWXZ40_C-k
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Part B, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$.As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigen values:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
Eigen values: [1. 0.988]
Eigenvector 1: [0.98058068 0.19611614]
Eigenvector 2: [-0.70710678 0.70710678]
###Markdown
Exercise 2C: Finding a stable stateWhich of these eigenvalues corresponds to the **stable** (equilibrium) solution? What is the eigenvector of this eigenvalue? How does that explain the equilibrium solutions in simulation in Part B of this tutorial?_hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
###################################################################
## Insert your thoughts here
###################################################################
# to_remove solution
"""
Discussion:
Which of the eigenvalues corresponds to the stable solution?
What is the eigenvector of this eigenvalue?
How does that explain the equilibrium solutions in part B?
Recommendation:
Ask the students to work in small groups (of 2 or 3) to discuss these questions.
Answers:
Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
The eigenvector corresponding to this eigenvalue is the stable solution.
To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(closed), P(open)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Part B.
""";
# whichever eigenvalue equals 1 gives the stable solution; normalizing the other eigenvector does not yield meaningful probabilities
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
###Output
[0.83333333 0.16666667]
[-1.06150861e+15 1.06150861e+15]
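###Markdown
As an additional cross-check (not part of the original exercise), we can compare the normalized stable eigenvector against the analytical equilibrium of the telegraph process, $[P(\text{closed}),P(\text{open})] = \left[\mu_{\rm o2c},\ \mu_{\rm c2o}\right]/\left(\mu_{\rm c2o}+\mu_{\rm o2c}\right)$. The short sketch below assumes the variables `A`, `c2o` and `o2c` defined in the cells above.
###Code
# Sketch: check that the eigenvalue-1 eigenvector of A, rescaled to sum to 1,
# matches the analytical equilibrium [P(closed), P(open)] = [o2c, c2o]/(c2o + o2c).
# Assumes A, c2o and o2c from the cells above.
import numpy as np
lam, v = np.linalg.eig(A)
stable = v[:, np.argmin(np.abs(lam - 1.0))]  # eigenvector whose eigenvalue is closest to 1
stable = stable / stable.sum()               # rescale so the entries sum to 1
analytical = np.array([o2c, c2o]) / (c2o + o2c)
print("normalized stable eigenvector:", stable)
print("analytical equilibrium [P(closed), P(open)]:", analytical)
###Output
_____no_output_____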
###Markdown
Neuromatch Academy 2020, Week 2, Day 2, Tutorial 2 Markov Processes**Content Creators**: Bing Wen Brunton, Ellie Stradquist**Content Reviewers**: Norma Kuhn, Karolina Stosio, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom --- Tutorial ObjectivesIn this tutorial, we will look at the dynamical systems introduced in the first tutorial through a different lens. In Tutorial 1, we studied dynamical systems as a deterministic process. For Tutorial 2, we will look at **probabilistic** dynamical systems. You may sometimes hear these systems called _stochastic_. In a probabilistic process, elements of randomness are involved. Every time you observe some probabilistic dynamical system, started from the same initial conditions, the outcome will likely be different. Put another way, dynamical systems that involve probability will incorporate random variations in their behavior. For some probabilistic dynamical systems, the differential equations express a relationship between $\dot{x}$ and $x$ at every time $t$, so that the direction of $x$ at _every_ time depends entirely on the value of $x$. Said a different way, knowledge of the value of the state variables $x$ at time t is _all_ the information needed to determine $\dot{x}$ and therefore $x$ at the next time.This property --- that the present state entirely determines the transition to the next state --- is what defines a **Markov process** and systems obeying this property can be described as **Markovian**.The goal of Tutorial 2 is to consider this type of Markov process in a simple example where the state transitions are probabilistic. In particular, we will:* Understand Markov processes and history dependence.* Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution is dependent on its parameters. --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_switch_simulation(t, x):
fig = plt.figure()
plt.plot(t, x)
plt.title('State-switch simulation')
plt.xlabel('Time')
plt.xlim((0, 300)) # zoom in time
plt.ylabel('State of ion channel 0/1', labelpad=-60)
plt.yticks([0, 1], ['Closed (0)', 'Open (1)'])
plt.show()
return
def plot_interswitch_interval_histogram(inter_switch_intervals):
fig = plt.figure()
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
def plot_state_probabilities(time, states):
fig = plt.figure()
plt.plot(time, states[:,0], label='Closed to open')
plt.plot(time, states[:,1], label='Open to closed')
plt.legend()
plt.xlabel('time')
plt.ylabel('prob(open OR closed)')
###Output
_____no_output_____
###Markdown
--- Section 1: Telegraph Process
###Code
#@title Video 1: Markov Process
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="xZO6GbU48ns", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Let's consider a Markov process with two states, where switches between each two states are probabilistic (known as a telegraph process). To be concrete, let's say we are modeling an **ion channel in a neuron that can be in one of two states: Closed (0) or Open (1)**. If the ion channel is Closed, it may transition to the Open state with probability $P(0 \rightarrow 1 | x = 0) = \mu_{c2o}$. Likewise, if the ion channel is Open, it transitions to Closed with probability $P(1 \rightarrow 0 | x=1) = \mu_{o2c}$.We simulate the process of changing states as a **Poisson process**. The Poisson process is a way to model discrete events where the average time between event occurrences is known but the exact time of some event is not known. Importantly, the Poisson process dictates the following points: 1. The probability of some event occurring is _independent from all other events_.2. The average rate of events within a given time period is constant.3. Two events cannot occur at the same moment. Our ion channel can either be in an open or closed state, but not both simultaneously. In the simulation below, we will use the Poisson process to model the state of our ion channel at all points $t$ within the total simulation time $T$. As we simulate the state change process, we also track at which times throughout the simulation the state makes a switch. We can use those times to measure the distribution of the time _intervals_ between state switches. **Run the cell below** to show the state-change simulation process. Note that a random seed was set in the code block, so re-running the code will produce the same plot. Commenting out that line will produce a different simulation each run.
###Code
# @title State-change simulation process
# parameters
T = 5000 # total Time duration
dt = 0.001 # timestep of our simulation
# simulate state of our ion channel in time
# the two parameters that govern transitions are
# c2o: closed to open rate
# o2c: open to closed rate
def ion_channel_opening(c2o, o2c, T, dt):
# initialize variables
t = np.arange(0, T, dt)
x = np.zeros_like(t)
switch_times = []
# assume we always start in Closed state
x[0] = 0
# generate a bunch of random uniformly distributed numbers
# between zero and unity: [0, 1),
# one for each dt in our simulation.
# we will use these random numbers to model the
# closed/open transitions
myrand = np.random.random_sample(size=len(t))
# walk through time steps of the simulation
for k in range(len(t)-1):
# switching between closed/open states are
# Poisson processes
if x[k] == 0 and myrand[k] < c2o*dt: # remember to scale by dt!
x[k+1:] = 1
switch_times.append(k*dt)
elif x[k] == 1 and myrand[k] < o2c*dt:
x[k+1:] = 0
switch_times.append(k*dt)
return t, x, switch_times
c2o = 0.02
o2c = 0.1
np.random.seed(0) # set random seed
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
plot_switch_simulation(t,x)
###Output
_____no_output_____
###Markdown
Exercise 1 (2A): Computing intervals between switchesWe now have `switch_times`, which is a list consisting of times when the state switched. Using this, calculate the time intervals between each state switch and store these in a list called `inter_switch_intervals`.We will then plot the distribution of these intervals. How would you describe the shape of the distribution?
###Code
##############################################################################
## TODO: Insert your code here to calculate between-state-switch intervals,
## and uncomment the last line to plot the histogram
##############################################################################
# hint: see np.diff()
# inter_switch_intervals = ...
# plot_interswitch_interval_histogram(inter_switch_intervals)
# to_remove solution
# hint: see np.diff()
inter_switch_intervals = np.diff(switch_times)
# plot inter-switch intervals
with plt.xkcd():
plot_interswitch_interval_histogram(inter_switch_intervals)
###Output
_____no_output_____
###Markdown
We can also generate a bar graph to visualize the distribution of the number of time-steps spent in each of the two possible system states during the simulation. **Run the cell below** to visualize the distribution.
###Code
# @title Distribution of time spent in each state.
states = ['Closed', 'Open']
(unique, counts) = np.unique(x, return_counts=True)
plt.bar(states, counts)
plt.ylabel('Number of time steps')
plt.xlabel('State of ion channel');
###Output
_____no_output_____
###Markdown
<!-- Though the system started initially in the Closed ($x=0$) state, over time, it settles into a equilibrium distribution where we can predict on what fraction of time it is Open as a function of the $\mu$ parameters. Before we continue exploring these distributions further, let's first take a look at the this fraction of Open states as a cumulative mean of the state $x$: -->Even though the state is _discrete_--the ion channel can only be either Closed or Open--we can still look at the **mean state** of the system, averaged over some window of time. Since we've coded Closed as $x=0$ and Open as $x=1$, conveniently, the mean of $x$ over some window of time has the interpretation of **fraction of time channel is Open**.Let's also take a look at the fraction of Open states as a cumulative mean of the state $x$. The cumulative mean tells us the fraction of time the channel has spent in the Open state, averaged from the start of the simulation up to each time point. **Run the cell below**.
###Code
# @title Cumulative mean of state
plt.plot(t, np.cumsum(x) / np.arange(1, len(t)+1))
plt.xlabel('time')
plt.ylabel('Cumulative mean of state');
###Output
_____no_output_____
###Markdown
Notice in the plot above that, although the channel started in the Closed ($x=0$) state, it gradually adopts some mean value after some time. This mean value is related to the transition probabilities $\mu_{c2o}$ and $\mu_{o2c}$. Interactive Demo: Varying transition probability values & T Using the interactive demo below, explore the state-switch simulation for different transition probability values of states $\mu_{c2o}$ and $\mu_{o2c}$. Also, try different values for total simulation time length *T*. Does the general shape of the inter-switch interval distribution change or does it stay relatively the same? How does the bar graph of system states change based on these values?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_inter_switch_intervals(c2o = (0,1, .01), o2c = (0, 1, .01), T=(1000,10000, 1000)):
t, x, switch_times = ion_channel_opening(c2o, o2c, T, .1)
inter_switch_intervals = np.diff(switch_times)
#plot inter-switch intervals
plt.hist(inter_switch_intervals)
plt.title('Inter-switch Intervals Distribution')
plt.ylabel('Interval Count')
plt.xlabel('time')
plt.show()
plt.close()
# to_remove explanation
"""
Discussion:
(1) Does the general shape of the inter-switch interval distribution
change or does it stay relatively the same?
(2) How does the bar graph of system states change based on these values?
Answers:
(1) The shape of the distribution remains the same, but larger values of either
c2o or o2c shifts the distribution towards shorter intervals.
(2) If c2o is larger than o2c, then the channel tends to be open a larger
fraction of the time.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Distributional Perspective
###Code
#@title Video 2: State Transitions
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="U6YRhLuRhHg", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
We can run this simulation many times and gather empirical distributions of open/closed states. Alternatively, we can formulate the exact same system probabilistically, keeping track of the probability of being in each state (see diagram in lecture). The same system of transitions can then be formulated using a vector of 2 elements as the state vector and a dynamics matrix $\mathbf{A}$. The result of this formulation is a *state transition matrix*:$\left[ \begin{array}{c} C \\ O \end{array} \right]_{k+1} = \mathbf{A} \left[ \begin{array}{c} C \\ O \end{array} \right]_k = \left[ \begin{array}{cc} 1-\mu_{\text{c2o}} & \mu_{\text{o2c}} \\ \mu_{\text{c2o}} & 1-\mu_{\text{o2c}} \end{array} \right] \left[ \begin{array}{c} C \\ O \end{array} \right]_k$.Each transition probability shown in the matrix is as follows:1. $1-\mu_{\text{c2o}}$, the probability that the closed state remains closed. 2. $\mu_{\text{c2o}}$, the probability that the closed state transitions to the open state.3. $\mu_{\text{o2c}}$, the probability that the open state transitions to the closed state. 4. $1-\mu_{\text{o2c}}$, the probability that the open state remains open. _Notice_ that this system is written as a discrete step in time, and $\mathbf{A}$ describes the transition, mapping the state from step $k$ to step $k+1$. This is different from what we did in the exercises above where $\mathbf{A}$ had described the function from the state to the time derivative of the state. Exercise 2 (2B): Probability PropagationComplete the code below to simulate the propagation of probabilities of closed/open of the ion channel through time. A variable called `x_kp1` (short for, $x$ at timestep $k$ plus 1) should be calculated per each step *k* in the loop. However, you should plot $x$.
###Code
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
###################################################################
## TODO: Insert your code here to compute x_kp1 (x at k plus 1)
raise NotImplementedError("Student exercise: need to implement simulation")
## hint: use np.dot(a, b) function to compute the dot product
## of the transition matrix A and the last state in x
## hint 2: use np.vstack to append the latest state to x
###################################################################
# Compute the state of x at time k+1
x_kp1 = ...
# Stack (append) this new state onto x to keep track of x through time steps
x = ...
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
# x, t = simulate_prob_prop(A, x0, dt, T)
# plot_state_probabilities(t,x)
# to_remove solution
def simulate_prob_prop(A, x0, dt, T):
""" Simulate the propagation of probabilities given the transition matrix A,
with initial state x0, for a duration of T at timestep dt.
Args:
A (ndarray): state transition matrix
x0 (ndarray): state probabilities at time 0
dt (scalar): timestep of the simulation
T (scalar): total duration of the simulation
Returns:
ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
"""
# Initialize variables
t = np.arange(0, T, dt)
x = x0 # x at time t_0
# Step through the system in time
for k in range(len(t)-1):
# Compute the state of x at time k+1
x_kp1 = np.dot(A, x[-1,:])
# Stack (append) this new state onto x to keep track of x through time steps
x = np.vstack((x, x_kp1))
return x, t
# parameters
T = 500 # total Time duration
dt = 0.1 # timestep of our simulation
# same parameters as above
# c2o: closed to open rate
# o2c: open to closed rate
c2o = 0.02
o2c = 0.1
A = np.array([[1 - c2o*dt, o2c*dt],
[c2o*dt, 1 - o2c*dt]])
# initial condition: start as Closed
x0 = np.array([[1, 0]])
# Uncomment this to plot the probabilities
x, t = simulate_prob_prop(A, x0, dt, T)
with plt.xkcd():
plot_state_probabilities(t,x)
###Output
_____no_output_____
###Markdown
Here, we simulated the propagation of probabilities of the ion channel's state changing through time. Using this method is useful in that we can **run the simulation once** and see **how the probabilities propagate throughout time**, rather than re-running and empirically observing the telegraph simulation over and over again. Although the system started initially in the Closed ($x=0$) state, over time, it settles into an equilibrium distribution where we can predict what fraction of time it is Open as a function of the $\mu$ parameters. We can say that the plot above shows this _relaxation towards equilibrium_.Re-calculating our value of the probability of $c2o$ again with this method, we see that this matches the simulation output from the telegraph process!
###Code
print("Probability of state c2o: %.3f"%(c2o / (c2o + o2c)))
x[-1,:]
###Output
_____no_output_____
###Markdown
--- Section 3: Equilibrium of the telegraph process
###Code
#@title Video 3: Continous vs. Discrete Time Formulation
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="csetTTauIh8", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Since we have now modeled the propagation of probabilities by the transition matrix $\mathbf{A}$ in Section 2, let's connect the behavior of the system at equilibrium with the eigendecomposition of $\mathbf{A}$.As introduced in the lecture video, the eigenvalues of $\mathbf{A}$ tell us about the stability of the system, specifically in the directions of the corresponding eigenvectors.
###Code
# compute the eigendecomposition of A
lam, v = np.linalg.eig(A)
# print the 2 eigenvalues
print("Eigenvalues:",lam)
# print the 2 eigenvectors
eigenvector1 = v[:,0]
eigenvector2 = v[:,1]
print("Eigenvector 1:", eigenvector1)
print("Eigenvector 2:", eigenvector2)
###Output
_____no_output_____
###Markdown
Exercise 3 (2C): Finding a stable stateWhich of these eigenvalues corresponds to the **stable** (equilibrium) solution? What is the eigenvector of this eigenvalue? How does that explain the equilibrium solutions in simulation in Section 2 of this tutorial?_hint_: our simulation is written in terms of probabilities, so they must sum to 1. Therefore, you may also want to rescale the elements of the eigenvector such that they also sum to 1. These can then be directly compared with the probabilities of the states in the simulation.
###Code
###################################################################
## Insert your thoughts here
###################################################################
# to_remove explanation
"""
Discussion:
Which of the eigenvalues corresponds to the stable solution?
What is the eigenvector of this eigenvalue?
How does that explain the equilibrium solutions in Section 2?
Recommendation:
Ask the students to work in small groups (of 2 or 3) to discuss these questions.
Answers:
Whichever eigenvalue is 1 is the stable solution. There should be another
eigenvalue that is <1, which means it is decaying and goes away after the
transient period.
The eigenvector corresponding to this eigenvalue is the stable solution.
To see this, we need to normalize this eigenvector so that its 2 elements
sum to one, then we would see that the two numbers correspond to
[P(closed), P(open)] at equilibrium -- hopefully these are exactly the
equilibrium solutions observed in Section 2.
""";
# whichever eigenvalue equals 1 gives the stable solution; normalizing the other eigenvector does not yield meaningful probabilities
print(eigenvector1 / eigenvector1.sum())
print(eigenvector2 / eigenvector2.sum())
###Output
_____no_output_____ |
nb/1.hetionet_computation/3.compute_calibration_bins.ipynb | ###Markdown
Full network reconstruction
###Code
full_prior_root = pathlib.Path('../../data/task1/full_priors/')
full_prior_paths = sorted(full_prior_root.glob('*.tsv.gz'))
# full_prior_paths = [f'full_priors/{metaedge}.tsv.gz' for metaedge in ['AlD', 'G<rG']]
full_calibration_df = pd.DataFrame()
for prior_path in tqdm.tqdm_notebook(full_prior_paths):
metaedge = regex.search('(?<=full_priors/).+(?=.tsv.gz)', str(prior_path)).group()
print(metaedge, flush=True)
metaedge_calibration = pd.DataFrame()
# Compute calibration of XSwap prior
xswap_prior_df = pd.read_csv(prior_path, sep='\t', usecols=['edge', 'xswap_prior'])
xswap_cal_df = compute_single_feature_calibration(xswap_prior_df, 'xswap_prior', 100)
del xswap_prior_df
metaedge_calibration = pd.concat([metaedge_calibration, xswap_cal_df])
del xswap_cal_df
# print('Computed XSwap calibration')
# Compute calibration of scaled degree
scaled_degree_df = pd.read_csv(prior_path, sep='\t', usecols=['edge', 'source_degree', 'target_degree'])
degree_product = scaled_degree_df['source_degree'] * scaled_degree_df['target_degree']
del scaled_degree_df['source_degree'], scaled_degree_df['target_degree']
scaled_degree_df['scaled_degree'] = degree_product / degree_product.max()
del degree_product
scaled_degree_cal_df = compute_single_feature_calibration(scaled_degree_df, 'scaled_degree', 100)
del scaled_degree_df
metaedge_calibration = pd.concat([metaedge_calibration, scaled_degree_cal_df])
del scaled_degree_cal_df
# print('Computed scaled_degree calibration')
# Compute calibration of analytic prior
analytic_prior_df = (
pd.read_csv(prior_path, sep='\t', usecols=['edge', 'source_degree', 'target_degree'])
.assign(analytic_prior = lambda df: xswap.prior.approximate_xswap_prior(df['source_degree'],
df['target_degree'],
df['edge'].sum()))
.drop(['source_degree', 'target_degree'], axis=1)
)
analytic_prior_cal_df = compute_single_feature_calibration(analytic_prior_df, 'analytic_prior', 100)
del analytic_prior_df
metaedge_calibration = pd.concat([metaedge_calibration, analytic_prior_cal_df]).assign(metaedge=metaedge)
del analytic_prior_cal_df
# print('Computed analytic_prior calibration')
full_calibration_df = pd.concat([full_calibration_df, metaedge_calibration])
full_calibration_df.to_csv('../../data/task1/calibration/hetionet_calibration_bins.csv', index=False)
full_calibration_df.head()
(
ggplot(full_calibration_df, aes(x = 'feature_value', y = 'expected_frac', color = 'metaedge'))
+ geom_point()
+ geom_line()
+ geom_abline(color = 'grey', linetype = 'dashed')
+ facet_wrap('feature')
)
###Output
_____no_output_____
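###Markdown
For readers following along without the earlier cells of this notebook, here is a rough, illustrative sketch of what a per-feature calibration helper in the spirit of `compute_single_feature_calibration` could look like: bin the rows by feature value, then record the mean feature value and the empirical edge fraction per bin. The function name, signature, and the exact meaning of the `feature_value`/`expected_frac` columns are assumptions inferred from how the results are used above, not the project's actual implementation.
###Code
# Illustrative sketch only (assumed semantics; the real helper is defined earlier in this notebook).
import pandas as pd
def compute_single_feature_calibration_sketch(df, feature, n_bins=100):
    """Bin df[feature] into equal-count bins and summarize calibration per bin."""
    bins = pd.qcut(df[feature], q=n_bins, duplicates='drop')
    grouped = df.groupby(bins)
    return (
        pd.DataFrame({
            'feature_value': grouped[feature].mean(),  # mean feature (prior) value in the bin
            'expected_frac': grouped['edge'].mean(),   # fraction of rows in the bin that are edges
        })
        .reset_index(drop=True)
        .assign(feature=feature)
    )
###Output
_____no_output_____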
###Markdown
Sampled network to reconstruct unsampled
###Code
sampled_prior_root = pathlib.Path('../../data/task1/sampled_priors/')
sampled_prior_paths = sorted(sampled_prior_root.glob('*.tsv.gz'))
# sampled_prior_paths = [f'sampled_priors/{metaedge}.tsv.gz' for metaedge in ['AlD', 'G<rG']]
sampled_calibration_df = pd.DataFrame()
for prior_path in tqdm.tqdm_notebook(sampled_prior_paths):
metaedge = regex.search('(?<=sampled_priors/).+(?=.tsv.gz)', str(prior_path)).group()
print(metaedge, flush=True)
original_edges = pd.read_csv(f'../../data/task1/full_priors/{metaedge}.tsv.gz',
sep='\t', usecols=['edge'])['edge'].values
metaedge_calibration = pd.DataFrame()
# Compute calibration of XSwap prior (using ORIGINAL edges, not sampled)
xswap_prior_df = (
pd.read_csv(prior_path, sep='\t', usecols=['edge', 'xswap_prior'])
.assign(original_edge = original_edges)
.query('edge == False')
.drop('edge', axis=1)
.rename(columns={'original_edge': 'edge'})
)
xswap_cal_df = compute_single_feature_calibration(xswap_prior_df, 'xswap_prior', 100)
del xswap_prior_df
metaedge_calibration = pd.concat([metaedge_calibration, xswap_cal_df])
del xswap_cal_df
# Compute calibration of scaled degree
scaled_degree_df = (
pd.read_csv(prior_path, sep='\t', usecols=['edge', 'source_degree', 'target_degree'])
.assign(original_edge = original_edges)
.query('edge == False')
.drop('edge', axis=1)
.rename(columns={'original_edge': 'edge'})
)
degree_product = scaled_degree_df['source_degree'] * scaled_degree_df['target_degree']
del scaled_degree_df['source_degree'], scaled_degree_df['target_degree']
scaled_degree_df['scaled_degree'] = degree_product / degree_product.max()
del degree_product
scaled_degree_cal_df = compute_single_feature_calibration(scaled_degree_df, 'scaled_degree', 100)
del scaled_degree_df
metaedge_calibration = pd.concat([metaedge_calibration, scaled_degree_cal_df])
del scaled_degree_cal_df
# Compute calibration of analytic prior
analytic_prior_df = (
pd.read_csv(prior_path, sep='\t', usecols=['edge', 'source_degree', 'target_degree'])
.assign(analytic_prior = lambda df: xswap.prior.approximate_xswap_prior(df['source_degree'],
df['target_degree'],
df['edge'].sum()))
.drop(['source_degree', 'target_degree'], axis=1)
.assign(original_edge = original_edges)
.query('edge == False')
.drop('edge', axis=1)
.rename(columns={'original_edge': 'edge'})
)
analytic_prior_cal_df = compute_single_feature_calibration(analytic_prior_df, 'analytic_prior', 100)
del analytic_prior_df
metaedge_calibration = pd.concat([metaedge_calibration, analytic_prior_cal_df]).assign(metaedge=metaedge)
del analytic_prior_cal_df
sampled_calibration_df = pd.concat([sampled_calibration_df, metaedge_calibration])
sampled_calibration_df.to_csv('../../data/task1/calibration/hetionet_calibration_bins_sampled.csv',
index=False)
sampled_calibration_df.head()
(
ggplot(sampled_calibration_df, aes(x = 'feature_value', y = 'expected_frac', color = 'metaedge'))
+ geom_point()
+ geom_line()
+ geom_abline(color = 'grey', linetype = 'dashed')
+ facet_wrap('feature')
)
###Output
_____no_output_____ |
tools/imetad/MetaD Rates.ipynb | ###Markdown
basic analysis from https://doi.org/10.1021/acs.jpca.5b10667 (Jim and Kelly MetaD rates paper)
###Code
#libraries needed
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.legend_handler import HandlerLine2D
import matplotlib.lines as mlines
from scipy.optimize import curve_fit
from scipy.misc import factorial
from scipy.stats import ks_2samp
from scipy import stats
# the data frame is a 2 column list of numbers from one of my MD scripts (units = sec)
# column one is the 'accelerated' time (esacpe time in MD multipled by alpha)
datain=np.genfromtxt('fu.txt')
data=datain[:,1]*1e9
#print np.size(data)
#data now in "ns for each escape event
min=np.min(data)
max=np.max(data)
bins=10*np.size(data)
#logscale of times
time=np.logspace(np.log10(min),np.log10(max),num=bins)
mu=np.mean(data)
#print time
time_centers = np.r_[0.5 * (time[:-1] + time[1:])]
#print time_centers
#this is because MATLAB works on the bin centers, numpy works on the bin edges
stats.kstest(data,'gamma',args=stats.gamma.fit(data))
def analyticalCDF(times,tau):
return 1-np.exp(-times/tau)
print np.std(data)
print stats.sem(data)
#Make histogram and CDF
hist, bins2=np.histogram(data,bins=time,density=False)
cdf=np.cumsum(hist)*1.0/data.size
#Fit the CDF
taufit, pcov = curve_fit(analyticalCDF,time_centers, cdf,mu)
print "mu (ns)\t\t" ,mu
print "taufit (ns)\t" ,taufit[0]
#lets make some plots
%matplotlib inline
fig = plt.figure(figsize=(6,6))
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.2)
axes = fig.add_subplot(111)
axes.plot(bins2[1:bins],cdf,label='$CDF$')
axes.set_xscale('log')
axes.plot(time_centers,analyticalCDF(time_centers,taufit),label='$analytical\ CDF$')
first_legend = plt.legend(loc=0)
axes.set_xlabel('$log\ time\ (ns)$')
axes.set_ylabel('$P_{n\geq1}$')
plt.show()
#generate random data points from the analytical fit based on taufit
points=1
randdata=np.random.gamma(1,taufit,np.size(data)*points)
#perfrom the KS test to see if the data points from MetaD are statistically
#the same as the data points from the analytical fit
stat,p=ks_2samp(data,randdata)
#data table:
print "mu:" , np.mean(data)
print "mu_sem:", stats.sem(data)
print "sigma:", np.std(data,ddof=1)
print "t_m:", np.median(data)
print "tau:", taufit
print "mu_sigma_ratio:", np.mean(data)/np.std(data,ddof=1)
print "log2mu_median_ratio:", np.log(2)*np.mean(data)/np.median(data)
print "tau_mu_ratio:", taufit/np.mean(data)
print "p-value:" , p
print "ks-stat:" , stat
print "events recorded:" , np.size(data)
###Output
mu: 81.73272479
mu_sem: 4.40660863332
sigma: 124.637713866
t_m: 35.5623
tau: [ 57.59406874]
mu_sigma_ratio: 0.65576238728
log2mu_median_ratio: 1.59305803471
tau_mu_ratio: [ 0.70466351]
p-value: 0.000213515608967
ks-stat: 0.10625
events recorded: 800
###Markdown
Random sampling on data set - I think this will bootstrap the data and then do the KS analysis
###Code
##random sampling on data set
def sampling(data,num_iters,sampsize):
# if sampsize > 100
# sampsize = 100
means=np.array([0.0])
pvals=np.array([0.0])
points=1e4 #number of sampling points for p-val
alpha=0.05
reject=0.0
#for i in range((num_iters)):
while np.size(means) <= num_iters:
smalldata=np.random.choice(data,sampsize,replace=True)
#hist / CDF fit / etc
min=np.min(smalldata)
max=np.max(smalldata)
bins=10*np.size(smalldata)
time=np.logspace(np.log10(min),np.log10(max),num=bins)
mu=np.mean(smalldata)
time_centers = np.r_[0.5 * (time[:-1] + time[1:])]
hist, bins2=np.histogram(smalldata,bins=time,density=False)
cdf=np.cumsum(hist)*1.0/smalldata.size
taufit, pcov = curve_fit(analyticalCDF,time_centers, cdf,mu)
#analysis
randdata=np.random.gamma(1,taufit,np.size(data)*points)
stat,p=ks_2samp(smalldata,randdata)
if p > alpha:
means[means.size-1]=mu
pvals[pvals.size-1]=p
#debugprint p, mu
means.resize(means.size+1)
pvals.resize(pvals.size+1)
if p < alpha:
reject=reject+1
#this is just book keeping to remove the last 0 element
means=means[:(means.size-1)]
pvals=pvals[:(pvals.size-1)]
return means, pvals, reject
#run the sampling
#want to sample all rx*.txt , store in a dictionary (show me how to print the dictionary)
# Easiest way to to do what you want is this (assuming you'll always be doing it in an ipython notebook):
rx_filenames = !ls rx*.txt
rx_filenames = !ls rxdata_600K_100K.txt rxdata_525K_100K.txt rxdata_450_3.txt rxdata_375_1.txt rxdata_300_100K.txt NEW_525K_unbias.dat.dat NEW_600K_unbias.dat.dat NEW_900K_unbias.dat.dat NEW_1200K_unbias.dat.dat rxdata_300_200K.txt rxdata_300_100K.txt rxdata_300_20K.txt final_20K/rxdata_300.txt rxdata_300_5K.txt rxdata_300_1K.txt rxdata_300_.5K.txt rxdata_300_.05K.txt rxdata_300_.025K.txt rxdata_300_.005K.txt
rx_filenames = !ls rxdata_450_2.txt
# If you want to be able to do something similar in a .py file then use os.listdir():
#import os
#rx_filenames = [x for x in os.listdir('.') if x[-4]: == '.txt']
results = {} # Initialize your dictionary of results
for name in rx_filenames:
datain=np.genfromtxt(name)
data=datain[:,1]*1e9
niter=1000 # how many runs
size=.5 # how big of a sample size to take as a percentage of the total set
means,pvals,reject=sampling(data,niter,np.int(size*np.size(data)))
#results[name] = [np.mean(means),stats.sem(means),np.std(means),np.mean(pvals),reject,means,pvals]
results[name] = [np.mean(means),stats.sem(means),np.std(means),np.mean(pvals),reject]
# to get the results for a specific filename just use:
#results[filename]
# it will return a list where index 0 is mu, 1 is sem, 2 is avg-p
# to print the whole dictionary type
results
# this will only work if you don't have any other commands later in the cell that return output. If you want
# it to print before other stuff you'll have to use print (but it won't print as a table if you use "print" unless
# you loop through each item in results with your print statement
#analysis of sampling
%matplotlib inline
fig = plt.figure(figsize=(6,6))
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=.2, hspace=0.2)
xr=np.arange(0,means.size)
axes = fig.add_subplot(211)
axes.plot(xr,means,label='sample means')
axes.set_xlabel('iter')
axes.set_ylabel('sample mean')
axes = fig.add_subplot(212)
axes.plot(xr,pvals,label='sample ps')
axes.set_xlabel('iter')
axes.set_ylabel('sample p')
plt.show()
###Output
_____no_output_____ |
IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__inlined_functions.ipynb | ###Markdown
Tutorial-IllinoisGRMHD: inlined_functions.C Authors: Leo Werneck & Zach Etienne**This module is currently under development** In this tutorial module we explain a series of inline functions that are used by major functions within IllinoisGRMHD. Required and recommended citations:* **(Required)** Etienne, Z. B., Paschalidis, V., Haas R., Mösta P., and Shapiro, S. L. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).* **(Required)** Noble, S. C., Gammie, C. F., McKinney, J. C., Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).* **(Recommended)** Del Zanna, L., Bucciantini N., Londrillo, P. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)). Table of Contents$$\label{toc}$$This module is organized as follows0. [Step 0](src_dir): **Source directory creation**1. [Step 1](introduction): **Introduction**1. [Step 2](pow): **`pow`**1. [Step 3](find_cp_cm): **`find_cp_cm`**1. [Step 4](compute_v02): **`compute_v02`**1. [Step 5](ppeos__c_code): **Polytropic Equations of State** 1. [Step 5.a](ppeos__c_code__prelim): *Preliminary treatment of the input* 1. [Step 5.a.i](ppeos__c_code__prelim__computing_ktab): Determining $\left\{K_{1},K_{2},\ldots,K_{\rm neos}\right\}$ 1. [Step 5.a.ii](ppeos__c_code__prelim__computing_eps_integ_consts): Determining $\left\{C_{0},C_{1},C_{2},\ldots,C_{\rm neos}\right\}$ 1. [Step 5.b](ppeos__c_code__eos_struct_setup) *Setting up the `eos_struct`* 1. [Step 5.c](ppeos__c_code__find_polytropic_k_and_gamma_index) *The `find_polytropic_K_and_Gamma_index()` function* 1. [Step 5.d](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold): *The new `compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold()` function* 1. [Step 5.d.i](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold__case1__rhob_equal_zero): Case 1: $\rho_{b} = 0$ 1. [Step 5.d.ii](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold__case2__single_polytropic_eos): Case 2: Polytropic EOSs 1. [Step 5.e](compute_p_cold__eps_cold): New function: `compute_P_cold__eps_cold()`1. [Step 6](lower_4vector_output_spatial_part): **`lower_4vector_output_spatial_part`**1. [Step 7](impose_speed_limit_output_u0): **`impose_speed_limit_output_u0`**1. [Step 8](enforce_pressure_floor_ceiling): **`enforce_pressure_floor_ceiling`**1. [Step 9](compute_smallba_b2_and_u_i_over_u0_psi4): **`compute_smallba_b2_and_u_i_over_u0_psi4`**1. [Step 11](code_validation): **Code validation**1. [Step 12](latex_pdf_output): **Output this notebook to $\LaTeX$-formatted PDF file** Step 0: Source directory creation \[Back to [top](toc)\]$$\label{src_dir}$$We will now use the [cmdline_helper.py NRPy+ module](Tutorial-Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet.
###Code
# Step 0: Creation of the IllinoisGRMHD source directory
# Step 0a: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step 0b: Load up cmdline_helper and create the directory
import cmdline_helper as cmd
IGM_src_dir_path = os.path.join("..","src")
cmd.mkdir(IGM_src_dir_path)
# Step 0c: Create the output file path
outfile_path__inlined_functions__C = os.path.join(IGM_src_dir_path,"inlined_functions.C")
###Output
_____no_output_____
###Markdown
Step 1: Introduction \[Back to [top](toc)\]$$\label{introduction}$$In this tutorial notebook we explain functions of `IllinoisGRMHD` which are called for various purposes. This means that this notebook does not have a specific "theme". We will cover functions whose purposes vary from a simple optimization when squaring numbers to computing minimum and maximum characteristic speeds at cell interfaces.We have tried our best to keep this tutorial module as independent from the others as possible. When new concepts appear, we offer useful references. The mathematical requirements of each function are also covered in great detail. Step 2: `pow` \[Back to [top](toc)\]$$\label{pow}$$This is an extremely simple function which simply checks whether or not we are trying to square a number before calling C's `pow()` function. This is because in C it is computationally quicker to do `x*x` than to use the function call `pow(x,2)`. Notice that we also use the "function" `SQR()`, which is declared in `IllinoisGRMHD_headers.h` and defined as `#define SQR(x) ( (x) * (x) )`. Step 3: `find_cp_cm` \[Back to [top](toc)\]$$\label{find_cp_cm}$$We will now explain the inlined function `find_cp_cm`. Keep in mind that this function depends on the function `compute_v02`, [which is implemented below](compute_v02). This function is called with the objective of computing the minimum ($-$) and maximum ($+$) characteristic speeds at each cell interface, $c_{\pm}^{r,l}$.We approximate the general GRMHD dispersion relation (eq. 27 of [Gammie & McKinney (2003)](https://arxiv.org/pdf/astro-ph/0301509.pdf)) by the simpler expression$$\omega_{\rm cm}^{2} = \left[v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)\right]k_{\rm cm}^{2}\ ,$$where $\omega_{\rm cm}=-k_{\mu}u^{\mu}$ is the frequency and $k_{\rm cm}^{2} = K_{\mu}K^{\mu}$ the wavenumber of an MHD wave mode in the frame comoving with the fluid, where $K_{\mu}$ is defined as the projection of the wave vector $k^{\nu}$ onto the direction normal to $u^{\nu}$: $K_{\mu} = \left(g_{\mu\nu}+u_{\mu}u_{\nu}\right)k^{\nu}$. $c_{\rm s}$ is the sound speed, and $v_{\rm A}$ is the Alfvén speed, given by$$v_{\rm A} = \sqrt{\frac{b^{2}}{\rho_{b}h + b^{2}}}\ .$$With these definitions, we may then solve the approximate dispersion relation above along direction $i$, noting that in the comoving frame $k_{\mu} = \left(-\omega,k_{j}\delta^{j}_{\ i}\right)$ and the wave (phase) velocity is $c_{\pm} = \left.\omega\middle/\left(k_{j}\delta^{j}_{\ i}\right)\right.$. The dispersion can then be written as a quadratic equation for $c_{\pm}$:$$ac_{\pm}^{2} + bc_{\pm} + c = 0\ ,$$with$$\boxed{\begin{align}a &= \left(1-v_{0}^{2}\right)\left(u^{0}\right)^{2} - v_{0}^{2}g^{00}\ ,\\b &= 2v_{0}^{2}g^{i0} - 2u^{i}u^{0}\left(1-v^{2}_{0}\right)\ ,\\c &= \left(1-v_{0}^{2}\right)\left(u^{i}\right)^{2} - v_{0}^{2}g^{ii}\ ,\\v_{0}^{2} &= v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)\ ,\\c_{\rm s} &= \left.\left[\frac{dP_{\rm cold}}{d\rho_{b}} + \Gamma_{\rm th}\left(\Gamma_{\rm th}-1\right)\epsilon_{\rm th}\right]\middle/h\right.\ ,\\c_{+} &= \max\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ ,\\c_{-} &= \min\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ .\end{align}}$$For the implementation of $v_{0}^{2}$, please see [Step 4 below](compute_v02).
###Code
%%writefile $outfile_path__inlined_functions__C
static inline void find_cp_cm(CCTK_REAL &cplus,CCTK_REAL &cminus,CCTK_REAL v02,CCTK_REAL u0,
CCTK_REAL vi,CCTK_REAL ONE_OVER_LAPSE_SQUARED,CCTK_REAL shifti,CCTK_REAL psim4,CCTK_REAL gupii) {
// This computes phase speeds in the direction given by flux_dirn.
// Note that we replace the full dispersion relation with a simpler
// one, which overestimates the max. speeds by a factor of ~2.
// See full discussion around Eqs. 49 and 50 in
// http://arxiv.org/pdf/astro-ph/0503420.pdf .
// What follows is a complete derivation of the quadratic we solve.
// wcm = (-k_0 u0 - k_x ux)
// kcm^2 = K_{\mu} K^{\mu},
// K_{\mu} K^{\mu} = (g_{\mu a} + u_{\mu} u_a) k^a * g^{\mu b} [ (g_{c b} + u_c u_b) k^c ]
// --> g^{\mu b} (g_{c b} + u_{c} u_{b}) k^c = (\delta^{\mu}_c + u_c u^{\mu} ) k^c
// = (g_{\mu a} + u_{\mu} u_a) k^a * (\delta^{\mu}_c + u_c u^{\mu} ) k^c
// =[(g_{\mu a} + u_{\mu} u_a) \delta^{\mu}_c + (g_{\mu a} + u_{\mu} u_a) u_c u^{\mu} ] k^c k^a
// =[(g_{c a} + u_c u_a) + (u_c u_a - u_a u_c] k^c k^a
// =(g_{c a} + u_c u_a) k^c k^a
// = k_a k^a + u^c u^a k_c k_a
// k^a = g^{\mu a} k_{\mu} = g^{0 a} k_0 + g^{x a} k_x
// k_a k^a = k_0 g^{0 0} k_0 + k_x k_0 g^{0 x} + g^{x 0} k_0 k_x + g^{x x} k_x k_x
// = g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2
// u^c u^a k_c k_a = (u^0 k_0 + u^x k_x) (u^0 k_0 + u^x k_x) = (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2
// (k_0 u0)^2 + 2 k_x ux k_0 u0 + (k_x ux)^2 = v02 [ (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2 + g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2]
// (1-v02) (u^0 k_0 + u^x k_x)^2 = v02 (g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2)
// (1-v02) (u^0 k_0/k_x + u^x)^2 = v02 (g^{00} (k_0/k_x)^2 + 2 g^{x0} k_0/k_x + g^{xx})
// (1-v02) (u^0 X + u^x)^2 = v02 (g^{00} X^2 + 2 g^{x0} X + g^{xx})
// (1-v02) (u0^2 X^2 + 2 ux u0 X + ux^2) = v02 (g^{00} X^2 + 2 g^{x0} X + g^{xx})
// X^2 ( (1-v02) u0^2 - v02 g^{00}) + X (2 ux u0 (1-v02) - 2 v02 g^{x0}) + (1-v02) ux^2 - v02 g^{xx}
// a = (1-v02) u0^2 - v02 g^{00} = (1-v02) u0^2 + v02/lapse^2 <-- VERIFIED
// b = 2 ux u0 (1-v02) - 2 v02 shiftx/lapse^2 <-- VERIFIED, X->-X, because X = -w/k_1, and we are solving for -X.
// c = (1-v02) ux^2 - v02 (gupxx*psim4 - (shiftx/lapse)^2) <-- VERIFIED
// v02 = v_A^2 + c_s^2 (1 - v_A^2)
CCTK_REAL u0_SQUARED=SQR(u0);
###Output
Overwriting ../src/inlined_functions.C
###Markdown
We start by setting$$\boxed{\begin{align}a &= \left(1-v_{0}^{2}\right)\left(u^{0}\right)^{2} - v_{0}^{2}g^{00}\\b &= 2v_{0}^{2}g^{i0} - 2u^{i}u^{0}\left(1-v^{2}_{0}\right)\\c &= \left(1-v_{0}^{2}\right)\left(u^{i}\right)^{2} - v_{0}^{2}g^{ii}\end{align}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
//Find cplus, cminus:
CCTK_REAL a = u0_SQUARED * (1.0-v02) + v02*ONE_OVER_LAPSE_SQUARED;
CCTK_REAL b = 2.0* ( shifti*ONE_OVER_LAPSE_SQUARED * v02 - u0_SQUARED * vi * (1.0-v02) );
CCTK_REAL c = u0_SQUARED*SQR(vi) * (1.0-v02) - v02 * ( psim4*gupii -
SQR(shifti)*ONE_OVER_LAPSE_SQUARED);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we find the minimum ($-$) and maximum ($+$) characteristic speeds$$\boxed{\begin{align}c_{+} &= \max\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ ,\\c_{-} &= \min\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ .\end{align}}$$
###Code
%%writefile -a $IGM_src_dir_path/inlined_functions.C
CCTK_REAL detm = b*b - 4.0*a*c;
//ORIGINAL LINE OF CODE:
//if(detm < 0.0) detm = 0.0;
//New line of code (without the if() statement) has the same effect:
detm = sqrt(0.5*(detm + fabs(detm))); /* Based on very nice suggestion from Roland Haas */
cplus = 0.5*(detm-b)/a;
cminus = -0.5*(detm+b)/a;
if (cplus < cminus) {
CCTK_REAL cp = cminus;
cminus = cplus;
cplus = cp;
}
}
###Output
Appending to ../src/inlined_functions.C
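###Markdown
As a quick, standalone sanity check on the algebra above (this is not part of `IllinoisGRMHD` itself), the following Python sketch evaluates $a$, $b$, $c$ and the roots $c_{\pm}$ for arbitrary, made-up input values, mirroring the C code just written:
###Code
# Illustrative Python transcription of the c_plus/c_minus computation above.
# All numerical inputs in the example call are made up, chosen only to exercise the formulas.
from math import sqrt
def find_cp_cm_sketch(v02, u0, vi, one_over_lapse_squared, shifti, psim4, gupii):
    a = u0**2*(1.0 - v02) + v02*one_over_lapse_squared
    b = 2.0*(shifti*one_over_lapse_squared*v02 - u0**2*vi*(1.0 - v02))
    c = u0**2*vi**2*(1.0 - v02) - v02*(psim4*gupii - shifti**2*one_over_lapse_squared)
    detm = b*b - 4.0*a*c
    detm = sqrt(0.5*(detm + abs(detm)))  # clamps a (slightly) negative discriminant to zero
    cplus  = 0.5*(detm - b)/a
    cminus = -0.5*(detm + b)/a
    return max(cplus, cminus), min(cplus, cminus)
# Flat-space-like example (lapse = 1, zero shift, psi^{-4} = 1, g^{ii} = 1):
print(find_cp_cm_sketch(v02=0.1, u0=1.0, vi=0.05,
                        one_over_lapse_squared=1.0, shifti=0.0, psim4=1.0, gupii=1.0))
###Output
_____no_output_____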
###Markdown
Step 4: `compute_v02` \[Back to [top](toc)\]$$\label{compute_v02}$$This function is used to evaluate $v_{0}^{2}$, a quantity necessary for the computation of the minimum and maximum characteristic speeds at each cell interface, $c_{\pm}^{r,l}$. For more information on this procedure, please see the [implementation of the `find_cp_cm` function in Step 3](find_cp_cm).We start with the sound speed:$$\boxed{c_{\rm s} = \left.\left[\frac{dP_{\rm cold}}{d\rho_{b}} + \Gamma_{\rm th}\left(\Gamma_{\rm th}-1\right)\epsilon_{\rm th}\right]\middle/h\right.}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL Gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
if(U[RHOB]<=0) { v02L=1.0; return; }
/* c_s = sound speed = (dP_c/drho + \Gamma(\Gamma-1) \epsilon_th)/h */
CCTK_REAL c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th-1.0)*eps_th)/(h);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Next we compute the square of the Alfvén speed, $v_{\rm A}$, which is given by$$\boxed{v_{\rm A}^{2} = \frac{b^{2}}{\rho_{b}h + b^{2}}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
/* v_A = Alfven speed = sqrt( b^2/(rho0 h + b^2) ) */
CCTK_REAL v_A_squared = smallb[SMALLB2]/(smallb[SMALLB2] + U[RHOB]*(h));
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, $v_{0}$ is related to the sound speed and the Alfvén speed via$$\boxed{v_{0}^{2} = v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
v02L = v_A_squared + c_s_squared*(1.0-v_A_squared);
}
###Output
Appending to ../src/inlined_functions.C
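###Markdown
The three relations above can be collected into a small Python helper for quick numerical experiments; this is only an illustrative transcription of the formulas (the inputs in the example call are made up), not the C routine `IllinoisGRMHD` compiles:
###Code
# Illustrative sketch of v0^2 = v_A^2 + c_s^2 (1 - v_A^2); example inputs are arbitrary.
def compute_v02_sketch(dPcold_drho, Gamma_th, eps_th, h, smallb2, rhob):
    if rhob <= 0:
        return 1.0  # mirrors the U[RHOB]<=0 early return in the C function
    c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th - 1.0)*eps_th)/h
    v_A_squared = smallb2/(smallb2 + rhob*h)
    return v_A_squared + c_s_squared*(1.0 - v_A_squared)
print(compute_v02_sketch(dPcold_drho=0.1, Gamma_th=5.0/3.0, eps_th=0.01,
                         h=1.2, smallb2=1.0e-3, rhob=1.0))
###Output
_____no_output_____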
###Markdown
Step 5.e: `font_fix__rhob_loop` \[Back to [top](toc)\]$$\label{compute_p_cold__eps_cold}$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
/* Function : font_fix__rhob_loop()
* Authors : Leo Werneck
* Description : Determines rhob using the font fix prescription
* Dependencies: find_polytropic_K_and_Gamma_index()
* : compute_P_cold__eps_cold()
* Reference : Etienne et al. (2011) [https://arxiv.org/pdf/1112.0568.pdf]
*
* Inputs : maxits - maximum number of iterations allowed
* : tol - font fix tolerance
* : W - See eq. (A26)
* : Sf2 - S_{fluid}^{2}, see eq. (A24)
* : Psim6 - This is equal to sqrt(\gamma)
* : sdots - \tilde{S}_{\mu}\tilde{S}^{\mu}
* : BbardotS2 - (\bar{B}^{\mu}S_{\mu})^{2},
* : B2bar - \bar{B}^{2}, see eq. (A28)
* : CONSERVS - Array of conservative variables
* : eos - Struct of EOS parameters
* : rhob_in - Initial value of rhob
* : rhob_out - Output variable
*
* Outputs : rhob_out - Updated value of rhob
* : return value: 0 - Font fix worked
* : return value: 1 - Font fix failed
*/
inline int font_fix__rhob_loop( int maxits, CCTK_REAL tol,
CCTK_REAL W, CCTK_REAL Sf2, CCTK_REAL Psim6, CCTK_REAL sdots, CCTK_REAL BbardotS2, CCTK_REAL B2bar,
CCTK_REAL *CONSERVS,
eos_struct eos, CCTK_REAL rhob_in, CCTK_REAL &rhob_out ) {
/* Declare basic variables */
bool fontcheck=true;
int itcount = 0, j0, j1;
CCTK_REAL W0, Sf20, rhob0, rhob1, h, P_cold, eps_cold;
//////////////////////
// OUTER LOOP START //
//////////////////////
while(fontcheck && itcount < maxits) {
/* Set variables to their input values */
itcount++;
W0 = W;
Sf20 = Sf2;
rhob1 = rhob_in;
/* Based on rhob_in (i.e. rhob1), determine the
* polytropic index j1
*/
j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
//////////////////////
// INNER LOOP START //
//////////////////////
do {
/* Set rhob0/j0 to be equal to the rhob/j used
* in the previous iteration, i.e. rhob1/j1.
*/
rhob0 = rhob1;
j0 = j1;
/* Compute h using h_cold and our polytropic EOS
* .------------------------------------------.
* | h = h_cold = 1 + eps_cold + P_cold/rhob. |
* .------------------------------------------.
*/
compute_P_cold__eps_cold(eos,rhob0, P_cold, eps_cold);
h = 1.0 + eps_cold + P_cold/rhob0;
/* Update rhob using eq. (A62) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .---------------------------------------------------------------------------.
* | rhob = rho_star * Psi^{-6} / sqrt( 1 + S_fluid^{2}/( (rho_star*h)^{2} ) ) |
* .---------------------------------------------------------------------------.
*/
rhob1 = CONSERVS[RHOSTAR]*Psim6/sqrt(1.0+Sf20/SQR(CONSERVS[RHOSTAR]*h));
/* Update j1 */
j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
} while( fabs(rhob1-rhob0) > rhob1*tol || j1 != j0);
//////////////////////
// INNER LOOP END //
//////////////////////
/* Output the last value of rhob */
rhob_out = rhob1;
/* Perform physical checks on the variables
* and output the last value of h obtained
*/
compute_P_cold__eps_cold(eos,rhob_out, P_cold, eps_cold);
h = 1.0 + eps_cold + P_cold/rhob_out;
/* Set W based on eq. (A60) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .-------------------------------------------------------.
* | W = psi^{-6} * sqrt( S_fluid^{2} + (rho_star*h)^{2} ) |
* .-------------------------------------------------------.
*/
W = sqrt( Sf20 + SQR(CONSERVS[RHOSTAR]*h))*Psim6;
/* Then update S_{fluid}^{2} using eq. (A61) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .---------------------------------------------------------------------------.
* | S_fluid^{2} = ( W^{2}*S^{2} + (B.S)^2*(B^{2} + 2W) )/( ( W + B^{2} )^{2} )|
* .---------------------------------------------------------------------------.
*/
Sf2 = (SQR(W)*sdots + BbardotS2*(B2bar + 2.0*W))/SQR(W+B2bar);
if ( fabs(W-W0) < W*tol && fabs(Sf20-Sf2) < Sf2*tol) fontcheck=false;
}
//////////////////////
// OUTER LOOP END //
//////////////////////
/* If the code converged before the max
* number of iterations were exceeded,
* return 0, otherwise return 1.
*/
if(fontcheck || itcount >= maxits) {
return 1;
}
else {
return 0;
}
}
###Output
Appending to ../src/inlined_functions.C
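###Markdown
To make the structure of the nested loops above easier to follow, here is a stripped-down Python sketch of just the inner fixed-point iteration, specialized (as an assumption, for illustration only) to a single polytrope $P_{\rm cold}=K\rho_b^{\Gamma}$, $\epsilon_{\rm cold}=P_{\rm cold}/\left[\rho_b\left(\Gamma-1\right)\right]$, so that no polytropic-index bookkeeping is needed; all numbers in the example call are made up:
###Code
# Sketch of the inner rhob fixed-point loop above, assuming a single polytrope (illustration only).
def font_fix_rhob_inner_sketch(rho_star, Sf2, Psim6, K, Gamma,
                               rhob_guess, tol=1e-15, maxits=300):
    rhob1 = rhob_guess
    for _ in range(maxits):
        rhob0    = rhob1
        P_cold   = K*rhob0**Gamma
        eps_cold = P_cold/(rhob0*(Gamma - 1.0))
        h        = 1.0 + eps_cold + P_cold/rhob0
        # Eq. (A62) of Etienne et al. (2011):
        # rhob = rho_star * psi^{-6} / sqrt( 1 + S_fluid^2 / (rho_star*h)^2 )
        rhob1 = rho_star*Psim6/(1.0 + Sf2/(rho_star*h)**2)**0.5
        if abs(rhob1 - rhob0) <= rhob1*tol:
            break
    return rhob1
# Example call with arbitrary, made-up inputs:
print(font_fix_rhob_inner_sketch(rho_star=1.0, Sf2=0.01, Psim6=1.0,
                                 K=100.0, Gamma=2.0, rhob_guess=1.0))
###Output
_____no_output_____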
###Markdown
Step 6: `lower_4vector_output_spatial_part` \[Back to [top](toc)\]$$\label{lower_4vector_output_spatial_part}$$This function is used to lower the indices of the spatial components of 4-vectors, $b^{\mu}$. Consider$$\begin{align}b_{i} &= g_{i\mu}b^{\mu} \\ &= g_{i0}b^{0} + g_{ij}b^{j} \\ &= \left(\gamma_{ij}\beta^{j}\right)b^{0} + \gamma_{ij}b^{j} \\ &= \gamma_{ij}\left(b^{j} + \beta^{j}b^{0}\right)\ ,\end{align}$$or, using the conformal metric and each component separately$$\boxed{\begin{align}b_{x} &= \psi^{4}\left[\bar{\gamma}_{xx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{xy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{xz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\\b_{y} &= \psi^{4}\left[\bar{\gamma}_{yx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{yy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{yz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\\b_{z} &= \psi^{4}\left[\bar{\gamma}_{zx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{zy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{zz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\end{align}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// b_x = g_{\mu x} b^{\mu}
// = g_{t x} b^t + g_{i x} b^i
// = b^t gamma_{xj} beta^j + gamma_{ix} b^i
// = gamma_{xj} (b^j + beta^j b^t)
static inline void lower_4vector_output_spatial_part(CCTK_REAL psi4,CCTK_REAL *METRIC,CCTK_REAL *smallb, CCTK_REAL *smallb_lower) {
smallb_lower[SMALLBX] = psi4*( METRIC[GXX]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GXY]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GXZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
smallb_lower[SMALLBY] = psi4*( METRIC[GXY]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GYY]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GYZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
smallb_lower[SMALLBZ] = psi4*( METRIC[GXZ]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GYZ]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GZZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
}
###Output
Appending to ../src/inlined_functions.C
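###Markdown
The same index lowering can be written compactly with NumPy, which is a convenient way to sanity-check the hand-expanded C code above; the conformal metric, shift and $b^{\mu}$ values below are arbitrary and purely illustrative:
###Code
# Illustrative NumPy check of b_i = psi^4 * gammabar_{ij} (b^j + beta^j b^0).
import numpy as np
def lower_smallb_spatial_sketch(psi4, gammabar, beta, smallb_t, smallb_up):
    # gammabar: 3x3 conformal metric; beta: shift vector; smallb_up: spatial b^i; smallb_t: b^0
    return psi4*(gammabar @ (smallb_up + smallb_t*beta))
gammabar = np.array([[1.0, 0.1, 0.0],
                     [0.1, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
print(lower_smallb_spatial_sketch(psi4=1.2, gammabar=gammabar,
                                  beta=np.array([0.01, 0.0, 0.0]),
                                  smallb_t=0.5, smallb_up=np.array([0.1, 0.2, 0.3])))
###Output
_____no_output_____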
###Markdown
Step 7: `impose_speed_limit_output_u0` \[Back to [top](toc)\]$$\label{impose_speed_limit_output_u0}$$We now call upon the `impose_speed_limit_output_u0()` function inside the `inlined_functions.C` code file of `IllinoisGRMHD`. The basic algorithm performed by this function is summarized here. We start by evaluating the quantity$$\begin{align}{\rm one\_minus\_one\_over\_alpha\_u0\_squared} \equiv A &= \gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right)\\&= \frac{\gamma_{ij}}{\alpha^{2}}\left[\frac{\gamma^{ik}u_{k}}{u^{0}} - \beta^{i} + \beta^{i}\right]\left[\frac{\gamma^{j\ell}u_{\ell}}{u^{0}} - \beta^{j} + \beta^{j}\right]\\&=\frac{\gamma^{ij}u_{i}u_{j}}{\left(\alpha u^{0}\right)^{2}}\\&=\frac{\left(\alpha u^{0}\right)^{2}-1}{\left(\alpha u^{0}\right)^{2}}\\&=1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}\ \\\implies \boxed{A = 1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}}\ ,\end{align}$$where when going from line 1 to 2 and from line 3 to 4 we have used eqs. (53) and (56) from [Duez *et al.*](https://arxiv.org/pdf/astro-ph/0503420.pdf), respectively. Keep in mind that the equation we are going to implement below is$$\boxed{{\rm one\_minus\_one\_over\_alpha\_u0\_squared} = \gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right)}\ ,$$but it is important to know that this equation also equals $A$ above.
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void impose_speed_limit_output_u0(CCTK_REAL *METRIC,CCTK_REAL *U,CCTK_REAL psi4,CCTK_REAL ONE_OVER_LAPSE,output_stats &stats, CCTK_REAL &u0_out) {
#ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
DECLARE_CCTK_PARAMETERS;
#endif
// Derivation of first equation:
// \gamma_{ij} (v^i + \beta^i)(v^j + \beta^j)/(\alpha)^2
// = \gamma_{ij} 1/(u^0)^2 ( \gamma^{ik} u_k \gamma^{jl} u_l /(\alpha)^2 <- Using Eq. 53 of arXiv:astro-ph/0503420
// = 1/(u^0 \alpha)^2 u_j u_l \gamma^{jl} <- Since \gamma_{ij} \gamma^{ik} = \delta^k_j
// = 1/(u^0 \alpha)^2 ( (u^0 \alpha)^2 - 1 ) <- Using Eq. 56 of arXiv:astro-ph/0503420
// = 1 - 1/(u^0 \alpha)^2 <= 1
CCTK_REAL one_minus_one_over_alpha_u0_squared = psi4*(METRIC[GXX]* SQR(U[VX] + METRIC[SHIFTX]) +
2.0*METRIC[GXY]*(U[VX] + METRIC[SHIFTX])*(U[VY] + METRIC[SHIFTY]) +
2.0*METRIC[GXZ]*(U[VX] + METRIC[SHIFTX])*(U[VZ] + METRIC[SHIFTZ]) +
METRIC[GYY]* SQR(U[VY] + METRIC[SHIFTY]) +
2.0*METRIC[GYZ]*(U[VY] + METRIC[SHIFTY])*(U[VZ] + METRIC[SHIFTZ]) +
METRIC[GZZ]* SQR(U[VZ] + METRIC[SHIFTZ]) )*SQR(ONE_OVER_LAPSE);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we construct the "speed limit quantity"$${\rm ONE\_MINUS\_ONE\_OVER\_GAMMA\_SPEED\_LIMIT\_SQUARED} \equiv B = 1-\frac{1}{\gamma^{2}_{\rm speed\ limit}}\ .$$If $A > B$, then we construct the correction factor $C\equiv \sqrt{B/A}$, and adjust the velocities using$$\boxed{v^{i} \to \left(v^{i}+\beta^{i}\right)C - \beta^{i}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
/*** Limit velocity to GAMMA_SPEED_LIMIT ***/
const CCTK_REAL ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED = 1.0-1.0/SQR(GAMMA_SPEED_LIMIT);
if(one_minus_one_over_alpha_u0_squared > ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED) {
CCTK_REAL correction_fac = sqrt(ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED/one_minus_one_over_alpha_u0_squared);
U[VX] = (U[VX] + METRIC[SHIFTX])*correction_fac-METRIC[SHIFTX];
U[VY] = (U[VY] + METRIC[SHIFTY])*correction_fac-METRIC[SHIFTY];
U[VZ] = (U[VZ] + METRIC[SHIFTZ])*correction_fac-METRIC[SHIFTZ];
one_minus_one_over_alpha_u0_squared=ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED;
stats.failure_checker+=1000;
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, since $A$ is evaluated using the first line above, namely$$\gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right) = A = 1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}\ ,$$we can then compute $u^{0}$ by simply doing$$\boxed{u^{0} = \frac{1}{\alpha\sqrt{1-A}}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// A = 1.0-one_minus_one_over_alpha_u0_squared = 1-(1-1/(al u0)^2) = 1/(al u0)^2
// 1/sqrt(A) = al u0
//CCTK_REAL alpha_u0_minus_one = 1.0/sqrt(1.0-one_minus_one_over_alpha_u0_squared)-1.0;
//u0_out = (alpha_u0_minus_one + 1.0)*ONE_OVER_LAPSE;
CCTK_REAL alpha_u0 = 1.0/sqrt(1.0-one_minus_one_over_alpha_u0_squared);
if(std::isnan(alpha_u0*ONE_OVER_LAPSE)) printf("BAD FOUND NAN U0 CALC: %.15e %.15e %.15e | %.15e %.15e\n",alpha_u0,ONE_OVER_LAPSE,one_minus_one_over_alpha_u0_squared,psi4, U[VX]);
u0_out = alpha_u0*ONE_OVER_LAPSE;
}
// The two lines of code below are written to reduce roundoff error and were in the above function. I don't think they reduce error.
// one_over_alpha_u0 = sqrt(1.0-one_minus_one_over_alpha_u0_squared);
/* Proof of following line: */
/* [ 1-1/(alphau0)^2 ] / [ 1/(alphau0) (1 + 1/(alphau0)) ] */
/* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ 1/(alphau0) + 1/(alphau0)^2 ] */
/* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ (alphau0 + 1)/(alphau0)^2 ] */
/* = [ (alphau0)^2 - 1) ] / [ (alphau0 + 1) ] */
/* [ (alphau0 + 1) (alphau0 - 1) ] / [ (alphau0 + 1) ] */
/* = alphau0 - 1 */
//alpha_u0_minus_one = one_minus_one_over_alpha_u0_squared/one_over_alpha_u0/(1.0+one_over_alpha_u0);
//u0_out = (alpha_u0_minus_one+1.0)*ONE_OVER_LAPSE;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 8: `enforce_pressure_floor_ceiling` \[Back to [top](toc)\]$$\label{enforce_pressure_floor_ceiling}$$After the Newton-Raphson solver has successfully found a set of primitives, the primitives are checked for physicality, and if they are not in the physical range, they are minimally modified until they return to the physical range. First, if the velocity is found to be superluminal, the speed is reduced to `IllinoisGRMHD`’s default Lorentz factor limit, a procedure which we already explained above when we discussed the `impose_speed_limit_output_u0` function. Next, `IllinoisGRMHD` does not include any cooling mechanism, which means that for evolutions adopting a $\Gamma$-law equation of state, the pressure should not physically drop below $P_{\rm cold}$. So a pressure floor of $0.9P_{\rm cold}$ is imposed. Increasing this floor to exactly $P_{\rm cold}$ results in large central density drifts in TOV star evolutions. **NOTE**: Please keep in mind that the floor and ceiling values presented here were found ***empirically***.
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void enforce_pressure_floor_ceiling(output_stats &stats,CCTK_REAL kpoly,CCTK_REAL P_cold,CCTK_REAL Psi6,const CCTK_REAL Psi6threshold,CCTK_REAL rho_b,const CCTK_REAL rhobatm, CCTK_REAL &P) {
CCTK_REAL P_min=0.9*P_cold;
if(P<P_min) {
stats.failure_checker+=10;
P=P_min;
}
//MAX(P,P_min);
//if(P < P_min) P=1.0*P_cold;
/* OLD: Discarded because lower limit is unphysical.
if(P <= 0.5*kpoly*P_cold) {
P=0.5*kpoly*P_cold;
}
*/
###Output
Appending to ../src/inlined_functions.C
###Markdown
Simulations can crash in the other extreme, if $P/P_{\rm cold}$ becomes too large. This typically only happens in very low density regions or inside black holes. So at densities $\rho_{b}<100\rho_{\rm atm}$ or deep inside black hole horizons, a ceiling on $P$ of $100P_{\rm cold}$ is enforced (see Appendix A of [Etienne *et al.* (2012)](https://arxiv.org/abs/1112.0568) for more details).We also introduce a parameter, $\psi^{6}_{\rm threshold}$, which determines whether the region under consideration is deep inside the BH horizon or not. For regions deep inside the BH horizon, defined by $\sqrt{\gamma} = \psi^{6} > \psi^{6}_{\rm threshold}$, the primary goal is to keep the evolution stable and prevent inaccurate data from leaking out of the BH horizon. It was determined that in this situation, a better ceiling on $P$ is $10^{5}P_{\rm cold}$.
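The following standalone snippet (an illustrative sketch only, with made-up numbers and plain `double`; the actual `IllinoisGRMHD` routine is split between the previous and the next code cells) exercises both the $0.9P_{\rm cold}$ floor and the $100P_{\rm cold}$ ceiling described above, and shows how `failure_checker` accumulates a different power of ten for each type of correction (10 for the floor, 100 for the ceiling, 1000 for the speed limit).
```c
// Sketch of the pressure floor/ceiling logic described above, with toy numbers.
// Not the IllinoisGRMHD routine itself; illustrative values only.
#include <stdio.h>

int main(void) {
  const double P_cold = 1.0e-4, rhobatm = 1.0e-10, Psi6threshold = 60.0;
  int failure_checker = 0;

  /* Case 1: pressure below the 0.9*P_cold floor */
  double P = 1.0e-5;
  if(P < 0.9*P_cold) { P = 0.9*P_cold; failure_checker += 10; }
  printf("after floor  : P = %.3e (floor   = %.3e)\n", P, 0.9*P_cold);

  /* Case 2: low-density region (rho_b < 100*rhobatm), pressure above the ceiling */
  double rho_b = 50.0*rhobatm, Psi6 = 1.0;
  P = 5.0e-2;
  double P_max = 100.0*P_cold;
  if(Psi6 > Psi6threshold) P_max = 1e5*P_cold;   /* deep inside a BH horizon */
  if((rho_b < 100.0*rhobatm || Psi6 > Psi6threshold) && P > P_max) { P = P_max; failure_checker += 100; }
  printf("after ceiling: P = %.3e (ceiling = %.3e)\n", P, P_max);

  /* Each type of fix adds a different power of ten, so a single integer
     records which corrections fired: here 10 + 100 = 110.               */
  printf("failure_checker = %d\n", failure_checker);
  return 0;
}
```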
###Code
%%writefile -a $outfile_path__inlined_functions__C
//CCTK_REAL P_max = 10.0*P_cold;
CCTK_REAL P_max = 100.0*P_cold;
if(Psi6 > Psi6threshold) P_max = 1e5*P_cold; // <-- better than 10.
if((rho_b < 100.0*rhobatm || Psi6 > Psi6threshold) && P>P_max) {
P=P_max;
stats.failure_checker+=100;
}
/*
CCTK_REAL rho_horiz_cap = 1000.0*rhobatm;
//New density damping mechanism inside the horizon
if(Psi6 > Psi6threshold && rho_b>rho_horiz_cap) {
CCTK_REAL six_phi=log(Psi6);
CCTK_REAL six_phithreshold=log(Psi6threshold);
CCTK_REAL Psi6max_approx=350000;
rho_b = rho_horiz_cap+(rho_b-rho_horiz_cap)*exp(-200.0*SQR((six_phi-six_phithreshold)/log(Psi6max_approx)));
}
*/
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 9: `compute_smallba_b2_and_u_i_over_u0_psi4` \[Back to [top](toc)\]$$\label{compute_smallba_b2_and_u_i_over_u0_psi4}$$In this inlined function we will compute quantities related to the magnetic field measured in the comoving fluid frame, $b^{\mu}$.We will need the following identities$$\begin{align}v^{i} &= \frac{u^{i}}{u^{0}}\ ,\\B^{0}_{(u)} &= \frac{u_{i}B^{i}}{\alpha}\ ,\\B^{i}_{(u)} &= \frac{1}{u^{0}}\left(\frac{B^{i}}{\alpha} + u^{i}B^{0}_{(u)}\right)\ ,\\b^{\mu} &= \frac{B^{\mu}_{(u)}}{\sqrt{4\pi}}\ .\end{align}$$We start by setting the relation$$b^{0} = \frac{u_{i}B^{i}}{\alpha\sqrt{4\pi}} \implies \boxed{\alpha\sqrt{4\pi}b^{0} = u_{i}B^{i}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void compute_smallba_b2_and_u_i_over_u0_psi4(CCTK_REAL *METRIC,CCTK_REAL *METRIC_LAP_PSI4,CCTK_REAL *U,CCTK_REAL u0L,CCTK_REAL ONE_OVER_LAPSE_SQRT_4PI,
CCTK_REAL &u_x_over_u0_psi4,CCTK_REAL &u_y_over_u0_psi4,CCTK_REAL &u_z_over_u0_psi4,CCTK_REAL *smallb) {
// NOW COMPUTE b^{\mu} and b^2 = b^{\mu} b^{\nu} g_{\mu \nu}
CCTK_REAL ONE_OVER_U0 = 1.0/u0L;
CCTK_REAL shiftx_plus_vx = (METRIC[SHIFTX]+U[VX]);
CCTK_REAL shifty_plus_vy = (METRIC[SHIFTY]+U[VY]);
CCTK_REAL shiftz_plus_vz = (METRIC[SHIFTZ]+U[VZ]);
// Eq. 56 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// u_i = gamma_{ij} u^0 (v^j + beta^j), gamma_{ij} is the physical metric, and gamma_{ij} = Psi4 * METRIC[Gij], since METRIC[Gij] is the conformal metric.
u_x_over_u0_psi4 = METRIC[GXX]*shiftx_plus_vx + METRIC[GXY]*shifty_plus_vy + METRIC[GXZ]*shiftz_plus_vz;
u_y_over_u0_psi4 = METRIC[GXY]*shiftx_plus_vx + METRIC[GYY]*shifty_plus_vy + METRIC[GYZ]*shiftz_plus_vz;
u_z_over_u0_psi4 = METRIC[GXZ]*shiftx_plus_vx + METRIC[GYZ]*shifty_plus_vy + METRIC[GZZ]*shiftz_plus_vz;
// Eqs. 23 and 31 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// Compute alpha sqrt(4 pi) b^t = u_i B^i
CCTK_REAL alpha_sqrt_4pi_bt = ( u_x_over_u0_psi4*U[BX_CENTER] + u_y_over_u0_psi4*U[BY_CENTER] + u_z_over_u0_psi4*U[BZ_CENTER] ) * METRIC_LAP_PSI4[PSI4]*u0L;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we compute$$\begin{align}b^{i} &= \frac{B^{i}_{(u)}}{\sqrt{4\pi}}\\ &= \frac{1}{u^{0}\sqrt{4\pi}}\left(\frac{B^{i}}{\alpha} + B^{0}_{(u)}u^{i}\right)\\ &= \frac{1}{u^{0}\sqrt{4\pi}}\left(\frac{B^{i}}{\alpha} + \sqrt{4\pi}b^{0}u^{i}\right)\\ &= \frac{1}{\alpha\sqrt{4\pi}}\left(\frac{B^{i}}{u^{0}} + \alpha\sqrt{4\pi}b^{0}\frac{u^{i}}{u^{0}}\right)\\\implies &\boxed{b^{i} = \frac{1}{\alpha\sqrt{4\pi}}\left(\frac{B^{i}}{u^{0}} + \alpha\sqrt{4\pi}b^{0}v^{i}\right)}\ .\end{align}$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// Eq. 24 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// b^i = B^i_u / sqrt(4 pi)
// b^i = ( B^i/alpha + B^0_u u^i ) / ( u^0 sqrt(4 pi) )
// b^i = ( B^i/alpha + sqrt(4 pi) b^t u^i ) / ( u^0 sqrt(4 pi) )
// b^i = ( B^i + alpha sqrt(4 pi) b^t u^i ) / ( alpha u^0 sqrt(4 pi) )
// b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^t u^i/u^0 ) / ( alpha sqrt(4 pi) )
// b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^t v^i ) / ( alpha sqrt(4 pi) )
smallb[SMALLBX] = (U[BX_CENTER]*ONE_OVER_U0 + U[VX]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
smallb[SMALLBY] = (U[BY_CENTER]*ONE_OVER_U0 + U[VY]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
smallb[SMALLBZ] = (U[BZ_CENTER]*ONE_OVER_U0 + U[VZ]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
// Eq. 23 in http://arxiv.org/pdf/astro-ph/0503420.pdf, with alpha sqrt (4 pi) b^2 = u_i B^i already computed above
smallb[SMALLBT] = alpha_sqrt_4pi_bt * ONE_OVER_LAPSE_SQRT_4PI;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, we compute$$\begin{align}b^{2} &= g_{\mu\nu}b^{\mu}b^{\nu}\\ &= g_{00}\left(b^{0}\right)^{2} + g_{ij}b^{i}b^{j} + 2g_{0i}b^{0}b^{i}\\ &= \left(-\alpha^{2} + \gamma_{ij}\beta^{i}\beta^{j}\right)\left(b^{0}\right)^{2} + \gamma_{ij}b^{i}b^{j} + 2b^{0}\gamma_{ij}\beta^{j}b^{i}\\ &= -\left(\alpha b^{0}\right)^{2} + \gamma_{ij}\left[b^{i}b^{j} + 2b^{0}b^{i}\beta^{j} + \left(b^{0}\right)^{2}\beta^{i}\beta^{j}\right]\\\implies &\boxed{b^{2} = -\left(\alpha b^{0}\right)^{2} + \gamma_{ij}\left(b^{i} + b^{0}\beta^{i}\right)\left(b^{j} + b^{0}\beta^{j}\right)}\end{align}$$
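Before appending the implementation, here is a standalone flat-space consistency check (a sketch only, not `IllinoisGRMHD` code: $\alpha=1$, $\beta^{i}=0$, $\gamma_{ij}=\delta_{ij}$, plain `double`) that builds $b^{0}$, $b^{i}$, and $b^{2}$ from sample $v^{i}$ and $B^{i}$ using the boxed formulas of this step, and compares $b^{2}$ with the expression $\left[B^{2}/W^{2}+\left(v_{i}B^{i}\right)^{2}\right]/(4\pi)$ (with $W$ the Lorentz factor), which follows from the identities above by direct substitution in the flat-space limit.
```c
// Flat-space consistency check of b^0, b^i and b^2 (a sketch, not IGM code):
// alpha = 1, beta^i = 0, gamma_ij = delta_ij, psi^4 = 1, plain double.
// In this limit u^0 = W and u_i = W v_i, and substituting the identities above
// gives b^2 = ( B^2/W^2 + (v.B)^2 )/(4 pi).
#include <stdio.h>
#include <math.h>

#define SQR(x) ((x)*(x))

int main(void) {
  const double FOUR_PI = 4.0*acos(-1.0);
  const double v[3] = {0.3,-0.2,0.1}, B[3] = {0.5,0.4,-0.6};  /* sample v^i, B^i */

  double v2    = SQR(v[0])+SQR(v[1])+SQR(v[2]);
  double W     = 1.0/sqrt(1.0-v2);                  /* u^0 in flat space */
  double vdotB = v[0]*B[0]+v[1]*B[1]+v[2]*B[2];
  double B2    = SQR(B[0])+SQR(B[1])+SQR(B[2]);

  /* alpha sqrt(4 pi) b^0 = u_i B^i = W (v.B)  =>  b^0 = W (v.B)/sqrt(4 pi) */
  double alpha_sqrt_4pi_bt = W*vdotB;
  double bt = alpha_sqrt_4pi_bt/sqrt(FOUR_PI);

  /* b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^0 v^i )/( alpha sqrt(4 pi) ) */
  double b[3], b_spatial2 = 0.0;
  for(int i=0;i<3;i++) {
    b[i] = (B[i]/W + v[i]*alpha_sqrt_4pi_bt)/sqrt(FOUR_PI);
    b_spatial2 += SQR(b[i]);                        /* gamma_ij b^i b^j (flat space) */
  }

  /* b^2 = -(alpha b^0)^2 + gamma_ij (b^i + b^0 beta^i)(b^j + b^0 beta^j), beta^i = 0 here */
  double b2_boxed = -SQR(bt) + b_spatial2;
  double b2_check = (B2/SQR(W) + SQR(vdotB))/FOUR_PI;

  printf("b^2 from the boxed formulas   = %.15e\n", b2_boxed);
  printf("b^2 from the flat-space check = %.15e\n", b2_check);
  return 0;
}
```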
###Code
%%writefile -a $outfile_path__inlined_functions__C
// b^2 = g_{\mu \nu} b^{\mu} b^{\nu}
// = gtt bt^2 + gxx bx^2 + gyy by^2 + gzz bz^2 + 2 (gtx bt bx + gty bt by + gtz bt bz + gxy bx by + gxz bx bz + gyz by bz)
// = (-al^2 + gamma_{ij} betai betaj) bt^2 + b^i b^j gamma_{ij} + 2 g_{t i} b^t b^i
// = - (alpha b^t)^2 + (b^t)^2 gamma_{ij} beta^i beta^j + b^i b^j gamma_{ij} + 2 b^t g_{t i} b^i
// = - (alpha b^t)^2 + (b^t)^2 gamma_{ij} beta^i beta^j + b^i b^j gamma_{ij} + 2 b^t (gamma_{ij} beta^j) b^i
// = - (alpha b^t)^2 + gamma_{ij} ((b^t)^2 beta^i beta^j + b^i b^j + 2 b^t beta^j b^i)
// = - (alpha b^t)^2 + gamma_{ij} ((b^t)^2 beta^i beta^j + 2 b^t beta^j b^i + b^i b^j)
// = - (alpha b^t)^2 + gamma_{ij} (b^i + b^t beta^i) (b^j + b^t beta^j)
CCTK_REAL bx_plus_shiftx_bt = smallb[SMALLBX]+METRIC[SHIFTX]*smallb[SMALLBT];
CCTK_REAL by_plus_shifty_bt = smallb[SMALLBY]+METRIC[SHIFTY]*smallb[SMALLBT];
CCTK_REAL bz_plus_shiftz_bt = smallb[SMALLBZ]+METRIC[SHIFTZ]*smallb[SMALLBT];
smallb[SMALLB2] = -SQR(METRIC_LAP_PSI4[LAPSE]*smallb[SMALLBT]) +
( METRIC[GXX]*SQR(bx_plus_shiftx_bt) + METRIC[GYY]*SQR(by_plus_shifty_bt) + METRIC[GZZ]*SQR(bz_plus_shiftz_bt) +
2.0*( METRIC[GXY]*(bx_plus_shiftx_bt)*(by_plus_shifty_bt) +
METRIC[GXZ]*(bx_plus_shiftx_bt)*(bz_plus_shiftz_bt) +
METRIC[GYZ]*(by_plus_shifty_bt)*(bz_plus_shiftz_bt) ) ) * METRIC_LAP_PSI4[PSI4]; // mult by psi4 because METRIC[GIJ] is the conformal metric.
/***********************************************************/
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 10: Code validation \[Back to [top](toc)\]$$\label{code_validation}$$First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
###Code
# Verify if the code generated by this tutorial module
# matches the original IllinoisGRMHD source code
# First download the original IllinoisGRMHD source code
import urllib
# In Python 3, urlopen lives in urllib.request, which a bare "import urllib"
# does not necessarily expose; import it explicitly when available.
try:
    import urllib.request
except ImportError:
    pass # Python 2: urllib.urlopen() is available directly
from os import path
original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/inlined_functions.C"
original_IGM_file_name = "inlined_functions-original.C"
original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
try:
original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
# If all else fails, hope wget does the job
!wget -O $original_IGM_file_path $original_IGM_file_url
# Perform validation
Validation__inlined_functions__C = !diff $original_IGM_file_path $outfile_path__inlined_functions__C
if Validation__inlined_functions__C == []:
# If the validation passes, we do not need to store the original IGM source code file
!rm $original_IGM_file_path
print("Validation test for inlined_functions.C: PASSED!")
else:
# If the validation fails, we keep the original IGM source code file
print("Validation test for inlined_functions.C: FAILED!")
# We also print out the difference between the code generated
# in this tutorial module and the original IGM source code
print("Diff:")
for diff_line in Validation__inlined_functions__C:
print(diff_line)
###Output
Validation test for inlined_functions.C: FAILED!
Diff:
1,4c1
< static inline CCTK_REAL fasterpow_ppm_reconstruct(CCTK_REAL inputvar,CCTK_REAL inputpow) {
< if(inputpow==2.0) return SQR(inputvar);
< return pow(inputvar,inputpow);
< }
---
>
59c56
< static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
---
> static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL Gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
64c61
< CCTK_REAL c_s_squared = (dPcold_drho + gamma_th*(gamma_th-1.0)*eps_th)/(h);
---
> CCTK_REAL c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th-1.0)*eps_th)/(h);
68a66,174
> /* Function : font_fix__rhob_loop()
> * Authors : Leo Werneck
> * Description : Determines rhob using the font fix prescription
> * Dependencies: find_polytropic_K_and_Gamma_index()
> * : compute_P_cold__eps_cold()
> * Reference : Etienne et al. (2011) [https://arxiv.org/pdf/1112.0568.pdf]
> *
> * Inputs : maxits - maximum number of iterations allowed
> * : tol - font fix tolerance
> * : W - See eq. (A26)
> * : Sf2 - S_{fluid}^{2}, see eq. (A24)
> * : Psim6 - This is equal to sqrt(\gamma)
> * : sdots - \tilde{S}_{\mu}\tilde{S}^{\mu}
> * : BbardotS2 - (\bar{B}^{\mu}S_{\mu})^{2},
> * : B2bar - \bar{B}^{2}, see eq. (A28)
> * : CONSERVS - Array of conservative variables
> * : eos - Struct of EOS parameters
> * : rhob_in - Initial value of rhob
> * : rhob_out - Output variable
> *
> * Outputs : rhob_out - Updated value of rhob
> * : return value: 0 - Font fix worked
> * : return value: 1 - Font fix failed
> */
> inline int font_fix__rhob_loop( int maxits, CCTK_REAL tol,
> CCTK_REAL W, CCTK_REAL Sf2, CCTK_REAL Psim6, CCTK_REAL sdots, CCTK_REAL BbardotS2, CCTK_REAL B2bar,
> CCTK_REAL *CONSERVS,
> eos_struct eos, CCTK_REAL rhob_in, CCTK_REAL &rhob_out ) {
>
> /* Declare basic variables */
> bool fontcheck=true;
> int itcount = 0, j0, j1;
> CCTK_REAL W0, Sf20, rhob0, rhob1, h, P_cold, eps_cold;
>
> //////////////////////
> // OUTER LOOP START //
> //////////////////////
> while(fontcheck && itcount < maxits) {
>
> /* Set variables to their input values */
> itcount++;
> W0 = W;
> Sf20 = Sf2;
> rhob1 = rhob_in;
>
> /* Based on rhob_in (i.e. rhob1), determine the
> * polytropic index j1
> */
> j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
>
> //////////////////////
> // INNER LOOP START //
> //////////////////////
> do {
>
> /* Set rhob0/j0 to be equal to the rhob/j used
> * in the previous iteration, i.e. rhob1/j1.
> */
> rhob0 = rhob1;
> j0 = j1;
>
> /* Compute h using h_cold and our polytropic EOS
> * .------------------------------------------.
> * | h = h_cold = 1 + eps_cold + P_cold/rhob. |
> * .------------------------------------------.
> */
> compute_P_cold__eps_cold(eos,rhob0, P_cold, eps_cold);
> h = 1.0 + eps_cold + P_cold/rhob0;
>
> /* Update rhob using eq. (A62) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .---------------------------------------------------------------------------.
> * | rhob = rho_star * Psi^{-6} / sqrt( 1 + S_fluid^{2}/( (rho_star*h)^{2} ) ) |
> * .---------------------------------------------------------------------------.
> */
> rhob1 = CONSERVS[RHOSTAR]*Psim6/sqrt(1.0+Sf20/SQR(CONSERVS[RHOSTAR]*h));
>
> /* Update j1 */
> j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
>
> } while( fabs(rhob1-rhob0) > rhob1*tol || j1 != j0);
> //////////////////////
> // INNER LOOP END //
> //////////////////////
>
> /* Output the last value of rhob */
> rhob_out = rhob1;
>
> /* Perform physical checks on the variables
> * and output the last value of h obtained
> */
> compute_P_cold__eps_cold(eos,rhob_out, P_cold, eps_cold);
> h = 1.0 + eps_cold + P_cold/rhob_out;
>
> /* Set W based on eq. (A60) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .-------------------------------------------------------.
> * | W = psi^{-6} * sqrt( S_fluid^{2} + (rho_star*h)^{2} ) |
> * .-------------------------------------------------------.
> */
> W = sqrt( Sf20 + SQR(CONSERVS[RHOSTAR]*h))*Psim6;
>
> /* Then update S_{fluid}^{2} using eq. (A61) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .---------------------------------------------------------------------------.
> * | S_fluid^{2} = ( W^{2}*S^{2} + (B.S)^2*(B^{2} + 2W) )/( ( W + B^{2} )^{2} )|
> * .---------------------------------------------------------------------------.
> */
> Sf2 = (SQR(W)*sdots + BbardotS2*(B2bar + 2.0*W))/SQR(W+B2bar);
70,111c176
< static inline void compute_P_cold__eps_cold__dPcold_drho__eps_th__h__gamma_cold(CCTK_REAL *U, eos_struct &eos,
< CCTK_REAL &P_cold,CCTK_REAL &eps_cold,CCTK_REAL &dPcold_drho,CCTK_REAL &eps_th,CCTK_REAL &h,
< CCTK_REAL &gamma_cold) {
< // This code handles equations of state of the form defined
< // in Eqs 13-16 in http://arxiv.org/pdf/0802.0200.pdf
<
< if(U[RHOB]==0) {
< P_cold = 0.0;
< eps_cold = 0.0;
< dPcold_drho = 0.0;
< eps_th = 0.0;
< h = 0.0;
< gamma_cold = eos.gamma_tab[0];
< return;
< }
<
< CCTK_REAL U_RHOB_inv = 1.0/U[RHOB];
<
< if(eos.neos==1) {
< // Eq. 14 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{cold} = K_i rho_i^{\Gamma_i}
< P_cold = eos.k_tab[0]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[0]);
< // Eq. 16 of http://arxiv.org/pdf/0802.0200.pdf :
< // \epsilon_{cold} = \int ( P_{cold}(rho) / rho^2 ) drho
< // = \int ( K_0 \rho^{\Gamma_0 - 2} ) drho
< // = ( K_0 \rho^{\Gamma_0 - 1} ) / (\Gamma_0 - 1)
< // = ( P_{cold} / rho ) / (\Gamma_0 - 1)
< eps_cold = P_cold*U_RHOB_inv/(eos.gamma_tab[0]-1.0);
< // dPcold/drho = K_i \Gamma_i rho_i^{\Gamma_i-1} = \Gamma_i P_{cold} / rho
< dPcold_drho = eos.gamma_tab[0]*P_cold*U_RHOB_inv;
< // Eq. 15 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{th} = (\Gamma_{th} - 1) \rho_0 \epsilon_{th},
< // Eq. 13 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{th} = P - P_{cold}
< // -> P - P_{cold} = (\Gamma_{th} - 1) \rho_0 \epsilon_{th}
< // -> \epsilon_{th} = ( P - P_{cold} ) / [ (\Gamma_{th} - 1) \rho_0 ]
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< // Just below Eq. 16 in http://arxiv.org/pdf/astro-ph/0503420.pdf :
< // h = 1 + \epsilon + P/rho
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[0];
< return;
---
> if ( fabs(W-W0) < W*tol && fabs(Sf20-Sf2) < Sf2*tol) fontcheck=false;
113,125c178,187
<
< // See comments above for the eos.neos==1 case for relevant
< // equations & references; the extension to arbitrary "nn"
< // is straightforward.
< for(int nn=1;nn<eos.neos;nn++) {
< if (U[RHOB] <= eos.rho_tab[nn] && U[RHOB] > eos.rho_tab[nn-1]) {
< P_cold = eos.k_tab[nn]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[nn]);
< eps_cold = eos.eps_tab[nn-1] + (P_cold*U_RHOB_inv - eos.P_tab[nn-1]/eos.rho_tab[nn-1])/(eos.gamma_tab[nn]-1.0);
< dPcold_drho = eos.gamma_tab[nn]*P_cold*U_RHOB_inv;
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[nn];
< }
---
> //////////////////////
> // OUTER LOOP END //
> //////////////////////
>
> /* If the code converged before the max
> * number of iterations were exceeded,
> * return 0, otherwise return 1.
> */
> if(fontcheck || itcount >= maxits) {
> return 1;
127,133c189,190
< if (U[RHOB] > eos.rho_tab[eos.neos-1]) {
< P_cold = eos.k_tab[eos.neos]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[eos.neos]);
< eps_cold = eos.eps_tab[eos.neos-1] + (P_cold*U_RHOB_inv - eos.P_tab[eos.neos-1]/eos.rho_tab[eos.neos-1])/(eos.gamma_tab[eos.neos]-1.0);
< dPcold_drho = eos.gamma_tab[eos.neos]*P_cold*U_RHOB_inv;
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[eos.neos];
---
> else {
> return 0;
150a208,209
>
> #ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
151a211,212
> #endif
>
###Markdown
Step 11: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-IllinoisGRMHD__inlined_functions.pdf](Tutorial-IllinoisGRMHD__inlined_functions.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means).
###Code
latex_nrpy_style_path = os.path.join(nrpy_dir_path,"latex_nrpy_style.tplx")
#!jupyter nbconvert --to latex --template $latex_nrpy_style_path Tutorial-IllinoisGRMHD__inlined_functions.ipynb
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
_____no_output_____
###Markdown
Tutorial-IllinoisGRMHD: inlined_functions.C Authors: Leo Werneck & Zach Etienne **This module is currently under development** In this tutorial module we explain a series of inline functions that are used by major functions within IllinoisGRMHD. Required and recommended citations:* **(Required)** Etienne, Z. B., Paschalidis, V., Haas R., Mösta P., and Shapiro, S. L. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).* **(Required)** Noble, S. C., Gammie, C. F., McKinney, J. C., Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).* **(Recommended)** Del Zanna, L., Bucciantini N., Londrillo, P. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)). Table of Contents$$\label{toc}$$This module is organized as follows: 0. [Step 0](src_dir): **Source directory creation**1. [Step 1](introduction): **Introduction**1. [Step 2](pow): **`pow`**1. [Step 3](find_cp_cm): **`find_cp_cm`**1. [Step 4](compute_v02): **`compute_v02`**1. [Step 5](ppeos__c_code): **Polytropic Equations of State** 1. [Step 5.a](ppeos__c_code__prelim): *Preliminary treatment of the input* 1. [Step 5.a.i](ppeos__c_code__prelim__computing_ktab): Determining $\left\{K_{1},K_{2},\ldots,K_{\rm neos}\right\}$ 1. [Step 5.a.ii](ppeos__c_code__prelim__computing_eps_integ_consts): Determining $\left\{C_{0},C_{1},C_{2},\ldots,C_{\rm neos}\right\}$ 1. [Step 5.b](ppeos__c_code__eos_struct_setup) *Setting up the `eos_struct`* 1. [Step 5.c](ppeos__c_code__find_polytropic_k_and_gamma_index) *The `find_polytropic_K_and_Gamma_index()` function* 1. [Step 5.d](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold): *The new `compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold()` function* 1. [Step 5.d.i](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold__case1__rhob_equal_zero): Case 1: $\rho_{b} = 0$ 1. [Step 5.d.ii](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold__case2__single_polytropic_eos): Case 2: Polytropic EOSs 1. [Step 5.e](compute_p_cold__eps_cold): New function: `font_fix__rhob_loop()`1. [Step 6](lower_4vector_output_spatial_part): **`lower_4vector_output_spatial_part`**1. [Step 7](impose_speed_limit_output_u0): **`impose_speed_limit_output_u0`**1. [Step 8](enforce_pressure_floor_ceiling): **`enforce_pressure_floor_ceiling`**1. [Step 9](compute_smallba_b2_and_u_i_over_u0_psi4): **`compute_smallba_b2_and_u_i_over_u0_psi4`**1. [Step 10](code_validation): **Code validation**1. [Step 11](latex_pdf_output): **Output this notebook to $\LaTeX$-formatted PDF file** Step 0: Source directory creation \[Back to [top](toc)\]$$\label{src_dir}$$We will now use the [cmdline_helper.py NRPy+ module](Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet.
###Code
# Step 0: Creation of the IllinoisGRMHD source directory
# Step 0a: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step 0b: Load up cmdline_helper and create the directory
import cmdline_helper as cmd
IGM_src_dir_path = os.path.join("..","src")
cmd.mkdir(IGM_src_dir_path)
# Step 0c: Create the output file path
outfile_path__inlined_functions__C = os.path.join(IGM_src_dir_path,"inlined_functions.C")
###Output
_____no_output_____
###Markdown
Step 1: Introduction \[Back to [top](toc)\]$$\label{introduction}$$In this tutorial notebook we explain functions of `IllinoisGRMHD` which are called for various purposes. This means that this notebook does not have a specific "theme". We will cover functions whose purposes vary from a simple optimization when squaring numbers to computing minimum and maximum characteristic speeds at cell interfaces. We have tried our best to keep this tutorial module as independent from the others as possible. When new concepts appear, we offer useful references. The mathematical requirements of each function are also covered in great detail. Step 2: `pow` \[Back to [top](toc)\]$$\label{pow}$$This is a very simple function which checks whether or not we are trying to square a number before calling C's `pow()` function. This is because in C it is computationally quicker to do `x*x` than to use the function call `pow(x,2)`. Notice that we also use the "function" `SQR()`, which is declared in `IllinoisGRMHD_headers.h` as `#define SQR(x) ( (x) * (x) )`. Step 3: `find_cp_cm` \[Back to [top](toc)\]$$\label{find_cp_cm}$$We will now explain the inlined function `find_cp_cm`. Keep in mind that this function depends on the function `compute_v02`, [which is implemented below](compute_v02). This function is called with the objective of computing the minimum ($-$) and maximum ($+$) characteristic speeds at each cell interface, $c_{\pm}^{r,l}$. We approximate the general GRMHD dispersion relation (eq. 27 of [Gammie & McKinney (2003)](https://arxiv.org/pdf/astro-ph/0301509.pdf)) by the simpler expression$$\omega_{\rm cm}^{2} = \left[v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)\right]k_{\rm cm}^{2}\ ,$$where $\omega_{\rm cm}=-k_{\mu}u^{\mu}$ is the frequency and $k_{\rm cm}^{2} = K_{\mu}K^{\mu}$ the wavenumber of an MHD wave mode in the frame comoving with the fluid, where $K_{\mu}$ is defined as the projection of the wave vector $k^{\nu}$ onto the direction normal to $u^{\nu}$: $K_{\mu} = \left(g_{\mu\nu}+u_{\mu}u_{\nu}\right)k^{\nu}$. $c_{\rm s}$ is the sound speed, and $v_{\rm A}$ is the Alfvén speed, given by$$v_{\rm A} = \sqrt{\frac{b^{2}}{\rho_{b}h + b^{2}}}\ .$$With these definitions, we may then solve the approximate dispersion relation above along direction $i$, noting that in the comoving frame $k_{\mu} = \left(-\omega,k_{j}\delta^{j}_{\ i}\right)$ and the wave (phase) velocity is $c_{\pm} = \left.\omega\middle/\left(k_{j}\delta^{j}_{\ i}\right)\right.$. The dispersion relation can then be written as a quadratic equation for $c_{\pm}$:$$ac_{\pm}^{2} + bc_{\pm} + c = 0\ ,$$with$$\boxed{\begin{align}a &= \left(1-v_{0}^{2}\right)\left(u^{0}\right)^{2} - v_{0}^{2}g^{00}\ ,\\b &= 2v_{0}^{2}g^{i0} - 2u^{i}u^{0}\left(1-v^{2}_{0}\right)\ ,\\c &= \left(1-v_{0}^{2}\right)\left(u^{i}\right)^{2} - v_{0}^{2}g^{ii}\ ,\\v_{0}^{2} &= v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)\ ,\\c_{\rm s} &= \left.\left[\frac{dP_{\rm cold}}{d\rho_{b}} + \Gamma_{\rm th}\left(\Gamma_{\rm th}-1\right)\epsilon_{\rm th}\right]\middle/h\right.\ ,\\c_{+} &= \max\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ ,\\c_{-} &= \min\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ .\end{align}}$$For the implementation of $v_{0}^{2}$, please see [Step 4 below](compute_v02).
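Before writing the function, here is a standalone sanity check of the quadratic above (a minimal sketch, not `IllinoisGRMHD` code: flat space, $\alpha=1$, $\beta^{x}=0$, $\psi^{-4}g^{xx}=1$, plain `double`). For a fluid moving along $x$ with $v^{x}=0.5$ and $v_{0}=0.5$, special-relativistic velocity addition predicts $c_{+}=(v+v_{0})/(1+vv_{0})=0.8$ and $c_{-}=(v-v_{0})/(1-vv_{0})=0$, which is exactly what the quadratic returns:
```c
// Standalone sanity check of the c_pm quadratic above (a sketch, not the
// IllinoisGRMHD routine): flat space, fluid moving along x with v = 0.5 and
// wave speed v0 = 0.5 in the fluid frame; expect c_+ = 0.8 and c_- = 0.
#include <stdio.h>
#include <math.h>

#define SQR(x) ((x)*(x))

int main(void) {
  const double v = 0.5, v02 = 0.25;               /* v^x and v_0^2                     */
  const double u0 = 1.0/sqrt(1.0 - SQR(v));       /* u^0 = Lorentz factor (flat space) */
  const double ONE_OVER_LAPSE_SQUARED = 1.0, shiftx = 0.0, psim4 = 1.0, gupxx = 1.0;

  double a = SQR(u0)*(1.0-v02) + v02*ONE_OVER_LAPSE_SQUARED;
  double b = 2.0*( shiftx*ONE_OVER_LAPSE_SQUARED*v02 - SQR(u0)*v*(1.0-v02) );
  double c = SQR(u0)*SQR(v)*(1.0-v02) - v02*( psim4*gupxx - SQR(shiftx)*ONE_OVER_LAPSE_SQUARED );

  double detm = sqrt(SQR(b) - 4.0*a*c);
  double cplus  =  0.5*(detm - b)/a;
  double cminus = -0.5*(detm + b)/a;

  printf("c_+ = %.15f (expected 0.8)\nc_- = %.15f (expected 0.0)\n", cplus, cminus);
  return 0;
}
```
The same check can be repeated for other values of $v$ and $v_{0}$; the roots always reduce to the relativistic velocity-addition formula in this flat-space limit.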
###Code
%%writefile $outfile_path__inlined_functions__C
static inline void find_cp_cm(CCTK_REAL &cplus,CCTK_REAL &cminus,CCTK_REAL v02,CCTK_REAL u0,
CCTK_REAL vi,CCTK_REAL ONE_OVER_LAPSE_SQUARED,CCTK_REAL shifti,CCTK_REAL psim4,CCTK_REAL gupii) {
// This computes phase speeds in the direction given by flux_dirn.
// Note that we replace the full dispersion relation with a simpler
// one, which overestimates the max. speeds by a factor of ~2.
// See full discussion around Eqs. 49 and 50 in
// http://arxiv.org/pdf/astro-ph/0503420.pdf .
// What follows is a complete derivation of the quadratic we solve.
// wcm = (-k_0 u0 - k_x ux)
// kcm^2 = K_{\mu} K^{\mu},
// K_{\mu} K^{\mu} = (g_{\mu a} + u_{\mu} u_a) k^a * g^{\mu b} [ (g_{c b} + u_c u_b) k^c ]
// --> g^{\mu b} (g_{c b} + u_{c} u_{b}) k^c = (\delta^{\mu}_c + u_c u^{\mu} ) k^c
// = (g_{\mu a} + u_{\mu} u_a) k^a * (\delta^{\mu}_c + u_c u^{\mu} ) k^c
// =[(g_{\mu a} + u_{\mu} u_a) \delta^{\mu}_c + (g_{\mu a} + u_{\mu} u_a) u_c u^{\mu} ] k^c k^a
// =[(g_{c a} + u_c u_a) + (u_c u_a - u_a u_c] k^c k^a
// =(g_{c a} + u_c u_a) k^c k^a
// = k_a k^a + u^c u^a k_c k_a
// k^a = g^{\mu a} k_{\mu} = g^{0 a} k_0 + g^{x a} k_x
// k_a k^a = k_0 g^{0 0} k_0 + k_x k_0 g^{0 x} + g^{x 0} k_0 k_x + g^{x x} k_x k_x
// = g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2
// u^c u^a k_c k_a = (u^0 k_0 + u^x k_x) (u^0 k_0 + u^x k_x) = (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2
// (k_0 u0)^2 + 2 k_x ux k_0 u0 + (k_x ux)^2 = v02 [ (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2 + g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2]
// (1-v02) (u^0 k_0 + u^x k_x)^2 = v02 (g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2)
// (1-v02) (u^0 k_0/k_x + u^x)^2 = v02 (g^{00} (k_0/k_x)^2 + 2 g^{x0} k_0/k_x + g^{xx})
// (1-v02) (u^0 X + u^x)^2 = v02 (g^{00} X^2 + 2 g^{x0} X + g^{xx})
// (1-v02) (u0^2 X^2 + 2 ux u0 X + ux^2) = v02 (g^{00} X^2 + 2 g^{x0} X + g^{xx})
// X^2 ( (1-v02) u0^2 - v02 g^{00}) + X (2 ux u0 (1-v02) - 2 v02 g^{x0}) + (1-v02) ux^2 - v02 g^{xx}
// a = (1-v02) u0^2 - v02 g^{00} = (1-v02) u0^2 + v02/lapse^2 <-- VERIFIED
// b = 2 ux u0 (1-v02) - 2 v02 shiftx/lapse^2 <-- VERIFIED, X->-X, because X = -w/k_1, and we are solving for -X.
// c = (1-v02) ux^2 - v02 (gupxx*psim4 - (shiftx/lapse)^2) <-- VERIFIED
// v02 = v_A^2 + c_s^2 (1 - v_A^2)
CCTK_REAL u0_SQUARED=SQR(u0);
###Output
Writing ../src/inlined_functions.C
###Markdown
We start by setting$$\boxed{\begin{align}a &= \left(1-v_{0}^{2}\right)\left(u^{0}\right)^{2} - v_{0}^{2}g^{00}\\b &= 2v_{0}^{2}g^{i0} - 2u^{i}u^{0}\left(1-v^{2}_{0}\right)\\c &= \left(1-v_{0}^{2}\right)\left(u^{i}\right)^{2} - v_{0}^{2}g^{ii}\end{align}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
//Find cplus, cminus:
CCTK_REAL a = u0_SQUARED * (1.0-v02) + v02*ONE_OVER_LAPSE_SQUARED;
CCTK_REAL b = 2.0* ( shifti*ONE_OVER_LAPSE_SQUARED * v02 - u0_SQUARED * vi * (1.0-v02) );
CCTK_REAL c = u0_SQUARED*SQR(vi) * (1.0-v02) - v02 * ( psim4*gupii -
SQR(shifti)*ONE_OVER_LAPSE_SQUARED);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we find the minimum ($-$) and maximum ($+$) characteristic speeds$$\boxed{\begin{align}c_{+} &= \max\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ ,\\c_{-} &= \min\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ .\end{align}}$$
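Note that the cell below replaces the conditional `if(detm < 0.0) detm = 0.0;` with the branch-free expression `sqrt(0.5*(detm+fabs(detm)))`, which relies on the identity $\max(x,0)=\tfrac{1}{2}\left(x+|x|\right)$ to guard against a discriminant that roundoff has pushed slightly below zero. A tiny standalone check of that identity (illustrative only):
```c
// Quick check that 0.5*(x + |x|) = max(x,0), which lets the code below take
// sqrt(b^2-4ac) safely even when roundoff makes the discriminant slightly negative.
#include <stdio.h>
#include <math.h>

int main(void) {
  const double samples[3] = {4.0, 0.0, -1.0e-17};   /* positive, zero, tiny negative */
  for(int i=0;i<3;i++) {
    double detm = samples[i];
    double clamped = 0.5*(detm + fabs(detm));        /* = max(detm, 0) */
    printf("detm = % .1e -> clamped = %.1e -> sqrt = %.8e\n", detm, clamped, sqrt(clamped));
  }
  return 0;
}
```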
###Code
%%writefile -a $IGM_src_dir_path/inlined_functions.C
CCTK_REAL detm = b*b - 4.0*a*c;
//ORIGINAL LINE OF CODE:
//if(detm < 0.0) detm = 0.0;
//New line of code (without the if() statement) has the same effect:
detm = sqrt(0.5*(detm + fabs(detm))); /* Based on very nice suggestion from Roland Haas */
cplus = 0.5*(detm-b)/a;
cminus = -0.5*(detm+b)/a;
if (cplus < cminus) {
CCTK_REAL cp = cminus;
cminus = cplus;
cplus = cp;
}
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 4: `compute_v02` \[Back to [top](toc)\]$$\label{compute_v02}$$This function is used to evaluate $v_{0}^{2}$, a quantity necessary for the computation of the minimum and maximum characteristic speeds at each cell interface, $c_{\pm}^{r,l}$. For more information on this procedure, please see the [implementation of the `find_cp_cm` function in Step 3](find_cp_cm).We start with the sound speed:$$\boxed{c_{\rm s} = \left.\left[\frac{dP_{\rm cold}}{d\rho_{b}} + \Gamma_{\rm th}\left(\Gamma_{\rm th}-1\right)\epsilon_{\rm th}\right]\middle/h\right.}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL Gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
if(U[RHOB]<=0) { v02L=1.0; return; }
/* c_s = sound speed = (dP_c/drho + \Gamma(\Gamma-1) \epsilon_th)/h */
CCTK_REAL c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th-1.0)*eps_th)/(h);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Next we compute the square of the Alfvén speed, $v_{\rm A}$, which is given by$$\boxed{v_{\rm A}^{2} = \frac{b^{2}}{\rho_{b}h + b^{2}}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
/* v_A = Alfven speed = sqrt( b^2/(rho0 h + b^2) ) */
CCTK_REAL v_A_squared = smallb[SMALLB2]/(smallb[SMALLB2] + U[RHOB]*(h));
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, $v_{0}$ is related to the sound speed and the Alfvén speed via$$\boxed{v_{0}^{2} = v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)}\ .$$
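The following standalone snippet (an illustrative sketch with made-up numbers: a single polytrope $P_{\rm cold}=K\rho^{\Gamma}$ with $K=1$, $\Gamma=\Gamma_{\rm th}=2$, evaluated at a cold point so that $\epsilon_{\rm th}=0$, and plain `double`) chains the three boxed formulas of this step together; the chosen numbers give the exact value $v_{0}^{2}=2/7$:
```c
// Minimal sketch of the v_0^2 computation above for a single polytrope
// P_cold = K rho^Gamma with K = 1, Gamma = Gamma_th = 2, evaluated at a cold
// point (P = P_cold, so eps_th = 0). Illustrative values only.
#include <stdio.h>
#include <math.h>

int main(void) {
  const double K = 1.0, Gamma = 2.0, Gamma_th = 2.0;
  const double rho = 0.1, smallb2 = 0.02;          /* rest-mass density and b^2 */

  double P_cold      = K*pow(rho,Gamma);           /* = 0.01 */
  double eps_cold    = P_cold/(rho*(Gamma-1.0));   /* = 0.1  */
  double dPcold_drho = Gamma*P_cold/rho;           /* = 0.2  */
  double P           = P_cold;                     /* cold flow -> eps_th = 0 */
  double eps_th      = (P - P_cold)/((Gamma_th-1.0)*rho);
  double h           = 1.0 + eps_cold + eps_th + P/rho;                    /* = 1.2 */

  double c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th-1.0)*eps_th)/h;   /* = 1/6 */
  double v_A_squared = smallb2/(smallb2 + rho*h);                          /* = 1/7 */
  double v02         = v_A_squared + c_s_squared*(1.0 - v_A_squared);      /* = 2/7 */

  printf("c_s^2 = %.6f, v_A^2 = %.6f, v_0^2 = %.6f (exact 2/7 = %.6f)\n",
         c_s_squared, v_A_squared, v02, 2.0/7.0);
  return 0;
}
```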
###Code
%%writefile -a $outfile_path__inlined_functions__C
v02L = v_A_squared + c_s_squared*(1.0-v_A_squared);
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 5.e: `font_fix__rhob_loop` \[Back to [top](toc)\]$$\label{compute_p_cold__eps_cold}$$This inlined function implements the "Font fix" prescription of Etienne *et al.* (2011) (see Appendix A of [arXiv:1112.0568](https://arxiv.org/pdf/1112.0568.pdf)): when the conservative-to-primitive solver fails, the baryonic density $\rho_{b}$ is recomputed by iterating eqs. (A60)-(A62) until $W$, $S_{\rm fluid}^{2}$, and $\rho_{b}$ stop changing to within the input tolerance `tol`, re-identifying the appropriate polytropic piece of the equation of state after every update of $\rho_{b}$. The function returns 0 if the iteration converges within `maxits` iterations and 1 otherwise.
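Before appending the function itself, the following standalone snippet (a simplified sketch, not the `IllinoisGRMHD` code: it assumes a *single* polytrope $P_{\rm cold}=K\rho^{\Gamma}$, so the polytropic index never changes and the EOS helpers collapse to one line each, and it iterates only the inner $\rho_{b}$ loop with $S_{\rm fluid}^{2}$ held fixed at an illustrative value) shows the fixed-point iteration of eq. (A62) converging:
```c
// Simplified sketch of the inner fixed-point loop of font_fix__rhob_loop()
// for a single polytrope P_cold = K rho^Gamma. Illustrative values only; the
// full multi-piece version is implemented in the cell below.
#include <stdio.h>
#include <math.h>

#define SQR(x) ((x)*(x))

int main(void) {
  const double K = 1.0, Gamma = 2.0;               /* single polytrope                 */
  const double rho_star = 1.2, Psim6 = 1.0;        /* conservative rho_* and psi^{-6}  */
  const double Sf2 = 0.5, tol = 1.0e-12;           /* fixed S_fluid^2 and tolerance    */

  double rhob1 = rho_star, rhob0;
  int it = 0;
  do {
    rhob0 = rhob1;
    /* h = h_cold = 1 + eps_cold + P_cold/rhob for P_cold = K rho^Gamma */
    double P_cold   = K*pow(rhob0,Gamma);
    double eps_cold = P_cold/(rhob0*(Gamma-1.0));
    double h        = 1.0 + eps_cold + P_cold/rhob0;
    /* Eq. (A62) of Etienne et al. (2011) */
    rhob1 = rho_star*Psim6/sqrt(1.0 + Sf2/SQR(rho_star*h));
    it++;
  } while( fabs(rhob1-rhob0) > rhob1*tol );

  printf("converged to rhob = %.15f in %d iterations\n", rhob1, it);
  return 0;
}
```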
###Code
%%writefile -a $outfile_path__inlined_functions__C
/* Function : font_fix__rhob_loop()
* Authors : Leo Werneck
* Description : Determines rhob using the font fix prescription
* Dependencies: find_polytropic_K_and_Gamma_index()
* : compute_P_cold__eps_cold()
* Reference : Etienne et al. (2011) [https://arxiv.org/pdf/1112.0568.pdf]
*
* Inputs : maxits - maximum number of iterations allowed
* : tol - font fix tolerance
* : W - See eq. (A26)
* : Sf2 - S_{fluid}^{2}, see eq. (A24)
* : Psim6 - This is equal to sqrt(\gamma)
* : sdots - \tilde{S}_{\mu}\tilde{S}^{\mu}
* : BbardotS2 - (\bar{B}^{\mu}S_{\mu})^{2},
* : B2bar - \bar{B}^{2}, see eq. (A28)
* : CONSERVS - Array of conservative variables
* : eos - Struct of EOS parameters
* : rhob_in - Initial value of rhob
* : rhob_out - Output variable
*
* Outputs : rhob_out - Updated value of rhob
* : return value: 0 - Font fix worked
* : return value: 1 - Font fix failed
*/
inline int font_fix__rhob_loop( int maxits, CCTK_REAL tol,
CCTK_REAL W, CCTK_REAL Sf2, CCTK_REAL Psim6, CCTK_REAL sdots, CCTK_REAL BbardotS2, CCTK_REAL B2bar,
CCTK_REAL *CONSERVS,
eos_struct eos, CCTK_REAL rhob_in, CCTK_REAL &rhob_out ) {
/* Declare basic variables */
bool fontcheck=true;
int itcount = 0, j0, j1;
CCTK_REAL W0, Sf20, rhob0, rhob1, h, P_cold, eps_cold;
//////////////////////
// OUTER LOOP START //
//////////////////////
while(fontcheck && itcount < maxits) {
/* Set variables to their input values */
itcount++;
W0 = W;
Sf20 = Sf2;
rhob1 = rhob_in;
/* Based on rhob_in (i.e. rhob1), determine the
* polytropic index j1
*/
j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
//////////////////////
// INNER LOOP START //
//////////////////////
do {
/* Set rhob0/j0 to be equal to the rhob/j used
* in the previous iteration, i.e. rhob1/j1.
*/
rhob0 = rhob1;
j0 = j1;
/* Compute h using h_cold and our polytropic EOS
* .------------------------------------------.
* | h = h_cold = 1 + eps_cold + P_cold/rhob. |
* .------------------------------------------.
*/
compute_P_cold__eps_cold(eos,rhob0, P_cold, eps_cold);
h = 1.0 + eps_cold + P_cold/rhob0;
/* Update rhob using eq. (A62) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .---------------------------------------------------------------------------.
* | rhob = rho_star * Psi^{-6} / sqrt( 1 + S_fluid^{2}/( (rho_star*h)^{2} ) ) |
* .---------------------------------------------------------------------------.
*/
rhob1 = CONSERVS[RHOSTAR]*Psim6/sqrt(1.0+Sf20/SQR(CONSERVS[RHOSTAR]*h));
/* Update j1 */
j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
} while( fabs(rhob1-rhob0) > rhob1*tol || j1 != j0);
//////////////////////
// INNER LOOP END //
//////////////////////
/* Output the last value of rhob */
rhob_out = rhob1;
/* Perform physical checks on the variables
* and output the last value of h obtained
*/
compute_P_cold__eps_cold(eos,rhob_out, P_cold, eps_cold);
h = 1.0 + eps_cold + P_cold/rhob_out;
/* Set W based on eq. (A60) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .-------------------------------------------------------.
* | W = psi^{-6} * sqrt( S_fluid^{2} + (rho_star*h)^{2} ) |
* .-------------------------------------------------------.
*/
W = sqrt( Sf20 + SQR(CONSERVS[RHOSTAR]*h))*Psim6;
/* Then update S_{fluid}^{2} using eq. (A61) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .---------------------------------------------------------------------------.
* | S_fluid^{2} = ( W^{2}*S^{2} + (B.S)^2*(B^{2} + 2W) )/( ( W + B^{2} )^{2} )|
* .---------------------------------------------------------------------------.
*/
Sf2 = (SQR(W)*sdots + BbardotS2*(B2bar + 2.0*W))/SQR(W+B2bar);
if ( fabs(W-W0) < W*tol && fabs(Sf20-Sf2) < Sf2*tol) fontcheck=false;
}
//////////////////////
// OUTER LOOP END //
//////////////////////
/* If the code converged before the max
* number of iterations were exceeded,
* return 0, otherwise return 1.
*/
if(fontcheck || itcount >= maxits) {
return 1;
}
else {
return 0;
}
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 6: `lower_4vector_output_spatial_part` \[Back to [top](toc)\]$$\label{lower_4vector_output_spatial_part}$$This function is used to lower the indices of the spatial components of 4-vectors, $b^{\mu}$. Consider$$\begin{align}b_{i} &= g_{i\mu}b^{\mu} \\ &= g_{i0}b^{0} + g_{ij}b^{j} \\ &= \left(\gamma_{ij}\beta^{j}\right)b^{0} + \gamma_{ij}b^{j} \\ &= \gamma_{ij}\left(b^{j} + \beta^{j}b^{0}\right)\ ,\end{align}$$or, using the conformal metric and each component seperately$$\boxed{\begin{align}b_{x} &= \psi^{4}\left[\bar{\gamma}_{xx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{xy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{xz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\\b_{y} &= \psi^{4}\left[\bar{\gamma}_{yx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{yy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{yz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\\b_{z} &= \psi^{4}\left[\bar{\gamma}_{zx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{zy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{zz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\end{align}}\ .$$
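As a quick check of the boxed components, the following standalone snippet (illustrative ADM data only, plain `double`; the lapse does not enter the spatial components) compares the boxed formula $b_{i}=\psi^{4}\bar{\gamma}_{ij}\left(b^{j}+\beta^{j}b^{0}\right)$ against the direct contraction $b_{i}=g_{i0}b^{0}+g_{ij}b^{j}$ with $g_{i0}=\gamma_{ij}\beta^{j}$:
```c
// Sketch verifying b_i = psi^4 gammabar_{ij}(b^j + beta^j b^0) against the
// direct contraction b_i = g_{i0} b^0 + g_{ij} b^j, using arbitrary
// illustrative ADM quantities and a sample b^mu. Plain double here.
#include <stdio.h>

int main(void) {
  const double psi4 = 1.3;
  const double gbar[3][3] = {{1.00,0.02,0.01},{0.02,1.10,0.03},{0.01,0.03,0.95}};
  const double beta[3] = {0.10,-0.05,0.02};
  const double b0 = 0.7, b[3] = {0.2,-0.4,0.3};    /* sample b^0, b^i */

  for(int i=0;i<3;i++) {
    /* Boxed formula: b_i = psi^4 gammabar_{ij} (b^j + beta^j b^0) */
    double b_lower_boxed = 0.0;
    for(int j=0;j<3;j++) b_lower_boxed += psi4*gbar[i][j]*(b[j] + beta[j]*b0);

    /* Direct contraction: b_i = g_{i0} b^0 + g_{ij} b^j, with g_{i0} = gamma_{ij} beta^j */
    double g_i0 = 0.0, b_lower_direct;
    for(int j=0;j<3;j++) g_i0 += psi4*gbar[i][j]*beta[j];
    b_lower_direct = g_i0*b0;
    for(int j=0;j<3;j++) b_lower_direct += psi4*gbar[i][j]*b[j];

    printf("b_%d: boxed = %.15f, direct = %.15f\n", i, b_lower_boxed, b_lower_direct);
  }
  return 0;
}
```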
###Code
%%writefile -a $outfile_path__inlined_functions__C
// b_x = g_{\mu x} b^{\mu}
// = g_{t x} b^t + g_{i x} b^i
// = b^t gamma_{xj} beta^j + gamma_{ix} b^i
// = gamma_{xj} (b^j + beta^j b^t)
static inline void lower_4vector_output_spatial_part(CCTK_REAL psi4,CCTK_REAL *METRIC,CCTK_REAL *smallb, CCTK_REAL *smallb_lower) {
smallb_lower[SMALLBX] = psi4*( METRIC[GXX]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GXY]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GXZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
smallb_lower[SMALLBY] = psi4*( METRIC[GXY]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GYY]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GYZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
smallb_lower[SMALLBZ] = psi4*( METRIC[GXZ]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GYZ]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GZZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 7: `impose_speed_limit_output_u0` \[Back to [top](toc)\]$$\label{impose_speed_limit_output_u0}$$We now call upon the `impose_speed_limit_output_u0()` function inside the `inlined_functions.C` code file of `IllinoisGRMHD`. The basic algorithm performed by this function is summarized here. We start by evaluating the quantity$$\begin{align}{\rm one\_minus\_one\_over\_alpha\_u0\_squared} \equiv A &= \gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right)\\&= \frac{\gamma_{ij}}{\alpha^{2}}\left[\frac{\gamma^{ik}u_{k}}{u^{0}} - \beta^{i} + \beta^{i}\right]\left[\frac{\gamma^{j\ell}u_{\ell}}{u^{0}} - \beta^{j} + \beta^{j}\right]\\&=\frac{\gamma_{ij}u^{i}u^{j}}{\left(\alpha u^{0}\right)^{2}}\\&=\frac{\left(\alpha u^{0}\right)^{2}-1}{\left(\alpha u^{0}\right)^{2}}\\&=1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}\ \\\implies \boxed{A = 1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}}\ ,\end{align}$$where when going from line 1 to 2 and from line 3 to 4 we have used eqs. (53) and (56) from [Duez *et al.*](https://arxiv.org/pdf/astro-ph/0503420.pdf), respectively. Keep in mind that the equation we are going to implement below is$$\boxed{{\rm one\_minus\_one\_over\_alpha\_u0\_squared} = \gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right)}\ ,$$but it is important to know that this equation also equals $A$ above.
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void impose_speed_limit_output_u0(CCTK_REAL *METRIC,CCTK_REAL *U,CCTK_REAL psi4,CCTK_REAL ONE_OVER_LAPSE,output_stats &stats, CCTK_REAL &u0_out) {
#ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
DECLARE_CCTK_PARAMETERS;
#endif
// Derivation of first equation:
// \gamma_{ij} (v^i + \beta^i)(v^j + \beta^j)/(\alpha)^2
// = \gamma_{ij} 1/(u^0)^2 ( \gamma^{ik} u_k \gamma^{jl} u_l /(\alpha)^2 <- Using Eq. 53 of arXiv:astro-ph/0503420
// = 1/(u^0 \alpha)^2 u_j u_l \gamma^{jl} <- Since \gamma_{ij} \gamma^{ik} = \delta^k_j
// = 1/(u^0 \alpha)^2 ( (u^0 \alpha)^2 - 1 ) <- Using Eq. 56 of arXiv:astro-ph/0503420
// = 1 - 1/(u^0 \alpha)^2 <= 1
CCTK_REAL one_minus_one_over_alpha_u0_squared = psi4*(METRIC[GXX]* SQR(U[VX] + METRIC[SHIFTX]) +
2.0*METRIC[GXY]*(U[VX] + METRIC[SHIFTX])*(U[VY] + METRIC[SHIFTY]) +
2.0*METRIC[GXZ]*(U[VX] + METRIC[SHIFTX])*(U[VZ] + METRIC[SHIFTZ]) +
METRIC[GYY]* SQR(U[VY] + METRIC[SHIFTY]) +
2.0*METRIC[GYZ]*(U[VY] + METRIC[SHIFTY])*(U[VZ] + METRIC[SHIFTZ]) +
METRIC[GZZ]* SQR(U[VZ] + METRIC[SHIFTZ]) )*SQR(ONE_OVER_LAPSE);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we construct the "speed limit quantity"$${\rm ONE\_MINUS\_ONE\_OVER\_GAMMA\_SPEED\_LIMIT\_SQUARED} \equiv B = 1-\frac{1}{\gamma^{2}_{\rm speed\ limit}}\ .$$If $A > B$, then we construct the correction factor $C\equiv A / B$, and adjust the velocities using$$\boxed{v^{i} \to \left(v^{i}+\beta^{i}\right)C - \beta^{i}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
/*** Limit velocity to GAMMA_SPEED_LIMIT ***/
const CCTK_REAL ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED = 1.0-1.0/SQR(GAMMA_SPEED_LIMIT);
if(one_minus_one_over_alpha_u0_squared > ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED) {
CCTK_REAL correction_fac = sqrt(ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED/one_minus_one_over_alpha_u0_squared);
U[VX] = (U[VX] + METRIC[SHIFTX])*correction_fac-METRIC[SHIFTX];
U[VY] = (U[VY] + METRIC[SHIFTY])*correction_fac-METRIC[SHIFTY];
U[VZ] = (U[VZ] + METRIC[SHIFTZ])*correction_fac-METRIC[SHIFTZ];
one_minus_one_over_alpha_u0_squared=ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED;
stats.failure_checker+=1000;
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, since $A$ is evaluated using the first line above, namely$$\gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right) = A = 1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}\ ,$$we can then compute $u^{0}$ directly via$$\boxed{u^{0} = \frac{1}{\alpha\sqrt{1-A}}}\ .$$
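The commented-out lines at the end of the next cell keep an alternative way of evaluating $\alpha u^{0}-1$. The point of that rewrite is numerical: when $A$ is at the level of machine roundoff, $1/\sqrt{1-A}-1$ suffers catastrophic cancellation, while the algebraically equivalent form $A/\left[\sqrt{1-A}\left(1+\sqrt{1-A}\right)\right]$ does not (whether this matters for $u^{0}$ itself is a separate question, as the comment notes). A tiny standalone illustration (a sketch only, plain `double`):
```c
// Sketch of the roundoff remark in the commented-out lines below: computing
// alpha*u^0 - 1 as 1/sqrt(1-A) - 1 cancels catastrophically when A is near
// machine epsilon, while the equivalent form A/(sqrt(1-A)*(1+sqrt(1-A))) does not.
#include <stdio.h>
#include <math.h>

int main(void) {
  const double A = 1.0e-18;                         /* one_minus_one_over_alpha_u0_squared, tiny */
  double direct       = 1.0/sqrt(1.0-A) - 1.0;      /* loses all significant digits */
  double one_over_au0 = sqrt(1.0-A);
  double rewritten    = A/(one_over_au0*(1.0+one_over_au0));   /* ~ A/2, no cancellation */
  printf("direct    alpha*u0 - 1 = %.3e\n", direct);
  printf("rewritten alpha*u0 - 1 = %.3e (exact ~ %.3e)\n", rewritten, 0.5*A);
  return 0;
}
```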
###Code
%%writefile -a $outfile_path__inlined_functions__C
// A = 1.0-one_minus_one_over_alpha_u0_squared = 1-(1-1/(al u0)^2) = 1/(al u0)^2
// 1/sqrt(A) = al u0
//CCTK_REAL alpha_u0_minus_one = 1.0/sqrt(1.0-one_minus_one_over_alpha_u0_squared)-1.0;
//u0_out = (alpha_u0_minus_one + 1.0)*ONE_OVER_LAPSE;
CCTK_REAL alpha_u0 = 1.0/sqrt(1.0-one_minus_one_over_alpha_u0_squared);
if(std::isnan(alpha_u0*ONE_OVER_LAPSE)) printf("BAD FOUND NAN U0 CALC: %.15e %.15e %.15e | %.15e %.15e\n",alpha_u0,ONE_OVER_LAPSE,one_minus_one_over_alpha_u0_squared,psi4, U[VX]);
u0_out = alpha_u0*ONE_OVER_LAPSE;
}
// The two lines of code below are written to reduce roundoff error and were in the above function. I don't think they reduce error.
// one_over_alpha_u0 = sqrt(1.0-one_minus_one_over_alpha_u0_squared);
/* Proof of following line: */
/* [ 1-1/(alphau0)^2 ] / [ 1/(alphau0) (1 + 1/(alphau0)) ] */
/* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ 1/(alphau0) + 1/(alphau0)^2 ] */
/* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ (alphau0 + 1)/(alphau0)^2 ] */
/* = [ (alphau0)^2 - 1) ] / [ (alphau0 + 1) ] */
/* [ (alphau0 + 1) (alphau0 - 1) ] / [ (alphau0 + 1) ] */
/* = alphau0 - 1 */
//alpha_u0_minus_one = one_minus_one_over_alpha_u0_squared/one_over_alpha_u0/(1.0+one_over_alpha_u0);
//u0_out = (alpha_u0_minus_one+1.0)*ONE_OVER_LAPSE;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 8: `enforce_pressure_floor_ceiling` \[Back to [top](toc)\]$$\label{enforce_pressure_floor_ceiling}$$After the Newton-Raphson solver has successfully found a set of primitives, the primitives are checked for physicality, and if they are not in the physical range, they are minimally modified until they return to the physical range. First, if the velocity is found to be superluminal, the speed is reduced to `IllinoisGRMHD`’s default Lorentz factor limit, a procedure which we already explained above when we discussed the `impose_speed_limit_output_u0` function. Next, `IllinoisGRMHD` does not include any cooling mechanism, which means that for evolutions adopting a $\Gamma$-law equation of state, the pressure should not physically drop below $P_{\rm cold}$. So a pressure floor of $0.9P_{\rm cold}$ is imposed. Increasing this floor to exactly $P_{\rm cold}$ results in large central density drifts in TOV star evolutions. **NOTE**: Please keep in mind that the floor and ceiling values presented here were found ***empirically***.
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void enforce_pressure_floor_ceiling(output_stats &stats,CCTK_REAL kpoly,CCTK_REAL P_cold,CCTK_REAL Psi6,const CCTK_REAL Psi6threshold,CCTK_REAL rho_b,const CCTK_REAL rhobatm, CCTK_REAL &P) {
CCTK_REAL P_min=0.9*P_cold;
if(P<P_min) {
stats.failure_checker+=10;
P=P_min;
}
//MAX(P,P_min);
//if(P < P_min) P=1.0*P_cold;
/* OLD: Discarded because lower limit is unphysical.
if(P <= 0.5*kpoly*P_cold) {
P=0.5*kpoly*P_cold;
}
*/
###Output
Appending to ../src/inlined_functions.C
###Markdown
Simulations can crash in the other extreme, if $P/P_{\rm cold}$ becomes too large. This typically only happens in very low density regions or inside black holes. So at densities $\rho_{b}<100\rho_{\rm atm}$ or deep inside black hole horizons, a ceiling on $P$ of $100P_{\rm cold}$ is enforced (see Appendix A of [Etienne *et al.* (2012)](https://arxiv.org/abs/1112.0568) for more details).We also introduce a parameter, $\psi^{6}_{\rm threshold}$, which determines whether the region under consideration is deep inside the BH horizon or not. For regions deep inside the BH horizon, defined by $\sqrt{\gamma} = \psi^{6} > \psi^{6}_{\rm threshold}$, the primary goal is to keep the evolution stable and prevent inaccurate data from leaking out of the BH horizon. It was determined that in this situation, a better ceiling on $P$ is $10^{5}P_{\rm cold}$.
###Code
%%writefile -a $outfile_path__inlined_functions__C
//CCTK_REAL P_max = 10.0*P_cold;
CCTK_REAL P_max = 100.0*P_cold;
if(Psi6 > Psi6threshold) P_max = 1e5*P_cold; // <-- better than 10.
if((rho_b < 100.0*rhobatm || Psi6 > Psi6threshold) && P>P_max) {
P=P_max;
stats.failure_checker+=100;
}
/*
CCTK_REAL rho_horiz_cap = 1000.0*rhobatm;
//New density damping mechanism inside the horizon
if(Psi6 > Psi6threshold && rho_b>rho_horiz_cap) {
CCTK_REAL six_phi=log(Psi6);
CCTK_REAL six_phithreshold=log(Psi6threshold);
CCTK_REAL Psi6max_approx=350000;
rho_b = rho_horiz_cap+(rho_b-rho_horiz_cap)*exp(-200.0*SQR((six_phi-six_phithreshold)/log(Psi6max_approx)));
}
*/
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 9: `compute_smallba_b2_and_u_i_over_u0_psi4` \[Back to [top](toc)\]$$\label{compute_smallba_b2_and_u_i_over_u0_psi4}$$In this inlined function we will compute quantities related to the magnetic field measured in the comoving fluid frame, $b^{\mu}$.We will need the following identities$$\begin{align}v^{i} &= \frac{u^{i}}{u^{0}}\ ,\\B^{0}_{(u)} &= \frac{u_{i}B^{i}}{\alpha}\ ,\\B^{i}_{(u)} &= \frac{1}{u^{0}}\left(\frac{B^{i}}{\alpha} + u^{i}B^{0}_{(u)}\right)\ ,\\b^{\mu} &= \frac{B^{\mu}_{(u)}}{\sqrt{4\pi}}\ .\end{align}$$We start by setting the relation$$b^{0} = \frac{u_{i}B^{i}}{\alpha\sqrt{4\pi}} \implies \boxed{\alpha\sqrt{4\pi}b^{0} = u_{i}B^{i}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void compute_smallba_b2_and_u_i_over_u0_psi4(CCTK_REAL *METRIC,CCTK_REAL *METRIC_LAP_PSI4,CCTK_REAL *U,CCTK_REAL u0L,CCTK_REAL ONE_OVER_LAPSE_SQRT_4PI,
CCTK_REAL &u_x_over_u0_psi4,CCTK_REAL &u_y_over_u0_psi4,CCTK_REAL &u_z_over_u0_psi4,CCTK_REAL *smallb) {
// NOW COMPUTE b^{\mu} and b^2 = b^{\mu} b^{\nu} g_{\mu \nu}
CCTK_REAL ONE_OVER_U0 = 1.0/u0L;
CCTK_REAL shiftx_plus_vx = (METRIC[SHIFTX]+U[VX]);
CCTK_REAL shifty_plus_vy = (METRIC[SHIFTY]+U[VY]);
CCTK_REAL shiftz_plus_vz = (METRIC[SHIFTZ]+U[VZ]);
// Eq. 56 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// u_i = gamma_{ij} u^0 (v^j + beta^j), gamma_{ij} is the physical metric, and gamma_{ij} = Psi4 * METRIC[Gij], since METRIC[Gij] is the conformal metric.
u_x_over_u0_psi4 = METRIC[GXX]*shiftx_plus_vx + METRIC[GXY]*shifty_plus_vy + METRIC[GXZ]*shiftz_plus_vz;
u_y_over_u0_psi4 = METRIC[GXY]*shiftx_plus_vx + METRIC[GYY]*shifty_plus_vy + METRIC[GYZ]*shiftz_plus_vz;
u_z_over_u0_psi4 = METRIC[GXZ]*shiftx_plus_vx + METRIC[GYZ]*shifty_plus_vy + METRIC[GZZ]*shiftz_plus_vz;
// Eqs. 23 and 31 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// Compute alpha sqrt(4 pi) b^t = u_i B^i
CCTK_REAL alpha_sqrt_4pi_bt = ( u_x_over_u0_psi4*U[BX_CENTER] + u_y_over_u0_psi4*U[BY_CENTER] + u_z_over_u0_psi4*U[BZ_CENTER] ) * METRIC_LAP_PSI4[PSI4]*u0L;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we compute$$\begin{align}b^{i} &= \frac{B^{i}_{(u)}}{\sqrt{4\pi}}\\ &= \frac{1}{u^{0}\sqrt{4\pi}}\left(\frac{B^{i}}{\alpha} + B^{0}_{(u)}u^{i}\right)\\ &= \frac{1}{u^{0}\sqrt{4\pi}}\left(\frac{B^{i}}{\alpha} + \sqrt{4\pi}b^{0}u^{i}\right)\\ &= \frac{1}{\alpha\sqrt{4\pi}}\left(\frac{B^{i}}{u^{0}} + \alpha\sqrt{4\pi}b^{0}\frac{u^{i}}{u^{0}}\right)\\\implies &\boxed{b^{i} = \frac{1}{\alpha\sqrt{4\pi}}\left(\frac{B^{i}}{u^{0}} + \alpha\sqrt{4\pi}b^{0}v^{i}\right)}\ .\end{align}$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// Eq. 24 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// b^i = B^i_u / sqrt(4 pi)
// b^i = ( B^i/alpha + B^0_u u^i ) / ( u^0 sqrt(4 pi) )
// b^i = ( B^i/alpha + sqrt(4 pi) b^t u^i ) / ( u^0 sqrt(4 pi) )
// b^i = ( B^i + alpha sqrt(4 pi) b^t u^i ) / ( alpha u^0 sqrt(4 pi) )
// b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^t u^i/u^0 ) / ( alpha sqrt(4 pi) )
// b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^t v^i ) / ( alpha sqrt(4 pi) )
smallb[SMALLBX] = (U[BX_CENTER]*ONE_OVER_U0 + U[VX]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
smallb[SMALLBY] = (U[BY_CENTER]*ONE_OVER_U0 + U[VY]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
smallb[SMALLBZ] = (U[BZ_CENTER]*ONE_OVER_U0 + U[VZ]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
// Eq. 23 in http://arxiv.org/pdf/astro-ph/0503420.pdf, with alpha sqrt (4 pi) b^2 = u_i B^i already computed above
smallb[SMALLBT] = alpha_sqrt_4pi_bt * ONE_OVER_LAPSE_SQRT_4PI;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, we compute$$\begin{align}b^{2} &= g_{\mu\nu}b^{\mu}b^{\nu}\\ &= g_{00}\left(b^{0}\right)^{2} + g_{ij}b^{i}b^{j} + 2g_{0i}b^{0}b^{i}\\ &= \left(-\alpha^{2} + \gamma_{ij}\beta^{i}\beta^{j}\right)\left(b^{0}\right)^{2} + \gamma_{ij}b^{i}b^{j} + 2b^{0}\gamma_{ij}\beta^{j}b^{i}\\ &= -\left(\alpha b^{0}\right)^{2} + \gamma_{ij}\left[b^{i}b^{j} + 2b^{0}b^{i}\beta^{j} + \left(b^{0}\right)^{2}\beta^{i}\beta^{j}\right]\\\implies &\boxed{b^{2} = -\left(\alpha b^{0}\right)^{2} + \gamma_{ij}\left(b^{i} + b^{0}\beta^{i}\right)\left(b^{j} + b^{0}\beta^{j}\right)}\end{align}$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// b^2 = g_{\mu \nu} b^{\mu} b^{\nu}
// = gtt bt^2 + gxx bx^2 + gyy by^2 + gzz bz^2 + 2 (gtx bt bx + gty bt by + gtz bt bz + gxy bx by + gxz bx bz + gyz by bz)
// = (-al^2 + gamma_{ij} betai betaj) bt^2 + b^i b^j gamma_{ij} + 2 g_{t i} b^t b^i
// = - (alpha b^t)^2 + (b^t)^2 gamma_{ij} beta^i beta^j + b^i b^j gamma_{ij} + 2 b^t g_{t i} b^i
// = - (alpha b^t)^2 + (b^t)^2 gamma_{ij} beta^i beta^j + b^i b^j gamma_{ij} + 2 b^t (gamma_{ij} beta^j) b^i
// = - (alpha b^t)^2 + gamma_{ij} ((b^t)^2 beta^i beta^j + b^i b^j + 2 b^t beta^j b^i)
// = - (alpha b^t)^2 + gamma_{ij} ((b^t)^2 beta^i beta^j + 2 b^t beta^j b^i + b^i b^j)
// = - (alpha b^t)^2 + gamma_{ij} (b^i + b^t beta^i) (b^j + b^t beta^j)
CCTK_REAL bx_plus_shiftx_bt = smallb[SMALLBX]+METRIC[SHIFTX]*smallb[SMALLBT];
CCTK_REAL by_plus_shifty_bt = smallb[SMALLBY]+METRIC[SHIFTY]*smallb[SMALLBT];
CCTK_REAL bz_plus_shiftz_bt = smallb[SMALLBZ]+METRIC[SHIFTZ]*smallb[SMALLBT];
smallb[SMALLB2] = -SQR(METRIC_LAP_PSI4[LAPSE]*smallb[SMALLBT]) +
( METRIC[GXX]*SQR(bx_plus_shiftx_bt) + METRIC[GYY]*SQR(by_plus_shifty_bt) + METRIC[GZZ]*SQR(bz_plus_shiftz_bt) +
2.0*( METRIC[GXY]*(bx_plus_shiftx_bt)*(by_plus_shifty_bt) +
METRIC[GXZ]*(bx_plus_shiftx_bt)*(bz_plus_shiftz_bt) +
METRIC[GYZ]*(by_plus_shifty_bt)*(bz_plus_shiftz_bt) ) ) * METRIC_LAP_PSI4[PSI4]; // mult by psi4 because METRIC[GIJ] is the conformal metric.
/***********************************************************/
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 10: Code validation \[Back to [top](toc)\]$$\label{code_validation}$$First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
###Code
# Verify if the code generated by this tutorial module
# matches the original IllinoisGRMHD source code
# First download the original IllinoisGRMHD source code
import urllib
# In Python 3, urlopen lives in urllib.request, which a bare "import urllib"
# does not necessarily expose; import it explicitly when available.
try:
    import urllib.request
except ImportError:
    pass # Python 2: urllib.urlopen() is available directly
from os import path
original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/inlined_functions.C"
original_IGM_file_name = "inlined_functions-original.C"
original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
try:
original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
# If all else fails, hope wget does the job
!wget -O $original_IGM_file_path $original_IGM_file_url
# Perform validation
Validation__inlined_functions__C = !diff $original_IGM_file_path $outfile_path__inlined_functions__C
if Validation__inlined_functions__C == []:
# If the validation passes, we do not need to store the original IGM source code file
!rm $original_IGM_file_path
print("Validation test for inlined_functions.C: PASSED!")
else:
# If the validation fails, we keep the original IGM source code file
print("Validation test for inlined_functions.C: FAILED!")
# We also print out the difference between the code generated
# in this tutorial module and the original IGM source code
print("Diff:")
for diff_line in Validation__inlined_functions__C:
print(diff_line)
###Output
Validation test for inlined_functions.C: FAILED!
Diff:
1,4c1
< static inline CCTK_REAL fasterpow_ppm_reconstruct(CCTK_REAL inputvar,CCTK_REAL inputpow) {
< if(inputpow==2.0) return SQR(inputvar);
< return pow(inputvar,inputpow);
< }
---
>
38a36
>
43a42
>
59c58,59
< static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
---
>
> static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL Gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
64c64,65
< CCTK_REAL c_s_squared = (dPcold_drho + gamma_th*(gamma_th-1.0)*eps_th)/(h);
---
> CCTK_REAL c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th-1.0)*eps_th)/(h);
>
66a68
>
70,86c72,180
< static inline void compute_P_cold__eps_cold__dPcold_drho__eps_th__h__gamma_cold(CCTK_REAL *U, eos_struct &eos,
< CCTK_REAL &P_cold,CCTK_REAL &eps_cold,CCTK_REAL &dPcold_drho,CCTK_REAL &eps_th,CCTK_REAL &h,
< CCTK_REAL &gamma_cold) {
< // This code handles equations of state of the form defined
< // in Eqs 13-16 in http://arxiv.org/pdf/0802.0200.pdf
<
< if(U[RHOB]==0) {
< P_cold = 0.0;
< eps_cold = 0.0;
< dPcold_drho = 0.0;
< eps_th = 0.0;
< h = 0.0;
< gamma_cold = eos.gamma_tab[0];
< return;
< }
<
< CCTK_REAL U_RHOB_inv = 1.0/U[RHOB];
---
> /* Function : font_fix__rhob_loop()
> * Authors : Leo Werneck
> * Description : Determines rhob using the font fix prescription
> * Dependencies: find_polytropic_K_and_Gamma_index()
> * : compute_P_cold__eps_cold()
> * Reference : Etienne et al. (2011) [https://arxiv.org/pdf/1112.0568.pdf]
> *
> * Inputs : maxits - maximum number of iterations allowed
> * : tol - font fix tolerance
> * : W - See eq. (A26)
> * : Sf2 - S_{fluid}^{2}, see eq. (A24)
> * : Psim6 - This is equal to sqrt(\gamma)
> * : sdots - \tilde{S}_{\mu}\tilde{S}^{\mu}
> * : BbardotS2 - (\bar{B}^{\mu}S_{\mu})^{2},
> * : B2bar - \bar{B}^{2}, see eq. (A28)
> * : CONSERVS - Array of conservative variables
> * : eos - Struct of EOS parameters
> * : rhob_in - Initial value of rhob
> * : rhob_out - Output variable
> *
> * Outputs : rhob_out - Updated value of rhob
> * : return value: 0 - Font fix worked
> * : return value: 1 - Font fix failed
> */
> inline int font_fix__rhob_loop( int maxits, CCTK_REAL tol,
> CCTK_REAL W, CCTK_REAL Sf2, CCTK_REAL Psim6, CCTK_REAL sdots, CCTK_REAL BbardotS2, CCTK_REAL B2bar,
> CCTK_REAL *CONSERVS,
> eos_struct eos, CCTK_REAL rhob_in, CCTK_REAL &rhob_out ) {
>
> /* Declare basic variables */
> bool fontcheck=true;
> int itcount = 0, j0, j1;
> CCTK_REAL W0, Sf20, rhob0, rhob1, h, P_cold, eps_cold;
>
> //////////////////////
> // OUTER LOOP START //
> //////////////////////
> while(fontcheck && itcount < maxits) {
>
> /* Set variables to their input values */
> itcount++;
> W0 = W;
> Sf20 = Sf2;
> rhob1 = rhob_in;
>
> /* Based on rhob_in (i.e. rhob1), determine the
> * polytropic index j1
> */
> j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
>
> //////////////////////
> // INNER LOOP START //
> //////////////////////
> do {
>
> /* Set rhob0/j0 to be equal to the rhob/j used
> * in the previous iteration, i.e. rhob1/j1.
> */
> rhob0 = rhob1;
> j0 = j1;
>
> /* Compute h using h_cold and our polytropic EOS
> * .------------------------------------------.
> * | h = h_cold = 1 + eps_cold + P_cold/rhob. |
> * .------------------------------------------.
> */
> compute_P_cold__eps_cold(eos,rhob0, P_cold, eps_cold);
> h = 1.0 + eps_cold + P_cold/rhob0;
>
> /* Update rhob using eq. (A62) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .---------------------------------------------------------------------------.
> * | rhob = rho_star * Psi^{-6} / sqrt( 1 + S_fluid^{2}/( (rho_star*h)^{2} ) ) |
> * .---------------------------------------------------------------------------.
> */
> rhob1 = CONSERVS[RHOSTAR]*Psim6/sqrt(1.0+Sf20/SQR(CONSERVS[RHOSTAR]*h));
>
> /* Update j1 */
> j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
>
> } while( fabs(rhob1-rhob0) > rhob1*tol || j1 != j0);
> //////////////////////
> // INNER LOOP END //
> //////////////////////
>
> /* Output the last value of rhob */
> rhob_out = rhob1;
>
> /* Perform physical checks on the variables
> * and output the last value of h obtained
> */
> compute_P_cold__eps_cold(eos,rhob_out, P_cold, eps_cold);
> h = 1.0 + eps_cold + P_cold/rhob_out;
>
> /* Set W based on eq. (A60) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .-------------------------------------------------------.
> * | W = psi^{-6} * sqrt( S_fluid^{2} + (rho_star*h)^{2} ) |
> * .-------------------------------------------------------.
> */
> W = sqrt( Sf20 + SQR(CONSERVS[RHOSTAR]*h))*Psim6;
>
> /* Then update S_{fluid}^{2} using eq. (A61) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .---------------------------------------------------------------------------.
> * | S_fluid^{2} = ( W^{2}*S^{2} + (B.S)^2*(B^{2} + 2W) )/( ( W + B^{2} )^{2} )|
> * .---------------------------------------------------------------------------.
> */
> Sf2 = (SQR(W)*sdots + BbardotS2*(B2bar + 2.0*W))/SQR(W+B2bar);
88,111c182
< if(eos.neos==1) {
< // Eq. 14 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{cold} = K_i rho_i^{\Gamma_i}
< P_cold = eos.k_tab[0]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[0]);
< // Eq. 16 of http://arxiv.org/pdf/0802.0200.pdf :
< // \epsilon_{cold} = \int ( P_{cold}(rho) / rho^2 ) drho
< // = \int ( K_0 \rho^{\Gamma_0 - 2} ) drho
< // = ( K_0 \rho^{\Gamma_0 - 1} ) / (\Gamma_0 - 1)
< // = ( P_{cold} / rho ) / (\Gamma_0 - 1)
< eps_cold = P_cold*U_RHOB_inv/(eos.gamma_tab[0]-1.0);
< // dPcold/drho = K_i \Gamma_i rho_i^{\Gamma_i-1} = \Gamma_i P_{cold} / rho
< dPcold_drho = eos.gamma_tab[0]*P_cold*U_RHOB_inv;
< // Eq. 15 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{th} = (\Gamma_{th} - 1) \rho_0 \epsilon_{th},
< // Eq. 13 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{th} = P - P_{cold}
< // -> P - P_{cold} = (\Gamma_{th} - 1) \rho_0 \epsilon_{th}
< // -> \epsilon_{th} = ( P - P_{cold} ) / [ (\Gamma_{th} - 1) \rho_0 ]
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< // Just below Eq. 16 in http://arxiv.org/pdf/astro-ph/0503420.pdf :
< // h = 1 + \epsilon + P/rho
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[0];
< return;
---
> if ( fabs(W-W0) < W*tol && fabs(Sf20-Sf2) < Sf2*tol) fontcheck=false;
113,125c184,193
<
< // See comments above for the eos.neos==1 case for relevant
< // equations & references; the extension to arbitrary "nn"
< // is straightforward.
< for(int nn=1;nn<eos.neos;nn++) {
< if (U[RHOB] <= eos.rho_tab[nn] && U[RHOB] > eos.rho_tab[nn-1]) {
< P_cold = eos.k_tab[nn]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[nn]);
< eps_cold = eos.eps_tab[nn-1] + (P_cold*U_RHOB_inv - eos.P_tab[nn-1]/eos.rho_tab[nn-1])/(eos.gamma_tab[nn]-1.0);
< dPcold_drho = eos.gamma_tab[nn]*P_cold*U_RHOB_inv;
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[nn];
< }
---
> //////////////////////
> // OUTER LOOP END //
> //////////////////////
>
> /* If the code converged before the max
> * number of iterations were exceeded,
> * return 0, otherwise return 1.
> */
> if(fontcheck || itcount >= maxits) {
> return 1;
127,133c195,196
< if (U[RHOB] > eos.rho_tab[eos.neos-1]) {
< P_cold = eos.k_tab[eos.neos]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[eos.neos]);
< eps_cold = eos.eps_tab[eos.neos-1] + (P_cold*U_RHOB_inv - eos.P_tab[eos.neos-1]/eos.rho_tab[eos.neos-1])/(eos.gamma_tab[eos.neos]-1.0);
< dPcold_drho = eos.gamma_tab[eos.neos]*P_cold*U_RHOB_inv;
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[eos.neos];
---
> else {
> return 0;
136a200
>
149a214
>
150a216,217
>
> #ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
151a219,220
> #endif
>
165a235
>
176a247
>
197a269
>
212a285
>
234a308
>
252a327
>
265a341
>
283a360
>
###Markdown
Step 11: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-IllinoisGRMHD__inlined_functions.pdf](Tutorial-IllinoisGRMHD__inlined_functions.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means).
###Code
latex_nrpy_style_path = os.path.join(nrpy_dir_path,"latex_nrpy_style.tplx")
#!jupyter nbconvert --to latex --template $latex_nrpy_style_path --log-level='WARN' Tutorial-IllinoisGRMHD__inlined_functions.ipynb
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
_____no_output_____
###Markdown
Tutorial-IllinoisGRMHD: inlined_functions.C
Authors: Leo Werneck & Zach Etienne
**This module is currently under development**
In this tutorial module we explain a series of inline functions that are used by major functions within IllinoisGRMHD. Required and recommended citations:
* **(Required)** Etienne, Z. B., Paschalidis, V., Haas R., Mösta P., and Shapiro, S. L. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).
* **(Required)** Noble, S. C., Gammie, C. F., McKinney, J. C., Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).
* **(Recommended)** Del Zanna, L., Bucciantini N., Londrillo, P. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)).
Table of Contents$$\label{toc}$$This module is organized as follows
0. [Step 0](src_dir): **Source directory creation**
1. [Step 1](introduction): **Introduction**
1. [Step 2](find_cp_cm): **The `find_cp_cm()` function**
1. [Step 3](compute_v02): **The `compute_v02()` function**
1. [Step 4](font_fix__rhob_loop): **The `font_fix__rhob_loop()` function**
1. [Step 5](lower_4vector_output_spatial_part): **The `lower_4vector_output_spatial_part()` function**
1. [Step 6](impose_speed_limit_output_u0): **The `impose_speed_limit_output_u0()` function**
1. [Step 7](enforce_pressure_floor_ceiling): **The `enforce_pressure_floor_ceiling()` function**
1. [Step 8](compute_smallba_b2_and_u_i_over_u0_psi4): **The `compute_smallba_b2_and_u_i_over_u0_psi4()` function**
1. [Step 9](code_validation): **Code validation**
1. [Step 10](latex_pdf_output): **Output this notebook to $\LaTeX$-formatted PDF file**
Step 0: Source directory creation \[Back to [top](toc)\]$$\label{src_dir}$$We will now use the [cmdline_helper.py NRPy+ module](Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet.
###Code
# Step 0: Creation of the IllinoisGRMHD source directory
# Step 0a: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step 0b: Load up cmdline_helper and create the directory
import cmdline_helper as cmd
IGM_src_dir_path = os.path.join("..","src")
cmd.mkdir(IGM_src_dir_path)
# Step 0c: Create the output file path
outfile_path__inlined_functions__C = os.path.join(IGM_src_dir_path,"inlined_functions.C")
###Output
_____no_output_____
###Markdown
Step 1: Introduction \[Back to [top](toc)\]$$\label{introduction}$$In this tutorial notebook we explain functions of `IllinoisGRMHD` which are called for various purposes. This means that this notebook does not have a specific "theme". We will cover functions whose purposes vary from a simple optimization when squaring numbers to computing minimum and maximum characteristic speeds at cell interfaces. We have tried our best to keep this tutorial module as independent from the others as possible. When new concepts appear, we offer useful references. The mathematical requirements of each function are also covered in great detail.
Step 2: The `find_cp_cm()` function \[Back to [top](toc)\]$$\label{find_cp_cm}$$We will now explain the inlined function `find_cp_cm`. Keep in mind that this function depends on the function `compute_v02`, [which is implemented below](compute_v02). This function is called with the objective of computing the minimum ($-$) and maximum ($+$) characteristic speeds at each cell interface, $c_{\pm}^{r,l}$. We approximate the general GRMHD dispersion relation (eq. 27 of [Gammie, McKinney & Tóth (2003)](https://arxiv.org/pdf/astro-ph/0301509.pdf)) by the simpler expression$$\omega_{\rm cm}^{2} = \left[v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)\right]k_{\rm cm}^{2}\ ,$$where $\omega_{\rm cm}=-k_{\mu}u^{\mu}$ is the frequency and $k_{\rm cm}^{2} = K_{\mu}K^{\mu}$ the wavenumber of an MHD wave mode in the frame comoving with the fluid, where $K_{\mu}$ is defined as the projection of the wave vector $k^{\nu}$ onto the direction normal to $u^{\nu}$: $K_{\mu} = \left(g_{\mu\nu}+u_{\mu}u_{\nu}\right)k^{\nu}$. $c_{\rm s}$ is the sound speed, and $v_{\rm A}$ is the Alfvén speed, given by$$v_{\rm A} = \sqrt{\frac{b^{2}}{\rho_{b}h + b^{2}}}\ .$$With these definitions, we may then solve the approximate dispersion relation above along direction $i$, noting that in the comoving frame $k_{\mu} = \left(-\omega,k_{j}\delta^{j}_{\ i}\right)$ and the wave (phase) velocity is $c_{\pm} = \left.\omega\middle/\left(k_{j}\delta^{j}_{\ i}\right)\right.$. The dispersion relation can then be written as a quadratic equation for $c_{\pm}$:$$ac_{\pm}^{2} + bc_{\pm} + c = 0\ ,$$with$$\boxed{\begin{align}a &= \left(1-v_{0}^{2}\right)\left(u^{0}\right)^{2} - v_{0}^{2}g^{00}\ ,\\b &= 2v_{0}^{2}g^{i0} - 2u^{i}u^{0}\left(1-v^{2}_{0}\right)\ ,\\c &= \left(1-v_{0}^{2}\right)\left(u^{i}\right)^{2} - v_{0}^{2}g^{ii}\ ,\\v_{0}^{2} &= v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)\ ,\\c_{\rm s} &= \left.\left[\frac{dP_{\rm cold}}{d\rho_{b}} + \Gamma_{\rm th}\left(\Gamma_{\rm th}-1\right)\epsilon_{\rm th}\right]\middle/h\right.\ ,\\c_{+} &= \max\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ ,\\c_{-} &= \min\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ .\end{align}}$$For the implementation of $v_{0}^{2}$, please see [Step 3 below](compute_v02).
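Before turning to the implementation, a quick sanity check on the boxed coefficients may be helpful. For a fluid at rest in flat space ($u^{i}=0$, $u^{0}=1$, $g^{00}=-1$, $g^{i0}=0$, $g^{ii}=1$) they reduce to $a=1$, $b=0$, $c=-v_{0}^{2}$, so the two roots are simply $c_{\pm}=\pm v_{0}$, i.e. the wave propagates symmetrically at speed $v_{0}$, as expected. The standalone snippet below is *not* part of the generated `inlined_functions.C`; it uses plain `double` in place of `CCTK_REAL` and a made-up value of $v_{0}^{2}$, and merely verifies this limit numerically:
```c
/* Flat-space, fluid-at-rest check of the quadratic a c^2 + b c + c0 = 0
   (illustration only; NOT written to inlined_functions.C).              */
#include <math.h>
#include <stdio.h>

int main() {
  const double v02 = 0.25;                       /* made-up value of v_0^2      */
  const double u0  = 1.0, ui  = 0.0;             /* fluid at rest               */
  const double g00 = -1.0, gi0 = 0.0, gii = 1.0; /* Minkowski metric components */

  const double a = (1.0 - v02)*u0*u0 - v02*g00;            /* = 1      */
  const double b = 2.0*v02*gi0 - 2.0*ui*u0*(1.0 - v02);    /* = 0      */
  const double c = (1.0 - v02)*ui*ui - v02*gii;            /* = -v_0^2 */

  const double detm   = sqrt(fmax(b*b - 4.0*a*c, 0.0));
  const double cplus  =  0.5*(detm - b)/a;
  const double cminus = -0.5*(detm + b)/a;

  printf("cplus = %+.3f, cminus = %+.3f (expected +/- %.3f)\n",
         cplus, cminus, sqrt(v02));
  return 0;
}
```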
###Code
%%writefile $outfile_path__inlined_functions__C
static inline void find_cp_cm(CCTK_REAL &cplus,CCTK_REAL &cminus,CCTK_REAL v02,CCTK_REAL u0,
CCTK_REAL vi,CCTK_REAL ONE_OVER_LAPSE_SQUARED,CCTK_REAL shifti,CCTK_REAL psim4,CCTK_REAL gupii) {
// This computes phase speeds in the direction given by flux_dirn.
// Note that we replace the full dispersion relation with a simpler
// one, which overestimates the max. speeds by a factor of ~2.
// See full discussion around Eqs. 49 and 50 in
// http://arxiv.org/pdf/astro-ph/0503420.pdf .
// What follows is a complete derivation of the quadratic we solve.
// wcm = (-k_0 u0 - k_x ux)
// kcm^2 = K_{\mu} K^{\mu},
// K_{\mu} K^{\mu} = (g_{\mu a} + u_{\mu} u_a) k^a * g^{\mu b} [ (g_{c b} + u_c u_b) k^c ]
// --> g^{\mu b} (g_{c b} + u_{c} u_{b}) k^c = (\delta^{\mu}_c + u_c u^{\mu} ) k^c
// = (g_{\mu a} + u_{\mu} u_a) k^a * (\delta^{\mu}_c + u_c u^{\mu} ) k^c
// =[(g_{\mu a} + u_{\mu} u_a) \delta^{\mu}_c + (g_{\mu a} + u_{\mu} u_a) u_c u^{\mu} ] k^c k^a
// =[(g_{c a} + u_c u_a) + (u_c u_a - u_a u_c] k^c k^a
// =(g_{c a} + u_c u_a) k^c k^a
// = k_a k^a + u^c u^a k_c k_a
// k^a = g^{\mu a} k_{\mu} = g^{0 a} k_0 + g^{x a} k_x
// k_a k^a = k_0 g^{0 0} k_0 + k_x k_0 g^{0 x} + g^{x 0} k_0 k_x + g^{x x} k_x k_x
// = g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2
// u^c u^a k_c k_a = (u^0 k_0 + u^x k_x) (u^0 k_0 + u^x k_x) = (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2
// (k_0 u0)^2 + 2 k_x ux k_0 u0 + (k_x ux)^2 = v02 [ (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2 + g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2]
// (1-v02) (u^0 k_0 + u^x k_x)^2 = v02 (g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2)
// (1-v02) (u^0 k_0/k_x + u^x)^2 = v02 (g^{00} (k_0/k_x)^2 + 2 g^{x0} k_0/k_x + g^{xx})
// (1-v02) (u^0 X + u^x)^2 = v02 (g^{00} X^2 + 2 g^{x0} X + g^{xx})
// (1-v02) (u0^2 X^2 + 2 ux u0 X + ux^2) = v02 (g^{00} X^2 + 2 g^{x0} X + g^{xx})
// X^2 ( (1-v02) u0^2 - v02 g^{00}) + X (2 ux u0 (1-v02) - 2 v02 g^{x0}) + (1-v02) ux^2 - v02 g^{xx}
// a = (1-v02) u0^2 - v02 g^{00} = (1-v02) u0^2 + v02/lapse^2 <-- VERIFIED
// b = 2 ux u0 (1-v02) - 2 v02 shiftx/lapse^2 <-- VERIFIED, X->-X, because X = -w/k_1, and we are solving for -X.
// c = (1-v02) ux^2 - v02 (gupxx*psim4 - (shiftx/lapse)^2) <-- VERIFIED
// v02 = v_A^2 + c_s^2 (1 - v_A^2)
CCTK_REAL u0_SQUARED=SQR(u0);
###Output
Writing ../src/inlined_functions.C
###Markdown
We start by setting$$\boxed{\begin{align}a &= \left(1-v_{0}^{2}\right)\left(u^{0}\right)^{2} - v_{0}^{2}g^{00}\\b &= 2v_{0}^{2}g^{i0} - 2u^{i}u^{0}\left(1-v^{2}_{0}\right)\\c &= \left(1-v_{0}^{2}\right)\left(u^{i}\right)^{2} - v_{0}^{2}g^{ii}\end{align}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
//Find cplus, cminus:
CCTK_REAL a = u0_SQUARED * (1.0-v02) + v02*ONE_OVER_LAPSE_SQUARED;
CCTK_REAL b = 2.0* ( shifti*ONE_OVER_LAPSE_SQUARED * v02 - u0_SQUARED * vi * (1.0-v02) );
CCTK_REAL c = u0_SQUARED*SQR(vi) * (1.0-v02) - v02 * ( psim4*gupii -
SQR(shifti)*ONE_OVER_LAPSE_SQUARED);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we find the minimum ($-$) and maximum ($+$) characteristic speeds$$\boxed{\begin{align}c_{+} &= \max\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ ,\\c_{-} &= \min\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ .\end{align}}$$
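One implementation detail in the cell below is worth flagging: instead of clamping a discriminant driven slightly negative by roundoff with `if(detm < 0.0) detm = 0.0;`, the code uses the branch-free identity $\tfrac{1}{2}\left(d+|d|\right)=\max(d,0)$, so that `sqrt(0.5*(detm + fabs(detm)))` is equal to `sqrt(max(detm,0))`. The small standalone program below (illustration only, not part of the generated source) simply demonstrates the equivalence:
```c
/* Demonstrates that 0.5*(d + |d|) equals max(d,0), including for a tiny
   negative value of the kind produced by floating-point roundoff.       */
#include <math.h>
#include <stdio.h>

int main() {
  const double d_vals[3] = { 4.0, 0.0, -1.0e-16 };  /* last value mimics roundoff */
  for(int i=0; i<3; i++) {
    const double d = d_vals[i];
    printf("d = % .1e : sqrt(0.5*(d+|d|)) = %.3f, sqrt(max(d,0)) = %.3f\n",
           d, sqrt(0.5*(d + fabs(d))), sqrt(fmax(d, 0.0)));
  }
  return 0;
}
```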
###Code
%%writefile -a $IGM_src_dir_path/inlined_functions.C
CCTK_REAL detm = b*b - 4.0*a*c;
//ORIGINAL LINE OF CODE:
//if(detm < 0.0) detm = 0.0;
//New line of code (without the if() statement) has the same effect:
detm = sqrt(0.5*(detm + fabs(detm))); /* Based on very nice suggestion from Roland Haas */
cplus = 0.5*(detm-b)/a;
cminus = -0.5*(detm+b)/a;
if (cplus < cminus) {
CCTK_REAL cp = cminus;
cminus = cplus;
cplus = cp;
}
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 3: The `compute_v02()` function \[Back to [top](toc)\]$$\label{compute_v02}$$This function is used to evaluate $v_{0}^{2}$, a quantity necessary for the computation of the minimum and maximum characteristic speeds at each cell interface, $c_{\pm}^{r,l}$. For more information on this procedure, please see the [implementation of the `find_cp_cm` function in Step 2](find_cp_cm). We start with the sound speed:$$\boxed{c_{\rm s} = \left.\left[\frac{dP_{\rm cold}}{d\rho_{b}} + \Gamma_{\rm th}\left(\Gamma_{\rm th}-1\right)\epsilon_{\rm th}\right]\middle/h\right.}\ .$$
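As a quick consistency check (this remark does not appear in the generated source), note that for a single polytrope $P_{\rm cold}=K\rho_{b}^{\Gamma}$ we have $dP_{\rm cold}/d\rho_{b} = \Gamma P_{\rm cold}/\rho_{b}$, so in the cold limit ($\epsilon_{\rm th}=0$) the expression above reduces to the familiar sound speed of a relativistic polytrope,$$c_{\rm s}^{2} = \frac{\Gamma P_{\rm cold}}{\rho_{b}h}\ .$$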
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL Gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
if(U[RHOB]<=0) { v02L=1.0; return; }
/* c_s = sound speed = (dP_c/drho + \Gamma(\Gamma-1) \epsilon_th)/h */
CCTK_REAL c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th-1.0)*eps_th)/(h);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Next we compute the square of the Alfvén speed, $v_{\rm A}$, which is given by$$\boxed{v_{\rm A}^{2} = \frac{b^{2}}{\rho_{b}h + b^{2}}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
/* v_A = Alfven speed = sqrt( b^2/(rho0 h + b^2) ) */
CCTK_REAL v_A_squared = smallb[SMALLB2]/(smallb[SMALLB2] + U[RHOB]*(h));
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, $v_{0}$ is related to the sound speed and the Alfvén speed via$$\boxed{v_{0}^{2} = v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)}\ .$$
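Incidentally, this combination can be rewritten as$$1 - v_{0}^{2} = \left(1-v_{\rm A}^{2}\right)\left(1-c_{\rm s}^{2}\right)\ ,$$which makes it manifest that $v_{0}^{2}\leq1$ whenever $v_{\rm A}^{2}\leq1$ and $c_{\rm s}^{2}\leq1$, i.e. the approximate characteristic speeds used by `find_cp_cm` never exceed the speed of light.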
###Code
%%writefile -a $outfile_path__inlined_functions__C
v02L = v_A_squared + c_s_squared*(1.0-v_A_squared);
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 4: The `font_fix__rhob_loop()` function \[Back to [top](toc)\]$$\label{font_fix__rhob_loop}$$This function implements the main loop inside the [font_fix__hybrid_EOS()](Tutorial-IllinoisGRMHD__the_conservative_to_primitive_algorithm.ipynbfont_fix_hybrid_eos) function (see [Font *et al.*](https://arxiv.org/pdf/gr-qc/9811015.pdf)). We now perform the following iterative process, which is described in detail in [Appendix A of Etienne *et al.* (2012)](https://arxiv.org/pdf/1112.0568.pdf). We refer the reader to eqs. (A60), (A61), and (A62).
1. Store the previously computed values of $W_{n}$, $S_{{\rm fluid},n}^{2}$, and $\rho_{n}$
2. Compute $h = 1 + \epsilon_{\rm cold} + P_{\rm cold}/\rho_{n}$
3. Set$$\boxed{\rho_{n+1} = \psi^{-6}\rho_{\star}\left[1 + \frac{S_{{\rm fluid},n}^{2}}{\left(\rho_{\star} h_{n}\right)^{2}}\right]^{-1/2}}$$
4. For a given value of $n$, perform steps 1 (for $\rho$), 2 and 3 until $\left|\rho_{n+1}-\rho_{n}\right| < \rho_{n+1}\epsilon$, where $\epsilon$ is a user-given tolerance
5. After convergence is obtained, update:$$\boxed{\begin{align}h_{n+1} &= 1 + \epsilon_{\rm cold} + P_{\rm cold}/\rho_{n+1}\\W_{n+1} &= \psi^{-6}\sqrt{\tilde{S}^{2}_{{\rm fluid},n} + \rho_{\star}^{2} h_{n+1}^{2}}\\S_{{\rm fluid},n+1}^{2} &= \frac{W^{2}_{n+1}\left(\tilde{S}\cdot\tilde{S}\right) + \left(\bar{B}\cdot\tilde{S}\right)^{2}\left(\bar{B}^{2} + 2W_{n+1}\right)}{\left(W_{n+1} + \bar{B}^{2}\right)^{2}}\end{align}}\ .$$
6. Repeat steps 1 through 5 until $\left|W_{n+1}-W_{n}\right| < W_{n+1}\epsilon$ *and* $\left|S^{2}_{{\rm fluid},n+1}-S^{2}_{{\rm fluid},n}\right| < S^{2}_{{\rm fluid},n+1}\epsilon$ *or* we reach the maximum number of iterations
7. If font fix fails, increase the tolerance and try again.

A simplified, standalone sketch of the inner iteration (steps 2 through 4) for a single polytrope is given below, just before the actual implementation.
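The sketch below is an illustration only: it is *not* written to `inlined_functions.C`, it replaces `CCTK_REAL` by `double`, it assumes a single polytrope (so no EOS-index bookkeeping is needed), and it holds $S_{\rm fluid}^{2}$ fixed, i.e. it omits the outer loop that updates $W$ and $S_{\rm fluid}^{2}$. All numerical values are made up for the sake of the example.
```c
/* Minimal standalone sketch of the inner fixed-point iteration, eq. (A62). */
#include <math.h>
#include <stdio.h>

int main() {
  const double K = 100.0, Gamma = 2.0;  /* polytropic EOS: P_cold = K rho^Gamma */
  const double rho_star = 1.2e-3;       /* conservative density (made-up)       */
  const double Psim6    = 1.0;          /* psi^{-6}; flat space for simplicity  */
  const double Sf2      = 1.0e-8;       /* S_fluid^2, held fixed here (made-up) */
  const double tol      = 1e-12;

  double rhob = rho_star*Psim6;         /* initial guess */
  for(int it=0; it<100; it++) {
    const double P_cold   = K*pow(rhob,Gamma);
    const double eps_cold = P_cold/(rhob*(Gamma-1.0));    /* eps_cold for a single polytrope */
    const double h        = 1.0 + eps_cold + P_cold/rhob; /* h = h_cold                      */
    const double rhob_new = rho_star*Psim6/sqrt(1.0 + Sf2/((rho_star*h)*(rho_star*h)));
    const int converged   = fabs(rhob_new - rhob) < rhob_new*tol;
    rhob = rhob_new;
    if(converged) { printf("converged after %d iterations: rhob = %.15e\n", it+1, rhob); break; }
  }
  return 0;
}
```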
###Code
%%writefile -a $outfile_path__inlined_functions__C
/* Function : font_fix__rhob_loop()
* Authors : Leo Werneck
* Description : Determines rhob using the font fix prescription
* Dependencies: find_polytropic_K_and_Gamma_index()
* : compute_P_cold__eps_cold()
* Reference : Etienne et al. (2011) [https://arxiv.org/pdf/1112.0568.pdf]
*
* Inputs : maxits - maximum number of iterations allowed
* : tol - font fix tolerance
* : W - See eq. (A26)
* : Sf2 - S_{fluid}^{2}, see eq. (A24)
* : Psim6 - This is equal to sqrt(\gamma)
* : sdots - \tilde{S}_{\mu}\tilde{S}^{\mu}
* : BbardotS2 - (\bar{B}^{\mu}S_{\mu})^{2},
* : B2bar - \bar{B}^{2}, see eq. (A28)
* : CONSERVS - Array of conservative variables
* : eos - Struct of EOS parameters
* : rhob_in - Initial value of rhob
* : rhob_out - Output variable
*
* Outputs : rhob_out - Updated value of rhob
* : return value: 0 - Font fix worked
* : return value: 1 - Font fix failed
*/
inline int font_fix__rhob_loop( int maxits, CCTK_REAL tol,
CCTK_REAL W, CCTK_REAL Sf2, CCTK_REAL Psim6, CCTK_REAL sdots, CCTK_REAL BbardotS2, CCTK_REAL B2bar,
CCTK_REAL *CONSERVS,
eos_struct eos, CCTK_REAL rhob_in, CCTK_REAL &rhob_out ) {
/* Declare basic variables */
bool fontcheck=true;
int itcount = 0, j0, j1;
CCTK_REAL W0, Sf20, rhob0, rhob1, h, P_cold, eps_cold;
//////////////////////
// OUTER LOOP START //
//////////////////////
while(fontcheck && itcount < maxits) {
/* Set variables to their input values */
itcount++;
W0 = W;
Sf20 = Sf2;
rhob1 = rhob_in;
/* Based on rhob_in (i.e. rhob1), determine the
* polytropic index j1
*/
j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
//////////////////////
// INNER LOOP START //
//////////////////////
do {
/* Set rhob0/j0 to be equal to the rhob/j used
* in the previous iteration, i.e. rhob1/j1.
*/
rhob0 = rhob1;
j0 = j1;
/* Compute h using h_cold and our polytropic EOS
* .------------------------------------------.
* | h = h_cold = 1 + eps_cold + P_cold/rhob. |
* .------------------------------------------.
*/
compute_P_cold__eps_cold(eos,rhob0, P_cold, eps_cold);
h = 1.0 + eps_cold + P_cold/rhob0;
/* Update rhob using eq. (A62) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .---------------------------------------------------------------------------.
* | rhob = rho_star * Psi^{-6} / sqrt( 1 + S_fluid^{2}/( (rho_star*h)^{2} ) ) |
* .---------------------------------------------------------------------------.
*/
rhob1 = CONSERVS[RHOSTAR]*Psim6/sqrt(1.0+Sf20/SQR(CONSERVS[RHOSTAR]*h));
/* Update j1 */
j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
} while( fabs(rhob1-rhob0) > rhob1*tol || j1 != j0);
//////////////////////
// INNER LOOP END //
//////////////////////
/* Output the last value of rhob */
rhob_out = rhob1;
/* Perform physical checks on the variables
* and output the last value of h obtained
*/
compute_P_cold__eps_cold(eos,rhob_out, P_cold, eps_cold);
h = 1.0 + eps_cold + P_cold/rhob_out;
/* Set W based on eq. (A60) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .-------------------------------------------------------.
* | W = psi^{-6} * sqrt( S_fluid^{2} + (rho_star*h)^{2} ) |
* .-------------------------------------------------------.
*/
W = sqrt( Sf20 + SQR(CONSERVS[RHOSTAR]*h))*Psim6;
/* Then update S_{fluid}^{2} using eq. (A61) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .---------------------------------------------------------------------------.
* | S_fluid^{2} = ( W^{2}*S^{2} + (B.S)^2*(B^{2} + 2W) )/( ( W + B^{2} )^{2} )|
* .---------------------------------------------------------------------------.
*/
Sf2 = (SQR(W)*sdots + BbardotS2*(B2bar + 2.0*W))/SQR(W+B2bar);
if ( fabs(W-W0) < W*tol && fabs(Sf20-Sf2) < Sf2*tol) fontcheck=false;
}
//////////////////////
// OUTER LOOP END //
//////////////////////
/* If the code converged before the max
* number of iterations were exceeded,
* return 0, otherwise return 1.
*/
if(fontcheck || itcount >= maxits) {
return 1;
}
else {
return 0;
}
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 5: The `lower_4vector_output_spatial_part()` function \[Back to [top](toc)\]$$\label{lower_4vector_output_spatial_part}$$This function is used to lower the indices of the spatial components of 4-vectors, $b^{\mu}$. Consider$$\begin{align}b_{i} &= g_{i\mu}b^{\mu} \\ &= g_{i0}b^{0} + g_{ij}b^{j} \\ &= \left(\gamma_{ij}\beta^{j}\right)b^{0} + \gamma_{ij}b^{j} \\ &= \gamma_{ij}\left(b^{j} + \beta^{j}b^{0}\right)\ ,\end{align}$$or, using the conformal metric and each component separately,$$\boxed{\begin{align}b_{x} &= \psi^{4}\left[\bar{\gamma}_{xx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{xy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{xz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\\b_{y} &= \psi^{4}\left[\bar{\gamma}_{yx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{yy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{yz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\\b_{z} &= \psi^{4}\left[\bar{\gamma}_{zx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{zy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{zz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\end{align}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// b_x = g_{\mu x} b^{\mu}
// = g_{t x} b^t + g_{i x} b^i
// = b^t gamma_{xj} beta^j + gamma_{ix} b^i
// = gamma_{xj} (b^j + beta^j b^t)
static inline void lower_4vector_output_spatial_part(CCTK_REAL psi4,CCTK_REAL *METRIC,CCTK_REAL *smallb, CCTK_REAL *smallb_lower) {
smallb_lower[SMALLBX] = psi4*( METRIC[GXX]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GXY]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GXZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
smallb_lower[SMALLBY] = psi4*( METRIC[GXY]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GYY]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GYZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
smallb_lower[SMALLBZ] = psi4*( METRIC[GXZ]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GYZ]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GZZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 6: The `impose_speed_limit_output_u0()` function \[Back to [top](toc)\]$$\label{impose_speed_limit_output_u0}$$We now call upon the `impose_speed_limit_output_u0()` function inside the `inlined_functions.C` code file of `IllinoisGRMHD`. The basic algorithm performed by this function is summarized here. We start by evaluating the quantity$$\begin{align}{\rm one\_minus\_one\_over\_alpha\_u0\_squared} \equiv A &= \gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right)\\&= \frac{\gamma_{ij}}{\alpha^{2}}\left[\frac{\gamma^{ik}u_{k}}{u^{0}} - \beta^{i} + \beta^{i}\right]\left[\frac{\gamma^{j\ell}u_{\ell}}{u^{0}} - \beta^{j} + \beta^{j}\right]\\&=\frac{\gamma_{ij}u^{i}u^{j}}{\left(\alpha u^{0}\right)^{2}}\\&=\frac{\left(\alpha u^{0}\right)^{2}-1}{\left(\alpha u^{0}\right)^{2}}\\&=1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}\ \\\implies \boxed{A = 1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}}\ ,\end{align}$$where when going from line 1 to 2 and from line 3 to 4 we have used eqs. (53) and (56) from [Duez *et al.*](https://arxiv.org/pdf/astro-ph/0503420.pdf), respectively. Keep in mind that the equation we are going to implement below is$$\boxed{{\rm one\_minus\_one\_over\_alpha\_u0\_squared} = \gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right)}\ ,$$but it is important to know that this equation also equals $A$ above.
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void impose_speed_limit_output_u0(CCTK_REAL *METRIC,CCTK_REAL *U,CCTK_REAL psi4,CCTK_REAL ONE_OVER_LAPSE,output_stats &stats, CCTK_REAL &u0_out) {
#ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
DECLARE_CCTK_PARAMETERS;
#endif
// Derivation of first equation:
// \gamma_{ij} (v^i + \beta^i)(v^j + \beta^j)/(\alpha)^2
// = \gamma_{ij} 1/(u^0)^2 ( \gamma^{ik} u_k \gamma^{jl} u_l /(\alpha)^2 <- Using Eq. 53 of arXiv:astro-ph/0503420
// = 1/(u^0 \alpha)^2 u_j u_l \gamma^{jl} <- Since \gamma_{ij} \gamma^{ik} = \delta^k_j
// = 1/(u^0 \alpha)^2 ( (u^0 \alpha)^2 - 1 ) <- Using Eq. 56 of arXiv:astro-ph/0503420
// = 1 - 1/(u^0 \alpha)^2 <= 1
CCTK_REAL one_minus_one_over_alpha_u0_squared = psi4*(METRIC[GXX]* SQR(U[VX] + METRIC[SHIFTX]) +
2.0*METRIC[GXY]*(U[VX] + METRIC[SHIFTX])*(U[VY] + METRIC[SHIFTY]) +
2.0*METRIC[GXZ]*(U[VX] + METRIC[SHIFTX])*(U[VZ] + METRIC[SHIFTZ]) +
METRIC[GYY]* SQR(U[VY] + METRIC[SHIFTY]) +
2.0*METRIC[GYZ]*(U[VY] + METRIC[SHIFTY])*(U[VZ] + METRIC[SHIFTZ]) +
METRIC[GZZ]* SQR(U[VZ] + METRIC[SHIFTZ]) )*SQR(ONE_OVER_LAPSE);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we construct the "speed limit quantity"$${\rm ONE\_MINUS\_ONE\_OVER\_GAMMA\_SPEED\_LIMIT\_SQUARED} \equiv B = 1-\frac{1}{\gamma^{2}_{\rm speed\ limit}}\ .$$If $A > B$, then we construct the correction factor $C\equiv \sqrt{B/A}$, and adjust the velocities using$$\boxed{v^{i} \to \left(v^{i}+\beta^{i}\right)C - \beta^{i}}\ .$$Since $A$ is quadratic in $\left(v^{i}+\beta^{i}\right)$, this rescaling makes the recomputed $A$ equal to exactly $B$, i.e. it caps the Lorentz factor $\alpha u^{0}$ at $\gamma_{\rm speed\ limit}$.
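The following standalone snippet (not part of the generated `inlined_functions.C`) illustrates the rescaling in the simplest possible setting, flat space with zero shift, where $A=\gamma_{ij}v^{i}v^{j}$ reduces to the ordinary squared 3-speed. The Lorentz-factor cap and initial velocity are made-up sample values:
```c
/* Flat-space illustration (alpha = 1, beta^i = 0, gamma_ij = delta_ij) of the
   speed-limit rescaling; values below are sample inputs only.               */
#include <math.h>
#include <stdio.h>

int main() {
  const double GAMMA_SPEED_LIMIT = 10.0;  /* sample Lorentz-factor cap */
  const double B = 1.0 - 1.0/(GAMMA_SPEED_LIMIT*GAMMA_SPEED_LIMIT);

  double v[3] = {0.8, 0.5, 0.5};                 /* unphysically fast initial guess */
  double A = v[0]*v[0] + v[1]*v[1] + v[2]*v[2];  /* = gamma_ij v^i v^j here         */

  if(A > B) {
    const double C = sqrt(B/A);        /* correction factor                              */
    for(int i=0; i<3; i++) v[i] *= C;  /* v^i -> (v^i+beta^i)C - beta^i, with beta^i = 0 */
    A = v[0]*v[0] + v[1]*v[1] + v[2]*v[2];
  }
  /* After the rescaling A = B, so alpha u^0 = 1/sqrt(1-A) = GAMMA_SPEED_LIMIT. */
  printf("A = %.12f, B = %.12f, alpha*u^0 = %.6f\n", A, B, 1.0/sqrt(1.0-A));
  return 0;
}
```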
###Code
%%writefile -a $outfile_path__inlined_functions__C
/*** Limit velocity to GAMMA_SPEED_LIMIT ***/
const CCTK_REAL ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED = 1.0-1.0/SQR(GAMMA_SPEED_LIMIT);
if(one_minus_one_over_alpha_u0_squared > ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED) {
CCTK_REAL correction_fac = sqrt(ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED/one_minus_one_over_alpha_u0_squared);
U[VX] = (U[VX] + METRIC[SHIFTX])*correction_fac-METRIC[SHIFTX];
U[VY] = (U[VY] + METRIC[SHIFTY])*correction_fac-METRIC[SHIFTY];
U[VZ] = (U[VZ] + METRIC[SHIFTZ])*correction_fac-METRIC[SHIFTZ];
one_minus_one_over_alpha_u0_squared=ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED;
stats.failure_checker+=1000;
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, since $A$ is evaluated using the first line above, namely$$\gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right) = A = 1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}\ ,$$we can then compute $u^{0}$ by simply doing$$\boxed{u^{0} = \frac{1}{\alpha\sqrt{1-A}}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// A = 1.0-one_minus_one_over_alpha_u0_squared = 1-(1-1/(al u0)^2) = 1/(al u0)^2
// 1/sqrt(A) = al u0
//CCTK_REAL alpha_u0_minus_one = 1.0/sqrt(1.0-one_minus_one_over_alpha_u0_squared)-1.0;
//u0_out = (alpha_u0_minus_one + 1.0)*ONE_OVER_LAPSE;
CCTK_REAL alpha_u0 = 1.0/sqrt(1.0-one_minus_one_over_alpha_u0_squared);
if(std::isnan(alpha_u0*ONE_OVER_LAPSE)) printf("BAD FOUND NAN U0 CALC: %.15e %.15e %.15e | %.15e %.15e\n",alpha_u0,ONE_OVER_LAPSE,one_minus_one_over_alpha_u0_squared,psi4, U[VX]);
u0_out = alpha_u0*ONE_OVER_LAPSE;
}
// The two lines of code below are written to reduce roundoff error and were in the above function. I don't think they reduce error.
// one_over_alpha_u0 = sqrt(1.0-one_minus_one_over_alpha_u0_squared);
/* Proof of following line: */
/* [ 1-1/(alphau0)^2 ] / [ 1/(alphau0) (1 + 1/(alphau0)) ] */
/* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ 1/(alphau0) + 1/(alphau0)^2 ] */
/* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ (alphau0 + 1)/(alphau0)^2 ] */
/* = [ (alphau0)^2 - 1) ] / [ (alphau0 + 1) ] */
/* [ (alphau0 + 1) (alphau0 - 1) ] / [ (alphau0 + 1) ] */
/* = alphau0 - 1 */
//alpha_u0_minus_one = one_minus_one_over_alpha_u0_squared/one_over_alpha_u0/(1.0+one_over_alpha_u0);
//u0_out = (alpha_u0_minus_one+1.0)*ONE_OVER_LAPSE;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 7: The `enforce_pressure_floor_ceiling()` function \[Back to [top](toc)\]$$\label{enforce_pressure_floor_ceiling}$$After the Newton-Raphson solver has successfully found a set of primitives, the primitives are checked for physicality, and if they are not in the physical range, they are minimally modified until they return to the physical range. First, if the velocity is found to be superluminal, the speed is reduced to `IllinoisGRMHD`’s default Lorentz factor limit, a procedure which we already explained above when we discussed the `impose_speed_limit_output_u0` function. Next, `IllinoisGRMHD` does not include any cooling mechanism, which means that for evolutions adopting a $\Gamma$-law equation of state, the pressure should not physically drop below $P_{\rm cold}$. So a pressure floor of $0.9P_{\rm cold}$ is imposed. Increasing this floor to $P_{\rm cold}$ exactly results in large central density drifts in TOV star evolutions. **NOTE**: Please keep in mind that the floor and ceiling values presented here were found ***empirically***.
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void enforce_pressure_floor_ceiling(output_stats &stats,CCTK_REAL kpoly,CCTK_REAL P_cold,CCTK_REAL Psi6,const CCTK_REAL Psi6threshold,CCTK_REAL rho_b,const CCTK_REAL rhobatm, CCTK_REAL &P) {
CCTK_REAL P_min=0.9*P_cold;
if(P<P_min) {
stats.failure_checker+=10;
P=P_min;
}
//MAX(P,P_min);
//if(P < P_min) P=1.0*P_cold;
/* OLD: Discarded because lower limit is unphysical.
if(P <= 0.5*kpoly*P_cold) {
P=0.5*kpoly*P_cold;
}
*/
###Output
Appending to ../src/inlined_functions.C
###Markdown
Simulations can crash in the other extreme, if $P/P_{\rm cold}$ becomes too large. This typically only happens in very low density regions or inside black holes. So at densities $\rho_{b}<100\rho_{\rm atm}$ or deep inside black hole horizons, a ceiling on $P$ of $100P_{\rm cold}$ is enforced (see Appendix A of [Etienne *et al.* (2012)](https://arxiv.org/abs/1112.0568) for more details).We also introduce a parameter, $\psi^{6}_{\rm threshold}$, which determines whether the region under consideration is deep inside the BH horizon or not. For regions deep inside the BH horizon, defined by $\sqrt{\gamma} = \psi^{6} > \psi^{6}_{\rm threshold}$, the primary goal is to keep the evolution stable and prevent inaccurate data from leaking out of the BH horizon. It was determined that in this situation, a better ceiling on $P$ is $10^{5}P_{\rm cold}$.
###Code
%%writefile -a $outfile_path__inlined_functions__C
//CCTK_REAL P_max = 10.0*P_cold;
CCTK_REAL P_max = 100.0*P_cold;
if(Psi6 > Psi6threshold) P_max = 1e5*P_cold; // <-- better than 10.
if((rho_b < 100.0*rhobatm || Psi6 > Psi6threshold) && P>P_max) {
P=P_max;
stats.failure_checker+=100;
}
/*
CCTK_REAL rho_horiz_cap = 1000.0*rhobatm;
//New density damping mechanism inside the horizon
if(Psi6 > Psi6threshold && rho_b>rho_horiz_cap) {
CCTK_REAL six_phi=log(Psi6);
CCTK_REAL six_phithreshold=log(Psi6threshold);
CCTK_REAL Psi6max_approx=350000;
rho_b = rho_horiz_cap+(rho_b-rho_horiz_cap)*exp(-200.0*SQR((six_phi-six_phithreshold)/log(Psi6max_approx)));
}
*/
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 8: The `compute_smallba_b2_and_u_i_over_u0_psi4` function \[Back to [top](toc)\]$$\label{compute_smallba_b2_and_u_i_over_u0_psi4}$$In this inlined function we will compute quantities related to the magnetic field measured in the comoving fluid frame, $b^{\mu}$.We will need the following identities$$\begin{align}v^{i} &= \frac{u^{i}}{u^{0}}\ ,\\B^{0}_{(u)} &= \frac{u_{i}B^{i}}{\alpha}\ ,\\B^{i}_{(u)} &= \frac{1}{u^{0}}\left(\frac{B^{i}}{\alpha} + u^{i}B^{0}_{(u)}\right)\ ,\\b^{\mu} &= \frac{B^{\mu}_{(u)}}{\sqrt{4\pi}}\ .\end{align}$$We start by setting the relation$$b^{0} = \frac{u_{i}B^{i}}{\alpha\sqrt{4\pi}} \implies \boxed{\alpha\sqrt{4\pi}b^{0} = u_{i}B^{i}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void compute_smallba_b2_and_u_i_over_u0_psi4(CCTK_REAL *METRIC,CCTK_REAL *METRIC_LAP_PSI4,CCTK_REAL *U,CCTK_REAL u0L,CCTK_REAL ONE_OVER_LAPSE_SQRT_4PI,
CCTK_REAL &u_x_over_u0_psi4,CCTK_REAL &u_y_over_u0_psi4,CCTK_REAL &u_z_over_u0_psi4,CCTK_REAL *smallb) {
// NOW COMPUTE b^{\mu} and b^2 = b^{\mu} b^{\nu} g_{\mu \nu}
CCTK_REAL ONE_OVER_U0 = 1.0/u0L;
CCTK_REAL shiftx_plus_vx = (METRIC[SHIFTX]+U[VX]);
CCTK_REAL shifty_plus_vy = (METRIC[SHIFTY]+U[VY]);
CCTK_REAL shiftz_plus_vz = (METRIC[SHIFTZ]+U[VZ]);
// Eq. 56 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// u_i = gamma_{ij} u^0 (v^j + beta^j), gamma_{ij} is the physical metric, and gamma_{ij} = Psi4 * METRIC[Gij], since METRIC[Gij] is the conformal metric.
u_x_over_u0_psi4 = METRIC[GXX]*shiftx_plus_vx + METRIC[GXY]*shifty_plus_vy + METRIC[GXZ]*shiftz_plus_vz;
u_y_over_u0_psi4 = METRIC[GXY]*shiftx_plus_vx + METRIC[GYY]*shifty_plus_vy + METRIC[GYZ]*shiftz_plus_vz;
u_z_over_u0_psi4 = METRIC[GXZ]*shiftx_plus_vx + METRIC[GYZ]*shifty_plus_vy + METRIC[GZZ]*shiftz_plus_vz;
// Eqs. 23 and 31 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// Compute alpha sqrt(4 pi) b^t = u_i B^i
CCTK_REAL alpha_sqrt_4pi_bt = ( u_x_over_u0_psi4*U[BX_CENTER] + u_y_over_u0_psi4*U[BY_CENTER] + u_z_over_u0_psi4*U[BZ_CENTER] ) * METRIC_LAP_PSI4[PSI4]*u0L;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we compute$$\begin{align}b^{i} &= \frac{B^{i}_{(u)}}{\sqrt{4\pi}}\\ &= \frac{1}{u^{0}\sqrt{4\pi}}\left(\frac{B^{i}}{\alpha} + B^{0}_{(u)}u^{i}\right)\\ &= \frac{1}{u^{0}\sqrt{4\pi}}\left(\frac{B^{i}}{\alpha} + \sqrt{4\pi}b^{0}u^{i}\right)\\ &= \frac{1}{\alpha\sqrt{4\pi}}\left(\frac{B^{i}}{u^{0}} + \alpha\sqrt{4\pi}b^{0}\frac{u^{i}}{u^{0}}\right)\\\implies &\boxed{b^{i} = \frac{1}{\alpha\sqrt{4\pi}}\left(\frac{B^{i}}{u^{0}} + \alpha\sqrt{4\pi}b^{0}v^{i}\right)}\ .\end{align}$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// Eq. 24 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// b^i = B^i_u / sqrt(4 pi)
// b^i = ( B^i/alpha + B^0_u u^i ) / ( u^0 sqrt(4 pi) )
// b^i = ( B^i/alpha + sqrt(4 pi) b^t u^i ) / ( u^0 sqrt(4 pi) )
// b^i = ( B^i + alpha sqrt(4 pi) b^t u^i ) / ( alpha u^0 sqrt(4 pi) )
// b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^t u^i/u^0 ) / ( alpha sqrt(4 pi) )
// b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^t v^i ) / ( alpha sqrt(4 pi) )
smallb[SMALLBX] = (U[BX_CENTER]*ONE_OVER_U0 + U[VX]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
smallb[SMALLBY] = (U[BY_CENTER]*ONE_OVER_U0 + U[VY]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
smallb[SMALLBZ] = (U[BZ_CENTER]*ONE_OVER_U0 + U[VZ]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
// Eq. 23 in http://arxiv.org/pdf/astro-ph/0503420.pdf, with alpha sqrt (4 pi) b^2 = u_i B^i already computed above
smallb[SMALLBT] = alpha_sqrt_4pi_bt * ONE_OVER_LAPSE_SQRT_4PI;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, we compute$$\begin{align}b^{2} &= g_{\mu\nu}b^{\mu}b^{\nu}\\ &= g_{00}\left(b^{0}\right)^{2} + g_{ij}b^{i}b^{j} + 2g_{0i}b^{0}b^{i}\\ &= \left(-\alpha^{2} + \gamma_{ij}\beta^{i}\beta^{j}\right)\left(b^{0}\right)^{2} + \gamma_{ij}b^{i}b^{j} + 2b^{0}\gamma_{ij}\beta^{j}b^{i}\\ &= -\left(\alpha b^{0}\right)^{2} + \gamma_{ij}\left[b^{i}b^{j} + 2b^{0}b^{i}\beta^{j} + \left(b^{0}\right)^{2}\beta^{i}\beta^{j}\right]\\\implies &\boxed{b^{2} = -\left(\alpha b^{0}\right)^{2} + \gamma_{ij}\left(b^{i} + b^{0}\beta^{i}\right)\left(b^{j} + b^{0}\beta^{j}\right)}\end{align}$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// b^2 = g_{\mu \nu} b^{\mu} b^{\nu}
// = gtt bt^2 + gxx bx^2 + gyy by^2 + gzz bz^2 + 2 (gtx bt bx + gty bt by + gtz bt bz + gxy bx by + gxz bx bz + gyz by bz)
// = (-al^2 + gamma_{ij} betai betaj) bt^2 + b^i b^j gamma_{ij} + 2 g_{t i} b^t b^i
// = - (alpha b^t)^2 + (b^t)^2 gamma_{ij} beta^i beta^j + b^i b^j gamma_{ij} + 2 b^t g_{t i} b^i
// = - (alpha b^t)^2 + (b^t)^2 gamma_{ij} beta^i beta^j + b^i b^j gamma_{ij} + 2 b^t (gamma_{ij} beta^j) b^i
// = - (alpha b^t)^2 + gamma_{ij} ((b^t)^2 beta^i beta^j + b^i b^j + 2 b^t beta^j b^i)
// = - (alpha b^t)^2 + gamma_{ij} ((b^t)^2 beta^i beta^j + 2 b^t beta^j b^i + b^i b^j)
// = - (alpha b^t)^2 + gamma_{ij} (b^i + b^t beta^i) (b^j + b^t beta^j)
CCTK_REAL bx_plus_shiftx_bt = smallb[SMALLBX]+METRIC[SHIFTX]*smallb[SMALLBT];
CCTK_REAL by_plus_shifty_bt = smallb[SMALLBY]+METRIC[SHIFTY]*smallb[SMALLBT];
CCTK_REAL bz_plus_shiftz_bt = smallb[SMALLBZ]+METRIC[SHIFTZ]*smallb[SMALLBT];
smallb[SMALLB2] = -SQR(METRIC_LAP_PSI4[LAPSE]*smallb[SMALLBT]) +
( METRIC[GXX]*SQR(bx_plus_shiftx_bt) + METRIC[GYY]*SQR(by_plus_shifty_bt) + METRIC[GZZ]*SQR(bz_plus_shiftz_bt) +
2.0*( METRIC[GXY]*(bx_plus_shiftx_bt)*(by_plus_shifty_bt) +
METRIC[GXZ]*(bx_plus_shiftx_bt)*(bz_plus_shiftz_bt) +
METRIC[GYZ]*(by_plus_shifty_bt)*(bz_plus_shiftz_bt) ) ) * METRIC_LAP_PSI4[PSI4]; // mult by psi4 because METRIC[GIJ] is the conformal metric.
/***********************************************************/
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 9: Code validation \[Back to [top](toc)\]$$\label{code_validation}$$First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
###Code
# Verify if the code generated by this tutorial module
# matches the original IllinoisGRMHD source code
# First download the original IllinoisGRMHD source code
import urllib
from os import path
original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/inlined_functions.C"
original_IGM_file_name = "inlined_functions-original.C"
original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
    # Write the original IllinoisGRMHD source code to file
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
try:
original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
        # Write the original IllinoisGRMHD source code to file
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
# If all else fails, hope wget does the job
!wget -O $original_IGM_file_path $original_IGM_file_url
# Perform validation
Validation__inlined_functions__C = !diff $original_IGM_file_path $outfile_path__inlined_functions__C
if Validation__inlined_functions__C == []:
# If the validation passes, we do not need to store the original IGM source code file
!rm $original_IGM_file_path
print("Validation test for inlined_functions.C: PASSED!")
else:
# If the validation fails, we keep the original IGM source code file
print("Validation test for inlined_functions.C: FAILED!")
# We also print out the difference between the code generated
# in this tutorial module and the original IGM source code
print("Diff:")
for diff_line in Validation__inlined_functions__C:
print(diff_line)
###Output
Validation test for inlined_functions.C: FAILED!
Diff:
1,4c1
< static inline CCTK_REAL fasterpow_ppm_reconstruct(CCTK_REAL inputvar,CCTK_REAL inputpow) {
< if(inputpow==2.0) return SQR(inputvar);
< return pow(inputvar,inputpow);
< }
---
>
10c7
< // one, which overestimates the max. speeds by a factor of ~2.
---
> // one, which overestimates the max. speeds by a factor of ~2.
15c12
< // kcm^2 = K_{\mu} K^{\mu},
---
> // kcm^2 = K_{\mu} K^{\mu},
38a36
>
43a42
>
49c48
<
---
>
59c58,59
< static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
---
>
> static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL Gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
64c64,65
< CCTK_REAL c_s_squared = (dPcold_drho + gamma_th*(gamma_th-1.0)*eps_th)/(h);
---
> CCTK_REAL c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th-1.0)*eps_th)/(h);
>
66a68
>
70,84c72,180
< static inline void compute_P_cold__eps_cold__dPcold_drho__eps_th__h__gamma_cold(CCTK_REAL *U, eos_struct &eos,
< CCTK_REAL &P_cold,CCTK_REAL &eps_cold,CCTK_REAL &dPcold_drho,CCTK_REAL &eps_th,CCTK_REAL &h,
< CCTK_REAL &gamma_cold) {
< // This code handles equations of state of the form defined
< // in Eqs 13-16 in http://arxiv.org/pdf/0802.0200.pdf
<
< if(U[RHOB]==0) {
< P_cold = 0.0;
< eps_cold = 0.0;
< dPcold_drho = 0.0;
< eps_th = 0.0;
< h = 0.0;
< gamma_cold = eos.gamma_tab[0];
< return;
< }
---
> /* Function : font_fix__rhob_loop()
> * Authors : Leo Werneck
> * Description : Determines rhob using the font fix prescription
> * Dependencies: find_polytropic_K_and_Gamma_index()
> * : compute_P_cold__eps_cold()
> * Reference : Etienne et al. (2011) [https://arxiv.org/pdf/1112.0568.pdf]
> *
> * Inputs : maxits - maximum number of iterations allowed
> * : tol - font fix tolerance
> * : W - See eq. (A26)
> * : Sf2 - S_{fluid}^{2}, see eq. (A24)
> * : Psim6 - This is equal to sqrt(\gamma)
> * : sdots - \tilde{S}_{\mu}\tilde{S}^{\mu}
> * : BbardotS2 - (\bar{B}^{\mu}S_{\mu})^{2},
> * : B2bar - \bar{B}^{2}, see eq. (A28)
> * : CONSERVS - Array of conservative variables
> * : eos - Struct of EOS parameters
> * : rhob_in - Initial value of rhob
> * : rhob_out - Output variable
> *
> * Outputs : rhob_out - Updated value of rhob
> * : return value: 0 - Font fix worked
> * : return value: 1 - Font fix failed
> */
> inline int font_fix__rhob_loop( int maxits, CCTK_REAL tol,
> CCTK_REAL W, CCTK_REAL Sf2, CCTK_REAL Psim6, CCTK_REAL sdots, CCTK_REAL BbardotS2, CCTK_REAL B2bar,
> CCTK_REAL *CONSERVS,
> eos_struct eos, CCTK_REAL rhob_in, CCTK_REAL &rhob_out ) {
>
> /* Declare basic variables */
> bool fontcheck=true;
> int itcount = 0, j0, j1;
> CCTK_REAL W0, Sf20, rhob0, rhob1, h, P_cold, eps_cold;
>
> //////////////////////
> // OUTER LOOP START //
> //////////////////////
> while(fontcheck && itcount < maxits) {
>
> /* Set variables to their input values */
> itcount++;
> W0 = W;
> Sf20 = Sf2;
> rhob1 = rhob_in;
>
> /* Based on rhob_in (i.e. rhob1), determine the
> * polytropic index j1
> */
> j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
>
> //////////////////////
> // INNER LOOP START //
> //////////////////////
> do {
>
> /* Set rhob0/j0 to be equal to the rhob/j used
> * in the previous iteration, i.e. rhob1/j1.
> */
> rhob0 = rhob1;
> j0 = j1;
>
> /* Compute h using h_cold and our polytropic EOS
> * .------------------------------------------.
> * | h = h_cold = 1 + eps_cold + P_cold/rhob. |
> * .------------------------------------------.
> */
> compute_P_cold__eps_cold(eos,rhob0, P_cold, eps_cold);
> h = 1.0 + eps_cold + P_cold/rhob0;
>
> /* Update rhob using eq. (A62) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .---------------------------------------------------------------------------.
> * | rhob = rho_star * Psi^{-6} / sqrt( 1 + S_fluid^{2}/( (rho_star*h)^{2} ) ) |
> * .---------------------------------------------------------------------------.
> */
> rhob1 = CONSERVS[RHOSTAR]*Psim6/sqrt(1.0+Sf20/SQR(CONSERVS[RHOSTAR]*h));
>
> /* Update j1 */
> j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
>
> } while( fabs(rhob1-rhob0) > rhob1*tol || j1 != j0);
> //////////////////////
> // INNER LOOP END //
> //////////////////////
>
> /* Output the last value of rhob */
> rhob_out = rhob1;
>
> /* Perform physical checks on the variables
> * and output the last value of h obtained
> */
> compute_P_cold__eps_cold(eos,rhob_out, P_cold, eps_cold);
> h = 1.0 + eps_cold + P_cold/rhob_out;
>
> /* Set W based on eq. (A60) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .-------------------------------------------------------.
> * | W = psi^{-6} * sqrt( S_fluid^{2} + (rho_star*h)^{2} ) |
> * .-------------------------------------------------------.
> */
> W = sqrt( Sf20 + SQR(CONSERVS[RHOSTAR]*h))*Psim6;
>
> /* Then update S_{fluid}^{2} using eq. (A61) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .---------------------------------------------------------------------------.
> * | S_fluid^{2} = ( W^{2}*S^{2} + (B.S)^2*(B^{2} + 2W) )/( ( W + B^{2} )^{2} )|
> * .---------------------------------------------------------------------------.
> */
> Sf2 = (SQR(W)*sdots + BbardotS2*(B2bar + 2.0*W))/SQR(W+B2bar);
86c182
< CCTK_REAL U_RHOB_inv = 1.0/U[RHOB];
---
> if ( fabs(W-W0) < W*tol && fabs(Sf20-Sf2) < Sf2*tol) fontcheck=false;
88,111d183
< if(eos.neos==1) {
< // Eq. 14 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{cold} = K_i rho_i^{\Gamma_i}
< P_cold = eos.k_tab[0]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[0]);
< // Eq. 16 of http://arxiv.org/pdf/0802.0200.pdf :
< // \epsilon_{cold} = \int ( P_{cold}(rho) / rho^2 ) drho
< // = \int ( K_0 \rho^{\Gamma_0 - 2} ) drho
< // = ( K_0 \rho^{\Gamma_0 - 1} ) / (\Gamma_0 - 1)
< // = ( P_{cold} / rho ) / (\Gamma_0 - 1)
< eps_cold = P_cold*U_RHOB_inv/(eos.gamma_tab[0]-1.0);
< // dPcold/drho = K_i \Gamma_i rho_i^{\Gamma_i-1} = \Gamma_i P_{cold} / rho
< dPcold_drho = eos.gamma_tab[0]*P_cold*U_RHOB_inv;
< // Eq. 15 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{th} = (\Gamma_{th} - 1) \rho_0 \epsilon_{th},
< // Eq. 13 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{th} = P - P_{cold}
< // -> P - P_{cold} = (\Gamma_{th} - 1) \rho_0 \epsilon_{th}
< // -> \epsilon_{th} = ( P - P_{cold} ) / [ (\Gamma_{th} - 1) \rho_0 ]
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< // Just below Eq. 16 in http://arxiv.org/pdf/astro-ph/0503420.pdf :
< // h = 1 + \epsilon + P/rho
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[0];
< return;
113,125c185,194
<
< // See comments above for the eos.neos==1 case for relevant
< // equations & references; the extension to arbitrary "nn"
< // is straightforward.
< for(int nn=1;nn<eos.neos;nn++) {
< if (U[RHOB] <= eos.rho_tab[nn] && U[RHOB] > eos.rho_tab[nn-1]) {
< P_cold = eos.k_tab[nn]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[nn]);
< eps_cold = eos.eps_tab[nn-1] + (P_cold*U_RHOB_inv - eos.P_tab[nn-1]/eos.rho_tab[nn-1])/(eos.gamma_tab[nn]-1.0);
< dPcold_drho = eos.gamma_tab[nn]*P_cold*U_RHOB_inv;
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[nn];
< }
---
> //////////////////////
> // OUTER LOOP END //
> //////////////////////
>
> /* If the code converged before the max
> * number of iterations were exceeded,
> * return 0, otherwise return 1.
> */
> if(fontcheck || itcount >= maxits) {
> return 1;
127,133c196,197
< if (U[RHOB] > eos.rho_tab[eos.neos-1]) {
< P_cold = eos.k_tab[eos.neos]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[eos.neos]);
< eps_cold = eos.eps_tab[eos.neos-1] + (P_cold*U_RHOB_inv - eos.P_tab[eos.neos-1]/eos.rho_tab[eos.neos-1])/(eos.gamma_tab[eos.neos]-1.0);
< dPcold_drho = eos.gamma_tab[eos.neos]*P_cold*U_RHOB_inv;
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[eos.neos];
---
> else {
> return 0;
136a201
>
149a215
>
150a217,218
>
> #ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
151a220,221
> #endif
>
154c224
< // \gamma_{ij} (v^i + \beta^i)(v^j + \beta^j)/(\alpha)^2
---
> // \gamma_{ij} (v^i + \beta^i)(v^j + \beta^j)/(\alpha)^2
165a236
>
176a248
>
187,195c259,267
< // one_over_alpha_u0 = sqrt(1.0-one_minus_one_over_alpha_u0_squared);
< /* Proof of following line: */
< /* [ 1-1/(alphau0)^2 ] / [ 1/(alphau0) (1 + 1/(alphau0)) ] */
< /* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ 1/(alphau0) + 1/(alphau0)^2 ] */
< /* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ (alphau0 + 1)/(alphau0)^2 ] */
< /* = [ (alphau0)^2 - 1) ] / [ (alphau0 + 1) ] */
< /* [ (alphau0 + 1) (alphau0 - 1) ] / [ (alphau0 + 1) ] */
< /* = alphau0 - 1 */
< //alpha_u0_minus_one = one_minus_one_over_alpha_u0_squared/one_over_alpha_u0/(1.0+one_over_alpha_u0);
---
> // one_over_alpha_u0 = sqrt(1.0-one_minus_one_over_alpha_u0_squared);
> /* Proof of following line: */
> /* [ 1-1/(alphau0)^2 ] / [ 1/(alphau0) (1 + 1/(alphau0)) ] */
> /* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ 1/(alphau0) + 1/(alphau0)^2 ] */
> /* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ (alphau0 + 1)/(alphau0)^2 ] */
> /* = [ (alphau0)^2 - 1) ] / [ (alphau0 + 1) ] */
> /* [ (alphau0 + 1) (alphau0 - 1) ] / [ (alphau0 + 1) ] */
> /* = alphau0 - 1 */
> //alpha_u0_minus_one = one_minus_one_over_alpha_u0_squared/one_over_alpha_u0/(1.0+one_over_alpha_u0);
197a270
>
212a286
>
224c298
<
---
>
235c309,310
< static inline void compute_smallba_b2_and_u_i_over_u0_psi4(CCTK_REAL *METRIC,CCTK_REAL *METRIC_LAP_PSI4,CCTK_REAL *U,CCTK_REAL u0L,CCTK_REAL ONE_OVER_LAPSE_SQRT_4PI,
---
>
> static inline void compute_smallba_b2_and_u_i_over_u0_psi4(CCTK_REAL *METRIC,CCTK_REAL *METRIC_LAP_PSI4,CCTK_REAL *U,CCTK_REAL u0L,CCTK_REAL ONE_OVER_LAPSE_SQRT_4PI,
252a328
>
265a342
>
276c353
< CCTK_REAL bz_plus_shiftz_bt = smallb[SMALLBZ]+METRIC[SHIFTZ]*smallb[SMALLBT];
---
> CCTK_REAL bz_plus_shiftz_bt = smallb[SMALLBZ]+METRIC[SHIFTZ]*smallb[SMALLBT];
283a361
>
###Markdown
Step 10: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-IllinoisGRMHD__inlined_functions.pdf](Tutorial-IllinoisGRMHD__inlined_functions.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means).
###Code
latex_nrpy_style_path = os.path.join(nrpy_dir_path,"latex_nrpy_style.tplx")
#!jupyter nbconvert --to latex --template $latex_nrpy_style_path --log-level='WARN' Tutorial-IllinoisGRMHD__inlined_functions.ipynb
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
_____no_output_____
###Markdown
Tutorial-IllinoisGRMHD: inlined_functions.C
Authors: Leo Werneck & Zach Etienne
**This module is currently under development**
In this tutorial module we explain a series of inline functions that are used by major functions within IllinoisGRMHD. Required and recommended citations:
* **(Required)** Etienne, Z. B., Paschalidis, V., Haas R., Mösta P., and Shapiro, S. L. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).
* **(Required)** Noble, S. C., Gammie, C. F., McKinney, J. C., Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).
* **(Recommended)** Del Zanna, L., Bucciantini N., Londrillo, P. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)).
Table of Contents$$\label{toc}$$This module is organized as follows
0. [Step 0](src_dir): **Source directory creation**
1. [Step 1](introduction): **Introduction**
1. [Step 2](pow): **`pow`**
1. [Step 3](find_cp_cm): **`find_cp_cm`**
1. [Step 4](compute_v02): **`compute_v02`**
1. [Step 5](ppeos__c_code): **Polytropic Equations of State**
    1. [Step 5.a](ppeos__c_code__prelim): *Preliminary treatment of the input*
        1. [Step 5.a.i](ppeos__c_code__prelim__computing_ktab): Determining $\left\{K_{1},K_{2},\ldots,K_{\rm neos}\right\}$
        1. [Step 5.a.ii](ppeos__c_code__prelim__computing_eps_integ_consts): Determining $\left\{C_{0},C_{1},C_{2},\ldots,C_{\rm neos}\right\}$
    1. [Step 5.b](ppeos__c_code__eos_struct_setup): *Setting up the `eos_struct`*
    1. [Step 5.c](ppeos__c_code__find_polytropic_k_and_gamma_index): *The `find_polytropic_K_and_Gamma_index()` function*
    1. [Step 5.d](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold): *The new `compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold()` function*
        1. [Step 5.d.i](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold__case1__rhob_equal_zero): Case 1: $\rho_{b} = 0$
        1. [Step 5.d.ii](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold__case2__single_polytropic_eos): Case 2: Polytropic EOSs
    1. [Step 5.e](font_fix__rhob_loop): The `font_fix__rhob_loop()` function
1. [Step 6](lower_4vector_output_spatial_part): **`lower_4vector_output_spatial_part`**
1. [Step 7](impose_speed_limit_output_u0): **`impose_speed_limit_output_u0`**
1. [Step 8](enforce_pressure_floor_ceiling): **`enforce_pressure_floor_ceiling`**
1. [Step 9](compute_smallba_b2_and_u_i_over_u0_psi4): **`compute_smallba_b2_and_u_i_over_u0_psi4`**
1. [Step 11](code_validation): **Code validation**
1. [Step 12](latex_pdf_output): **Output this notebook to $\LaTeX$-formatted PDF file**
Step 0: Source directory creation \[Back to [top](toc)\]$$\label{src_dir}$$We will now use the [cmdline_helper.py NRPy+ module](Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet.
###Code
# Step 0: Creation of the IllinoisGRMHD source directory
# Step 0a: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step 0b: Load up cmdline_helper and create the directory
import cmdline_helper as cmd
IGM_src_dir_path = os.path.join("..","src")
cmd.mkdir(IGM_src_dir_path)
# Step 0c: Create the output file path
outfile_path__inlined_functions__C = os.path.join(IGM_src_dir_path,"inlined_functions.C")
###Output
_____no_output_____
###Markdown
Step 1: Introduction \[Back to [top](toc)\]$$\label{introduction}$$In this tutorial notebook we explain functions of `IllinoisGRMHD` which are called for various purposes. This means that this notebook does not have a specific "theme". We will cover functions whose purposes vary from a simple optimization when squaring numbers to computing minimum and maximum characteristic speeds at cell interfaces. We have tried our best to keep this tutorial module as independent from the others as possible. When new concepts appear, we offer useful references. The mathematical requirements of each function are also covered in great detail. Step 2: `pow` \[Back to [top](toc)\]$$\label{pow}$$This is an extremely simple function which checks whether or not we are trying to square a number before calling C's `pow()` function. This is because in C it is computationally quicker to do `x*x` than to use the function call `pow(x,2)`. Notice that we also use the "function" `SQR()`, which is declared in `IllinoisGRMHD_headers.h` and is defined as
```c
#define SQR(x) ( (x) * (x) )
```
Step 3: `find_cp_cm` \[Back to [top](toc)\]$$\label{find_cp_cm}$$We will now explain the inlined function `find_cp_cm`. Keep in mind that this function depends on the function `compute_v02`, [which is implemented below](compute_v02). This function is called with the objective of computing the minimum ($-$) and maximum ($+$) characteristic speeds at each cell interface, $c_{\pm}^{r,l}$. We approximate the general GRMHD dispersion relation (eq. 27 of [Gammie & McKinney (2003)](https://arxiv.org/pdf/astro-ph/0301509.pdf)) by the simpler expression$$\omega_{\rm cm}^{2} = \left[v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)\right]k_{\rm cm}^{2}\ ,$$where $\omega_{\rm cm}=-k_{\mu}u^{\mu}$ is the frequency and $k_{\rm cm}^{2} = K_{\mu}K^{\mu}$ the wavenumber of an MHD wave mode in the frame comoving with the fluid, where $K_{\mu}$ is defined as the projection of the wave vector $k^{\nu}$ onto the direction normal to $u^{\nu}$: $K_{\mu} = \left(g_{\mu\nu}+u_{\mu}u_{\nu}\right)k^{\nu}$. $c_{\rm s}$ is the sound speed, and $v_{\rm A}$ is the Alfvén speed, given by$$v_{\rm A} = \sqrt{\frac{b^{2}}{\rho_{b}h + b^{2}}}\ .$$With these definitions, we may then solve the approximate dispersion relation above along direction $i$, noting that in the comoving frame $k_{\mu} = \left(-\omega,k_{j}\delta^{j}_{\ i}\right)$ and the wave (phase) velocity is $c_{\pm} = \left.\omega\middle/\left(k_{j}\delta^{j}_{\ i}\right)\right.$. The dispersion can then be written as a quadratic equation for $c_{\pm}$:$$ac_{\pm}^{2} + bc_{\pm} + c = 0\ ,$$with$$\boxed{\begin{align}a &= \left(1-v_{0}^{2}\right)\left(u^{0}\right)^{2} - v_{0}^{2}g^{00}\ ,\\b &= 2v_{0}^{2}g^{i0} - 2u^{i}u^{0}\left(1-v^{2}_{0}\right)\ ,\\c &= \left(1-v_{0}^{2}\right)\left(u^{i}\right)^{2} - v_{0}^{2}g^{ii}\ ,\\v_{0}^{2} &= v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)\ ,\\c_{\rm s} &= \left.\left[\frac{dP_{\rm cold}}{d\rho_{b}} + \Gamma_{\rm th}\left(\Gamma_{\rm th}-1\right)\epsilon_{\rm th}\right]\middle/h\right.\ ,\\c_{+} &= \max\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ ,\\c_{-} &= \min\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ .\end{align}}$$For the implementation of $v_{0}^{2}$, please see [Step 4 below](compute_v02).
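Before implementing these expressions, it is worth checking them in a simple limit. The short standalone program below is illustrative only and is *not* part of `inlined_functions.C`: it evaluates the quadratic coefficients in flat space with the fluid at rest ($\alpha=1$, $\beta^{i}=0$, $\psi^{-4}g^{ii}=1$, $u^{0}=1$, $v^{i}=0$), where we expect $c_{\pm}=\pm v_{0}$.
```c
/* Standalone flat-space check of the quadratic above (illustrative only,
 * NOT part of inlined_functions.C). With alpha = 1, beta^i = 0,
 * psi^{-4} g^{ii} = 1 and the fluid at rest (u^0 = 1, v^i = 0), the
 * coefficients reduce to a = 1, b = 0, c = -v0^2, so c_pm = +/- v0.      */
#include <stdio.h>
#include <math.h>

int main(void) {
  double v02 = 0.25;                       /* sample value of v_0^2            */
  double u0  = 1.0, vi = 0.0;              /* fluid at rest                    */
  double ONE_OVER_LAPSE_SQUARED = 1.0, shifti = 0.0, psim4 = 1.0, gupii = 1.0;

  double a = u0*u0*(1.0-v02) + v02*ONE_OVER_LAPSE_SQUARED;
  double b = 2.0*( shifti*ONE_OVER_LAPSE_SQUARED*v02 - u0*u0*vi*(1.0-v02) );
  double c = u0*u0*vi*vi*(1.0-v02) - v02*( psim4*gupii - shifti*shifti*ONE_OVER_LAPSE_SQUARED );

  double detm = sqrt(fmax(b*b - 4.0*a*c, 0.0));
  printf("cplus = %.6f, cminus = %.6f (expected +/- %.6f)\n",
         0.5*(detm-b)/a, -0.5*(detm+b)/a, sqrt(v02));
  return 0;
}
```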
###Code
%%writefile $outfile_path__inlined_functions__C
static inline void find_cp_cm(CCTK_REAL &cplus,CCTK_REAL &cminus,CCTK_REAL v02,CCTK_REAL u0,
CCTK_REAL vi,CCTK_REAL ONE_OVER_LAPSE_SQUARED,CCTK_REAL shifti,CCTK_REAL psim4,CCTK_REAL gupii) {
// This computes phase speeds in the direction given by flux_dirn.
// Note that we replace the full dispersion relation with a simpler
// one, which overestimates the max. speeds by a factor of ~2.
// See full discussion around Eqs. 49 and 50 in
// http://arxiv.org/pdf/astro-ph/0503420.pdf .
// What follows is a complete derivation of the quadratic we solve.
// wcm = (-k_0 u0 - k_x ux)
// kcm^2 = K_{\mu} K^{\mu},
// K_{\mu} K^{\mu} = (g_{\mu a} + u_{\mu} u_a) k^a * g^{\mu b} [ (g_{c b} + u_c u_b) k^c ]
// --> g^{\mu b} (g_{c b} + u_{c} u_{b}) k^c = (\delta^{\mu}_c + u_c u^{\mu} ) k^c
// = (g_{\mu a} + u_{\mu} u_a) k^a * (\delta^{\mu}_c + u_c u^{\mu} ) k^c
// =[(g_{\mu a} + u_{\mu} u_a) \delta^{\mu}_c + (g_{\mu a} + u_{\mu} u_a) u_c u^{\mu} ] k^c k^a
// =[(g_{c a} + u_c u_a) + (u_c u_a - u_a u_c] k^c k^a
// =(g_{c a} + u_c u_a) k^c k^a
// = k_a k^a + u^c u^a k_c k_a
// k^a = g^{\mu a} k_{\mu} = g^{0 a} k_0 + g^{x a} k_x
// k_a k^a = k_0 g^{0 0} k_0 + k_x k_0 g^{0 x} + g^{x 0} k_0 k_x + g^{x x} k_x k_x
// = g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2
// u^c u^a k_c k_a = (u^0 k_0 + u^x k_x) (u^0 k_0 + u^x k_x) = (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2
// (k_0 u0)^2 + 2 k_x ux k_0 u0 + (k_x ux)^2 = v02 [ (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2 + g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2]
// (1-v02) (u^0 k_0 + u^x k_x)^2 = v02 (g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2)
// (1-v02) (u^0 k_0/k_x + u^x)^2 = v02 (g^{00} (k_0/k_x)^2 + 2 g^{x0} k_0/k_x + g^{xx})
// (1-v02) (u^0 X + u^x)^2 = v02 (g^{00} X^2 + 2 g^{x0} X + g^{xx})
// (1-v02) (u0^2 X^2 + 2 ux u0 X + ux^2) = v02 (g^{00} X^2 + 2 g^{x0} X + g^{xx})
// X^2 ( (1-v02) u0^2 - v02 g^{00}) + X (2 ux u0 (1-v02) - 2 v02 g^{x0}) + (1-v02) ux^2 - v02 g^{xx}
// a = (1-v02) u0^2 - v02 g^{00} = (1-v02) u0^2 + v02/lapse^2 <-- VERIFIED
// b = 2 ux u0 (1-v02) - 2 v02 shiftx/lapse^2 <-- VERIFIED, X->-X, because X = -w/k_1, and we are solving for -X.
// c = (1-v02) ux^2 - v02 (gupxx*psim4 - (shiftx/lapse)^2) <-- VERIFIED
// v02 = v_A^2 + c_s^2 (1 - v_A^2)
CCTK_REAL u0_SQUARED=SQR(u0);
###Output
Writing ../src/inlined_functions.C
###Markdown
We start by setting$$\boxed{\begin{align}a &= \left(1-v_{0}^{2}\right)\left(u^{0}\right)^{2} - v_{0}^{2}g^{00}\\b &= 2v_{0}^{2}g^{i0} - 2u^{i}u^{0}\left(1-v^{2}_{0}\right)\\c &= \left(1-v_{0}^{2}\right)\left(u^{i}\right)^{2} - v_{0}^{2}g^{ii}\end{align}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
//Find cplus, cminus:
CCTK_REAL a = u0_SQUARED * (1.0-v02) + v02*ONE_OVER_LAPSE_SQUARED;
CCTK_REAL b = 2.0* ( shifti*ONE_OVER_LAPSE_SQUARED * v02 - u0_SQUARED * vi * (1.0-v02) );
CCTK_REAL c = u0_SQUARED*SQR(vi) * (1.0-v02) - v02 * ( psim4*gupii -
SQR(shifti)*ONE_OVER_LAPSE_SQUARED);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we find the minimum ($-$) and maximum ($+$) characteristic speeds$$\boxed{\begin{align}c_{+} &= \max\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ ,\\c_{-} &= \min\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ .\end{align}}$$
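Note that in the cell below the original clamp `if(detm < 0.0) detm = 0.0;` is replaced by the branch-free statement `detm = sqrt(0.5*(detm + fabs(detm)));`. The two are equivalent because $\tfrac{1}{2}\left(x+|x|\right)=\max\left(x,0\right)$, so a discriminant that comes out slightly negative due to roundoff is zeroed without an `if()` statement.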
###Code
%%writefile -a $IGM_src_dir_path/inlined_functions.C
CCTK_REAL detm = b*b - 4.0*a*c;
//ORIGINAL LINE OF CODE:
//if(detm < 0.0) detm = 0.0;
//New line of code (without the if() statement) has the same effect:
detm = sqrt(0.5*(detm + fabs(detm))); /* Based on very nice suggestion from Roland Haas */
cplus = 0.5*(detm-b)/a;
cminus = -0.5*(detm+b)/a;
if (cplus < cminus) {
CCTK_REAL cp = cminus;
cminus = cplus;
cplus = cp;
}
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 4: `compute_v02` \[Back to [top](toc)\]$$\label{compute_v02}$$This function is used to evaluate $v_{0}^{2}$, a quantity necessary for the computation of the minimum and maximum characteristic speeds at each cell interface, $c_{\pm}^{r,l}$. For more information on this procedure, please see the [implementation of the `find_cp_cm` function in Step 3](find_cp_cm).We start with the sound speed:$$\boxed{c_{\rm s} = \left.\left[\frac{dP_{\rm cold}}{d\rho_{b}} + \Gamma_{\rm th}\left(\Gamma_{\rm th}-1\right)\epsilon_{\rm th}\right]\middle/h\right.}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL Gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
if(U[RHOB]<=0) { v02L=1.0; return; }
/* c_s = sound speed = (dP_c/drho + \Gamma(\Gamma-1) \epsilon_th)/h */
CCTK_REAL c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th-1.0)*eps_th)/(h);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Next we compute the square of the Alfvén speed, $v_{\rm A}$, which is given by$$\boxed{v_{\rm A}^{2} = \frac{b^{2}}{\rho_{b}h + b^{2}}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
/* v_A = Alfven speed = sqrt( b^2/(rho0 h + b^2) ) */
CCTK_REAL v_A_squared = smallb[SMALLB2]/(smallb[SMALLB2] + U[RHOB]*(h));
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, $v_{0}$ is related to the sound speed and the Alfvén speed via$$\boxed{v_{0}^{2} = v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)}\ .$$
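As a quick numerical example (sample values only): with $v_{\rm A}^{2}=0.04$ and $c_{\rm s}^{2}=0.09$ we get $v_{0}^{2} = 0.04 + 0.09\left(1-0.04\right) = 0.1264$. Note also that $v_{0}^{2}\leq1$ whenever $v_{\rm A}^{2}\leq1$ and $c_{\rm s}^{2}\leq1$, consistent with the limiting value $v_{0}^{2}=1$ that the function returns when $\rho_{b}\leq0$.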
###Code
%%writefile -a $outfile_path__inlined_functions__C
v02L = v_A_squared + c_s_squared*(1.0-v_A_squared);
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 5.e: The `font_fix__rhob_loop()` function \[Back to [top](toc)\]$$\label{font_fix__rhob_loop}$$
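The routine below is a two-level fixed-point iteration: the inner loop iterates eq. (A62) of [Etienne et al. (2011)](https://arxiv.org/pdf/1112.0568.pdf) for $\rho_{b}$ at fixed $\left(W,S_{\rm fluid}^{2}\right)$, and the outer loop then updates $W$ and $S_{\rm fluid}^{2}$ via eqs. (A60)–(A61) until both change by less than a relative tolerance. The toy program below illustrates only the relative-tolerance stopping pattern shared by both loops, using the unrelated scalar fixed-point problem $x=\cos x$ as a stand-in; nothing in it is written to `inlined_functions.C`.
```c
/* Toy illustration of the stopping criterion used below (NOT part of
 * inlined_functions.C): iterate the unrelated fixed point x = cos(x)
 * until the relative change drops below tol, or maxits is exceeded.    */
#include <stdio.h>
#include <math.h>

int main(void) {
  const int    maxits = 100;
  const double tol    = 1e-12;
  double x1 = 1.0, x0;
  int itcount = 0;

  do {
    x0 = x1;          /* previous iterate, like rhob0/j0 in the inner loop */
    x1 = cos(x0);     /* updated iterate,  like rhob1/j1                   */
    itcount++;
  } while( fabs(x1-x0) > fabs(x1)*tol && itcount < maxits );

  /* Same return convention as font_fix__rhob_loop(): 0 = success, 1 = failure. */
  int failed = (itcount >= maxits);
  printf("x = %.15f after %d iterations (failed = %d)\n", x1, itcount, failed);
  return failed;
}
```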
###Code
%%writefile -a $outfile_path__inlined_functions__C
/* Function : font_fix__rhob_loop()
* Authors : Leo Werneck
* Description : Determines rhob using the font fix prescription
* Dependencies: find_polytropic_K_and_Gamma_index()
* : compute_P_cold__eps_cold()
* Reference : Etienne et al. (2011) [https://arxiv.org/pdf/1112.0568.pdf]
*
* Inputs : maxits - maximum number of iterations allowed
* : tol - font fix tolerance
* : W - See eq. (A26)
* : Sf2 - S_{fluid}^{2}, see eq. (A24)
* : Psim6 - This is equal to sqrt(\gamma)
* : sdots - \tilde{S}_{\mu}\tilde{S}^{\mu}
* : BbardotS2 - (\bar{B}^{\mu}S_{\mu})^{2},
* : B2bar - \bar{B}^{2}, see eq. (A28)
* : CONSERVS - Array of conservative variables
* : eos - Struct of EOS parameters
* : rhob_in - Initial value of rhob
* : rhob_out - Output variable
*
* Outputs : rhob_out - Updated value of rhob
* : return value: 0 - Font fix worked
* : return value: 1 - Font fix failed
*/
inline int font_fix__rhob_loop( int maxits, CCTK_REAL tol,
CCTK_REAL W, CCTK_REAL Sf2, CCTK_REAL Psim6, CCTK_REAL sdots, CCTK_REAL BbardotS2, CCTK_REAL B2bar,
CCTK_REAL *CONSERVS,
eos_struct eos, CCTK_REAL rhob_in, CCTK_REAL &rhob_out ) {
/* Declare basic variables */
bool fontcheck=true;
int itcount = 0, j0, j1;
CCTK_REAL W0, Sf20, rhob0, rhob1, h, P_cold, eps_cold;
//////////////////////
// OUTER LOOP START //
//////////////////////
while(fontcheck && itcount < maxits) {
/* Set variables to their input values */
itcount++;
W0 = W;
Sf20 = Sf2;
rhob1 = rhob_in;
/* Based on rhob_in (i.e. rhob1), determine the
* polytropic index j1
*/
j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
//////////////////////
// INNER LOOP START //
//////////////////////
do {
/* Set rhob0/j0 to be equal to the rhob/j used
* in the previous iteration, i.e. rhob1/j1.
*/
rhob0 = rhob1;
j0 = j1;
/* Compute h using h_cold and our polytropic EOS
* .------------------------------------------.
* | h = h_cold = 1 + eps_cold + P_cold/rhob. |
* .------------------------------------------.
*/
compute_P_cold__eps_cold(eos,rhob0, P_cold, eps_cold);
h = 1.0 + eps_cold + P_cold/rhob0;
/* Update rhob using eq. (A62) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .---------------------------------------------------------------------------.
* | rhob = rho_star * Psi^{-6} / sqrt( 1 + S_fluid^{2}/( (rho_star*h)^{2} ) ) |
* .---------------------------------------------------------------------------.
*/
rhob1 = CONSERVS[RHOSTAR]*Psim6/sqrt(1.0+Sf20/SQR(CONSERVS[RHOSTAR]*h));
/* Update j1 */
j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
} while( fabs(rhob1-rhob0) > rhob1*tol || j1 != j0);
//////////////////////
// INNER LOOP END //
//////////////////////
/* Output the last value of rhob */
rhob_out = rhob1;
/* Perform physical checks on the variables
* and output the last value of h obtained
*/
compute_P_cold__eps_cold(eos,rhob_out, P_cold, eps_cold);
h = 1.0 + eps_cold + P_cold/rhob_out;
/* Set W based on eq. (A60) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .-------------------------------------------------------.
* | W = psi^{-6} * sqrt( S_fluid^{2} + (rho_star*h)^{2} ) |
* .-------------------------------------------------------.
*/
W = sqrt( Sf20 + SQR(CONSERVS[RHOSTAR]*h))*Psim6;
/* Then update S_{fluid}^{2} using eq. (A61) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .---------------------------------------------------------------------------.
* | S_fluid^{2} = ( W^{2}*S^{2} + (B.S)^2*(B^{2} + 2W) )/( ( W + B^{2} )^{2} )|
* .---------------------------------------------------------------------------.
*/
Sf2 = (SQR(W)*sdots + BbardotS2*(B2bar + 2.0*W))/SQR(W+B2bar);
if ( fabs(W-W0) < W*tol && fabs(Sf20-Sf2) < Sf2*tol) fontcheck=false;
}
//////////////////////
// OUTER LOOP END //
//////////////////////
/* If the code converged before the max
* number of iterations were exceeded,
* return 0, otherwise return 1.
*/
if(fontcheck || itcount >= maxits) {
return 1;
}
else {
return 0;
}
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 6: `lower_4vector_output_spatial_part` \[Back to [top](toc)\]$$\label{lower_4vector_output_spatial_part}$$This function is used to lower the indices of the spatial components of 4-vectors, $b^{\mu}$. Consider$$\begin{align}b_{i} &= g_{i\mu}b^{\mu} \\ &= g_{i0}b^{0} + g_{ij}b^{j} \\ &= \left(\gamma_{ij}\beta^{j}\right)b^{0} + \gamma_{ij}b^{j} \\ &= \gamma_{ij}\left(b^{j} + \beta^{j}b^{0}\right)\ ,\end{align}$$or, using the conformal metric and each component separately$$\boxed{\begin{align}b_{x} &= \psi^{4}\left[\bar{\gamma}_{xx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{xy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{xz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\\b_{y} &= \psi^{4}\left[\bar{\gamma}_{yx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{yy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{yz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\\b_{z} &= \psi^{4}\left[\bar{\gamma}_{zx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{zy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{zz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\end{align}}\ .$$
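As a minimal consistency check of the boxed expressions (sample numbers only; this is not written to `inlined_functions.C`), with a conformally flat metric $\bar{\gamma}_{ij}=\delta_{ij}$, $\psi^{4}=1$ and $\beta^{i}=0$, lowering the index must reduce to $b_{i}=b^{i}$:
```c
/* Minimal consistency check (NOT part of inlined_functions.C): with
 * bar{gamma}_ij = delta_ij, psi^4 = 1 and beta^i = 0, the boxed
 * formulas must give b_i = b^i.                                       */
#include <stdio.h>

int main(void) {
  double psi4 = 1.0, shift[3] = {0.0, 0.0, 0.0};
  double gbar[3][3] = { {1,0,0}, {0,1,0}, {0,0,1} };   /* conformal metric  */
  double bt = 0.7, bup[3] = {0.1, -0.2, 0.3};          /* sample b^0, b^i   */
  double blow[3];

  for(int i=0;i<3;i++) {
    blow[i] = 0.0;
    for(int j=0;j<3;j++) blow[i] += psi4*gbar[i][j]*( bup[j] + shift[j]*bt );
  }
  printf("b_x = %g, b_y = %g, b_z = %g (expected 0.1, -0.2, 0.3)\n",
         blow[0], blow[1], blow[2]);
  return 0;
}
```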
###Code
%%writefile -a $outfile_path__inlined_functions__C
// b_x = g_{\mu x} b^{\mu}
// = g_{t x} b^t + g_{i x} b^i
// = b^t gamma_{xj} beta^j + gamma_{ix} b^i
// = gamma_{xj} (b^j + beta^j b^t)
static inline void lower_4vector_output_spatial_part(CCTK_REAL psi4,CCTK_REAL *METRIC,CCTK_REAL *smallb, CCTK_REAL *smallb_lower) {
smallb_lower[SMALLBX] = psi4*( METRIC[GXX]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GXY]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GXZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
smallb_lower[SMALLBY] = psi4*( METRIC[GXY]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GYY]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GYZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
smallb_lower[SMALLBZ] = psi4*( METRIC[GXZ]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GYZ]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GZZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 7: `impose_speed_limit_output_u0` \[Back to [top](toc)\]$$\label{impose_speed_limit_output_u0}$$We now call upon the `impose_speed_limit_output_u0()` function inside the `inlined_functions.C` code file of `IllinoisGRMHD`. The basic algorithm performed by this function is summarized here. We start by evaluating the quantity$$\begin{align}{\rm one\_minus\_one\_over\_alpha\_u0\_squared} \equiv A &= \gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right)\\&= \frac{\gamma_{ij}}{\alpha^{2}}\left[\frac{\gamma^{ik}u_{k}}{u^{0}} - \beta^{i} + \beta^{i}\right]\left[\frac{\gamma^{j\ell}u_{\ell}}{u^{0}} - \beta^{j} + \beta^{j}\right]\\&=\frac{\gamma_{ij}u^{i}u^{j}}{\left(\alpha u^{0}\right)^{2}}\\&=\frac{\left(\alpha u^{0}\right)^{2}-1}{\left(\alpha u^{0}\right)^{2}}\\&=1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}\ \\\implies \boxed{A = 1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}}\ ,\end{align}$$where when going from line 1 to 2 and from line 3 to 4 we have used eqs. (53) and (56) from [Duez *et al.*](https://arxiv.org/pdf/astro-ph/0503420.pdf), respectively. Keep in mind that the equation we are going to implement below is$$\boxed{{\rm one\_minus\_one\_over\_alpha\_u0\_squared} = \gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right)}\ ,$$but it is important to know that this equation also equals $A$ above.
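Before moving on, it is useful to verify the boxed identity numerically. The short standalone program below (not part of `inlined_functions.C`; it assumes $\beta^{i}=0$, a diagonal physical metric $\gamma_{ij}$, and arbitrary sample values) computes $A$ from the velocities, obtains $u^{0}=1/\left(\alpha\sqrt{1-A}\right)$ from it, and then checks that the resulting 4-velocity $u^{\mu}=u^{0}\left(1,v^{i}\right)$ satisfies the normalization $g_{\mu\nu}u^{\mu}u^{\nu}=-1$:
```c
/* Standalone check of the boxed identity (NOT part of inlined_functions.C).
 * Assumptions for this example only: beta^i = 0 and a diagonal physical
 * metric gamma_ij; all numbers are arbitrary sample values.               */
#include <stdio.h>
#include <math.h>

int main(void) {
  double alpha = 1.2;
  double gamma[3] = {1.1, 1.3, 0.9};        /* diag of physical metric gamma_ij */
  double v[3]     = {0.3, -0.2, 0.1};       /* v^i = u^i/u^0                    */

  /* A = gamma_ij (v^i+beta^i)(v^j+beta^j)/alpha^2, with beta^i = 0 here */
  double A = 0.0;
  for(int i=0;i<3;i++) A += gamma[i]*v[i]*v[i]/(alpha*alpha);

  /* u^0 from the boxed relation A = 1 - 1/(alpha u^0)^2 */
  double u0 = 1.0/(alpha*sqrt(1.0 - A));

  /* Normalization g_{mu nu} u^mu u^nu with u^mu = u^0 (1, v^i),
   * g_00 = -alpha^2 and g_ij = gamma_ij (since beta^i = 0).       */
  double norm = -alpha*alpha*u0*u0;
  for(int i=0;i<3;i++) norm += gamma[i]*(u0*v[i])*(u0*v[i]);

  printf("g_{mu nu} u^mu u^nu = %.15f (expected -1)\n", norm);
  return 0;
}
```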
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void impose_speed_limit_output_u0(CCTK_REAL *METRIC,CCTK_REAL *U,CCTK_REAL psi4,CCTK_REAL ONE_OVER_LAPSE,output_stats &stats, CCTK_REAL &u0_out) {
#ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
DECLARE_CCTK_PARAMETERS;
#endif
// Derivation of first equation:
// \gamma_{ij} (v^i + \beta^i)(v^j + \beta^j)/(\alpha)^2
// = \gamma_{ij} 1/(u^0)^2 ( \gamma^{ik} u_k \gamma^{jl} u_l /(\alpha)^2 <- Using Eq. 53 of arXiv:astro-ph/0503420
// = 1/(u^0 \alpha)^2 u_j u_l \gamma^{jl} <- Since \gamma_{ij} \gamma^{ik} = \delta^k_j
// = 1/(u^0 \alpha)^2 ( (u^0 \alpha)^2 - 1 ) <- Using Eq. 56 of arXiv:astro-ph/0503420
// = 1 - 1/(u^0 \alpha)^2 <= 1
CCTK_REAL one_minus_one_over_alpha_u0_squared = psi4*(METRIC[GXX]* SQR(U[VX] + METRIC[SHIFTX]) +
2.0*METRIC[GXY]*(U[VX] + METRIC[SHIFTX])*(U[VY] + METRIC[SHIFTY]) +
2.0*METRIC[GXZ]*(U[VX] + METRIC[SHIFTX])*(U[VZ] + METRIC[SHIFTZ]) +
METRIC[GYY]* SQR(U[VY] + METRIC[SHIFTY]) +
2.0*METRIC[GYZ]*(U[VY] + METRIC[SHIFTY])*(U[VZ] + METRIC[SHIFTZ]) +
METRIC[GZZ]* SQR(U[VZ] + METRIC[SHIFTZ]) )*SQR(ONE_OVER_LAPSE);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we construct the "speed limit quantity"$${\rm ONE\_MINUS\_ONE\_OVER\_GAMMA\_SPEED\_LIMIT\_SQUARED} \equiv B = 1-\frac{1}{\gamma^{2}_{\rm speed\ limit}}\ .$$If $A > B$, then we construct the correction factor $C\equiv \sqrt{B/A}$, and adjust the velocities using$$\boxed{v^{i} \to \left(v^{i}+\beta^{i}\right)C - \beta^{i}}\ .$$
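As a quick standalone check of this rescaling (illustrative numbers only; we assume a flat conformal metric, $\psi^{4}=1$, $\alpha=1$ and $\beta^{i}=0$, none of which comes from the code in this notebook), note that since $A$ is quadratic in $\left(v^{i}+\beta^{i}\right)/\alpha$, multiplying $v^{i}+\beta^{i}$ by $C=\sqrt{B/A}$ rescales $A$ to exactly $B$:
```c
/* Standalone check of the velocity rescaling (NOT part of inlined_functions.C).
 * Assumptions for this example only: gamma_ij = delta_ij, psi^4 = 1,
 * alpha = 1, beta^i = 0; the sample v^i is deliberately unphysical.         */
#include <stdio.h>
#include <math.h>

int main(void) {
  const double GAMMA_SPEED_LIMIT = 10.0;    /* sample Lorentz-factor cap */
  double v[3] = {0.7, 0.6, 0.5}, beta[3] = {0.0, 0.0, 0.0}, alpha = 1.0;

  double A = 0.0;
  for(int i=0;i<3;i++) A += (v[i]+beta[i])*(v[i]+beta[i])/(alpha*alpha);
  double B = 1.0 - 1.0/(GAMMA_SPEED_LIMIT*GAMMA_SPEED_LIMIT);

  if(A > B) {
    double C = sqrt(B/A);                   /* correction factor */
    for(int i=0;i<3;i++) v[i] = (v[i]+beta[i])*C - beta[i];
  }

  /* Recompute A with the limited velocity: it now equals B. */
  double Anew = 0.0;
  for(int i=0;i<3;i++) Anew += (v[i]+beta[i])*(v[i]+beta[i])/(alpha*alpha);
  printf("A_new = %.15f, B = %.15f\n", Anew, B);
  return 0;
}
```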
###Code
%%writefile -a $outfile_path__inlined_functions__C
/*** Limit velocity to GAMMA_SPEED_LIMIT ***/
const CCTK_REAL ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED = 1.0-1.0/SQR(GAMMA_SPEED_LIMIT);
if(one_minus_one_over_alpha_u0_squared > ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED) {
CCTK_REAL correction_fac = sqrt(ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED/one_minus_one_over_alpha_u0_squared);
U[VX] = (U[VX] + METRIC[SHIFTX])*correction_fac-METRIC[SHIFTX];
U[VY] = (U[VY] + METRIC[SHIFTY])*correction_fac-METRIC[SHIFTY];
U[VZ] = (U[VZ] + METRIC[SHIFTZ])*correction_fac-METRIC[SHIFTZ];
one_minus_one_over_alpha_u0_squared=ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED;
stats.failure_checker+=1000;
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, since $A$ is evaluated using the first line above, namely$$\gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right) = A = 1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}\ ,$$we can then compute $u^{0}$ by simply doing$$\boxed{u^{0} = \frac{1}{\alpha\sqrt{1-A}}}\ .$$
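For example, when the speed limiter above has just saturated, $A = B = 1-1/\gamma^{2}_{\rm speed\ limit}$, and this formula gives $\alpha u^{0} = 1/\sqrt{1-A} = \gamma_{\rm speed\ limit}$, i.e. the Lorentz factor measured by the normal observer sits exactly at the imposed limit.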
###Code
%%writefile -a $outfile_path__inlined_functions__C
// A = 1.0-one_minus_one_over_alpha_u0_squared = 1-(1-1/(al u0)^2) = 1/(al u0)^2
// 1/sqrt(A) = al u0
//CCTK_REAL alpha_u0_minus_one = 1.0/sqrt(1.0-one_minus_one_over_alpha_u0_squared)-1.0;
//u0_out = (alpha_u0_minus_one + 1.0)*ONE_OVER_LAPSE;
CCTK_REAL alpha_u0 = 1.0/sqrt(1.0-one_minus_one_over_alpha_u0_squared);
if(std::isnan(alpha_u0*ONE_OVER_LAPSE)) printf("BAD FOUND NAN U0 CALC: %.15e %.15e %.15e | %.15e %.15e\n",alpha_u0,ONE_OVER_LAPSE,one_minus_one_over_alpha_u0_squared,psi4, U[VX]);
u0_out = alpha_u0*ONE_OVER_LAPSE;
}
// The two lines of code below are written to reduce roundoff error and were in the above function. I don't think they reduce error.
// one_over_alpha_u0 = sqrt(1.0-one_minus_one_over_alpha_u0_squared);
/* Proof of following line: */
/* [ 1-1/(alphau0)^2 ] / [ 1/(alphau0) (1 + 1/(alphau0)) ] */
/* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ 1/(alphau0) + 1/(alphau0)^2 ] */
/* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ (alphau0 + 1)/(alphau0)^2 ] */
/* = [ (alphau0)^2 - 1) ] / [ (alphau0 + 1) ] */
/* [ (alphau0 + 1) (alphau0 - 1) ] / [ (alphau0 + 1) ] */
/* = alphau0 - 1 */
//alpha_u0_minus_one = one_minus_one_over_alpha_u0_squared/one_over_alpha_u0/(1.0+one_over_alpha_u0);
//u0_out = (alpha_u0_minus_one+1.0)*ONE_OVER_LAPSE;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 8: `enforce_pressure_floor_ceiling` \[Back to [top](toc)\]$$\label{enforce_pressure_floor_ceiling}$$After the Newton-Raphson solver has successfully found a set of primitives, the primitives are checked for physicality, and if they are not in the physical range, they are minimally modified until they return to the physical range. First, if the velocity is found to be superluminal, the speed is reduced to `IllinoisGRMHD`’s default Lorentz factor limit, a procedure which we already explained above when we discussed the `impose_speed_limit_output_u0` function. Next, `IllinoisGRMHD` does not include any cooling mechanism, which means that for evolutions adopting a $\Gamma$-law equation of state, the pressure should not physically drop below $P_{\rm cold}$. So a pressure floor of $0.9P_{\rm cold}$ is imposed. Increasing this floor to $P_{\rm cold}$ exactly results in large central density drifts in TOV star evolutions. **NOTE**: Please keep in mind that the floor and ceiling values presented here were found ***empirically***.
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void enforce_pressure_floor_ceiling(output_stats &stats,CCTK_REAL kpoly,CCTK_REAL P_cold,CCTK_REAL Psi6,const CCTK_REAL Psi6threshold,CCTK_REAL rho_b,const CCTK_REAL rhobatm, CCTK_REAL &P) {
CCTK_REAL P_min=0.9*P_cold;
if(P<P_min) {
stats.failure_checker+=10;
P=P_min;
}
//MAX(P,P_min);
//if(P < P_min) P=1.0*P_cold;
/* OLD: Discarded because lower limit is unphysical.
if(P <= 0.5*kpoly*P_cold) {
P=0.5*kpoly*P_cold;
}
*/
###Output
Appending to ../src/inlined_functions.C
###Markdown
Simulations can crash in the other extreme, if $P/P_{\rm cold}$ becomes too large. This typically only happens in very low density regions or inside black holes. So at densities $\rho_{b}<100\rho_{\rm atm}$ or deep inside black hole horizons, a ceiling on $P$ of $100P_{\rm cold}$ is enforced (see Appendix A of [Etienne *et al.* (2012)](https://arxiv.org/abs/1112.0568) for more details).We also introduce a parameter, $\psi^{6}_{\rm threshold}$, which determines whether the region under consideration is deep inside the BH horizon or not. For regions deep inside the BH horizon, defined by $\sqrt{\gamma} = \psi^{6} > \psi^{6}_{\rm threshold}$, the primary goal is to keep the evolution stable and prevent inaccurate data from leaking out of the BH horizon. It was determined that in this situation, a better ceiling on $P$ is $10^{5}P_{\rm cold}$.
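To make the floor/ceiling logic of this step concrete, the toy program below applies it to a few sample values of $P$ (all numbers are made up for illustration; they are not `IllinoisGRMHD` defaults and this program is not written to `inlined_functions.C`), assuming a point outside the horizon ($\psi^{6}<\psi^{6}_{\rm threshold}$) in a low-density region ($\rho_{b}<100\rho_{\rm atm}$):
```c
/* Illustration of the floor/ceiling logic of this step with made-up numbers
 * (NOT defaults of IllinoisGRMHD, NOT part of inlined_functions.C). We take
 * a point outside the horizon (Psi6 < Psi6threshold) in a low-density
 * region (rho_b < 100*rhobatm), so P is clamped to [0.9*P_cold, 100*P_cold]. */
#include <stdio.h>

int main(void) {
  double P_cold = 1.0e-8, rhobatm = 1.0e-10, rho_b = 5.0e-9;
  double Psi6 = 1.0, Psi6threshold = 60.0;

  double P_min = 0.9*P_cold;
  double P_max = (Psi6 > Psi6threshold) ? 1.0e5*P_cold : 100.0*P_cold;

  double samples[3] = {2.0e-9, 5.0e-8, 5.0e-6};   /* below floor / fine / above ceiling */
  for(int n=0;n<3;n++) {
    double P = samples[n];
    if(P < P_min) P = P_min;
    if( (rho_b < 100.0*rhobatm || Psi6 > Psi6threshold) && P > P_max ) P = P_max;
    printf("P_in = %.1e  ->  P_out = %.1e\n", samples[n], P);
  }
  return 0;
}
```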
###Code
%%writefile -a $outfile_path__inlined_functions__C
//CCTK_REAL P_max = 10.0*P_cold;
CCTK_REAL P_max = 100.0*P_cold;
if(Psi6 > Psi6threshold) P_max = 1e5*P_cold; // <-- better than 10.
if((rho_b < 100.0*rhobatm || Psi6 > Psi6threshold) && P>P_max) {
P=P_max;
stats.failure_checker+=100;
}
/*
CCTK_REAL rho_horiz_cap = 1000.0*rhobatm;
//New density damping mechanism inside the horizon
if(Psi6 > Psi6threshold && rho_b>rho_horiz_cap) {
CCTK_REAL six_phi=log(Psi6);
CCTK_REAL six_phithreshold=log(Psi6threshold);
CCTK_REAL Psi6max_approx=350000;
rho_b = rho_horiz_cap+(rho_b-rho_horiz_cap)*exp(-200.0*SQR((six_phi-six_phithreshold)/log(Psi6max_approx)));
}
*/
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 9: `compute_smallba_b2_and_u_i_over_u0_psi4` \[Back to [top](toc)\]$$\label{compute_smallba_b2_and_u_i_over_u0_psi4}$$In this inlined function we will compute quantities related to the magnetic field measured in the comoving fluid frame, $b^{\mu}$.We will need the following identities$$\begin{align}v^{i} &= \frac{u^{i}}{u^{0}}\ ,\\B^{0}_{(u)} &= \frac{u_{i}B^{i}}{\alpha}\ ,\\B^{i}_{(u)} &= \frac{1}{u^{0}}\left(\frac{B^{i}}{\alpha} + u^{i}B^{0}_{(u)}\right)\ ,\\b^{\mu} &= \frac{B^{\mu}_{(u)}}{\sqrt{4\pi}}\ .\end{align}$$We start by setting the relation$$b^{0} = \frac{u_{i}B^{i}}{\alpha\sqrt{4\pi}} \implies \boxed{\alpha\sqrt{4\pi}b^{0} = u_{i}B^{i}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void compute_smallba_b2_and_u_i_over_u0_psi4(CCTK_REAL *METRIC,CCTK_REAL *METRIC_LAP_PSI4,CCTK_REAL *U,CCTK_REAL u0L,CCTK_REAL ONE_OVER_LAPSE_SQRT_4PI,
CCTK_REAL &u_x_over_u0_psi4,CCTK_REAL &u_y_over_u0_psi4,CCTK_REAL &u_z_over_u0_psi4,CCTK_REAL *smallb) {
// NOW COMPUTE b^{\mu} and b^2 = b^{\mu} b^{\nu} g_{\mu \nu}
CCTK_REAL ONE_OVER_U0 = 1.0/u0L;
CCTK_REAL shiftx_plus_vx = (METRIC[SHIFTX]+U[VX]);
CCTK_REAL shifty_plus_vy = (METRIC[SHIFTY]+U[VY]);
CCTK_REAL shiftz_plus_vz = (METRIC[SHIFTZ]+U[VZ]);
// Eq. 56 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// u_i = gamma_{ij} u^0 (v^j + beta^j), gamma_{ij} is the physical metric, and gamma_{ij} = Psi4 * METRIC[Gij], since METRIC[Gij] is the conformal metric.
u_x_over_u0_psi4 = METRIC[GXX]*shiftx_plus_vx + METRIC[GXY]*shifty_plus_vy + METRIC[GXZ]*shiftz_plus_vz;
u_y_over_u0_psi4 = METRIC[GXY]*shiftx_plus_vx + METRIC[GYY]*shifty_plus_vy + METRIC[GYZ]*shiftz_plus_vz;
u_z_over_u0_psi4 = METRIC[GXZ]*shiftx_plus_vx + METRIC[GYZ]*shifty_plus_vy + METRIC[GZZ]*shiftz_plus_vz;
// Eqs. 23 and 31 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// Compute alpha sqrt(4 pi) b^t = u_i B^i
CCTK_REAL alpha_sqrt_4pi_bt = ( u_x_over_u0_psi4*U[BX_CENTER] + u_y_over_u0_psi4*U[BY_CENTER] + u_z_over_u0_psi4*U[BZ_CENTER] ) * METRIC_LAP_PSI4[PSI4]*u0L;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we compute$$\begin{align}b^{i} &= \frac{B^{i}_{(u)}}{\sqrt{4\pi}}\\ &= \frac{1}{u^{0}\sqrt{4\pi}}\left(\frac{B^{i}}{\alpha} + B^{0}_{(u)}u^{i}\right)\\ &= \frac{1}{u^{0}\sqrt{4\pi}}\left(\frac{B^{i}}{\alpha} + \sqrt{4\pi}b^{0}u^{i}\right)\\ &= \frac{1}{\alpha\sqrt{4\pi}}\left(\frac{B^{i}}{u^{0}} + \alpha\sqrt{4\pi}b^{0}\frac{u^{i}}{u^{0}}\right)\\\implies &\boxed{b^{i} = \frac{1}{\alpha\sqrt{4\pi}}\left(\frac{B^{i}}{u^{0}} + \alpha\sqrt{4\pi}b^{0}v^{i}\right)}\ .\end{align}$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// Eq. 24 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// b^i = B^i_u / sqrt(4 pi)
// b^i = ( B^i/alpha + B^0_u u^i ) / ( u^0 sqrt(4 pi) )
// b^i = ( B^i/alpha + sqrt(4 pi) b^t u^i ) / ( u^0 sqrt(4 pi) )
// b^i = ( B^i + alpha sqrt(4 pi) b^t u^i ) / ( alpha u^0 sqrt(4 pi) )
// b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^t u^i/u^0 ) / ( alpha sqrt(4 pi) )
// b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^t v^i ) / ( alpha sqrt(4 pi) )
smallb[SMALLBX] = (U[BX_CENTER]*ONE_OVER_U0 + U[VX]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
smallb[SMALLBY] = (U[BY_CENTER]*ONE_OVER_U0 + U[VY]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
smallb[SMALLBZ] = (U[BZ_CENTER]*ONE_OVER_U0 + U[VZ]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
// Eq. 23 in http://arxiv.org/pdf/astro-ph/0503420.pdf, with alpha sqrt (4 pi) b^2 = u_i B^i already computed above
smallb[SMALLBT] = alpha_sqrt_4pi_bt * ONE_OVER_LAPSE_SQRT_4PI;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, we compute$$\begin{align}b^{2} &= g_{\mu\nu}b^{\mu}b^{\nu}\\ &= g_{00}\left(b^{0}\right)^{2} + g_{ij}b^{i}b^{j} + 2g_{0i}b^{0}b^{i}\\ &= \left(-\alpha^{2} + \gamma_{ij}\beta^{i}\beta^{j}\right)\left(b^{0}\right)^{2} + \gamma_{ij}b^{i}b^{j} + 2b^{0}\gamma_{ij}\beta^{j}b^{i}\\ &= -\left(\alpha b^{0}\right)^{2} + \gamma_{ij}\left[b^{i}b^{j} + 2b^{0}b^{i}\beta^{j} + \left(b^{0}\right)^{2}\beta^{i}\beta^{j}\right]\\\implies &\boxed{b^{2} = -\left(\alpha b^{0}\right)^{2} + \gamma_{ij}\left(b^{i} + b^{0}\beta^{i}\right)\left(b^{j} + b^{0}\beta^{j}\right)}\end{align}$$
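As a sanity check of the three boxed relations of this step, the standalone program below evaluates them in flat space with the fluid at rest ($\alpha=1$, $\beta^{i}=0$, $\psi^{4}=1$, $u^{0}=1$, $v^{i}=0$, hence $u_{i}=0$); the sample magnetic-field components are arbitrary. In this limit we expect $b^{0}=0$, $b^{i}=B^{i}/\sqrt{4\pi}$ and $b^{2}=B^{2}/(4\pi)$ (this check is not part of `inlined_functions.C`):
```c
/* Flat-space sanity check of the boxed relations (NOT part of
 * inlined_functions.C). With alpha = 1, beta^i = 0, psi^4 = 1 and the
 * fluid at rest (u^0 = 1, v^i = 0, hence u_i = 0) we expect
 * b^0 = 0, b^i = B^i/sqrt(4 pi) and b^2 = B^2/(4 pi).                   */
#include <stdio.h>
#include <math.h>

int main(void) {
  const double FOURPI = 4.0*acos(-1.0);
  double lapse = 1.0, u0 = 1.0;
  double v[3] = {0.0, 0.0, 0.0};
  double B[3] = {1.0e-3, 2.0e-3, -5.0e-4};                /* sample B^i           */
  double u_lower[3] = {0.0, 0.0, 0.0};                    /* u_i for static fluid */
  double ONE_OVER_LAPSE_SQRT_4PI = 1.0/(lapse*sqrt(FOURPI));

  /* alpha sqrt(4 pi) b^0 = u_i B^i */
  double alpha_sqrt_4pi_bt = u_lower[0]*B[0] + u_lower[1]*B[1] + u_lower[2]*B[2];
  double bt = alpha_sqrt_4pi_bt*ONE_OVER_LAPSE_SQRT_4PI;

  /* b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^0 v^i ) / ( alpha sqrt(4 pi) ) */
  double b[3], B2 = 0.0, b2;
  for(int i=0;i<3;i++) { b[i] = (B[i]/u0 + v[i]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI; B2 += B[i]*B[i]; }

  /* b^2 = -(alpha b^0)^2 + delta_ij b^i b^j  (beta^i = 0, gamma_ij = delta_ij) */
  b2 = -lapse*lapse*bt*bt + b[0]*b[0] + b[1]*b[1] + b[2]*b[2];

  printf("b^0 = %g, b^2 = %.10e, B^2/(4 pi) = %.10e\n", bt, b2, B2/FOURPI);
  return 0;
}
```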
###Code
%%writefile -a $outfile_path__inlined_functions__C
// b^2 = g_{\mu \nu} b^{\mu} b^{\nu}
// = gtt bt^2 + gxx bx^2 + gyy by^2 + gzz bz^2 + 2 (gtx bt bx + gty bt by + gtz bt bz + gxy bx by + gxz bx bz + gyz by bz)
// = (-al^2 + gamma_{ij} betai betaj) bt^2 + b^i b^j gamma_{ij} + 2 g_{t i} b^t b^i
// = - (alpha b^t)^2 + (b^t)^2 gamma_{ij} beta^i beta^j + b^i b^j gamma_{ij} + 2 b^t g_{t i} b^i
// = - (alpha b^t)^2 + (b^t)^2 gamma_{ij} beta^i beta^j + b^i b^j gamma_{ij} + 2 b^t (gamma_{ij} beta^j) b^i
// = - (alpha b^t)^2 + gamma_{ij} ((b^t)^2 beta^i beta^j + b^i b^j + 2 b^t beta^j b^i)
// = - (alpha b^t)^2 + gamma_{ij} ((b^t)^2 beta^i beta^j + 2 b^t beta^j b^i + b^i b^j)
// = - (alpha b^t)^2 + gamma_{ij} (b^i + b^t beta^i) (b^j + b^t beta^j)
CCTK_REAL bx_plus_shiftx_bt = smallb[SMALLBX]+METRIC[SHIFTX]*smallb[SMALLBT];
CCTK_REAL by_plus_shifty_bt = smallb[SMALLBY]+METRIC[SHIFTY]*smallb[SMALLBT];
CCTK_REAL bz_plus_shiftz_bt = smallb[SMALLBZ]+METRIC[SHIFTZ]*smallb[SMALLBT];
smallb[SMALLB2] = -SQR(METRIC_LAP_PSI4[LAPSE]*smallb[SMALLBT]) +
( METRIC[GXX]*SQR(bx_plus_shiftx_bt) + METRIC[GYY]*SQR(by_plus_shifty_bt) + METRIC[GZZ]*SQR(bz_plus_shiftz_bt) +
2.0*( METRIC[GXY]*(bx_plus_shiftx_bt)*(by_plus_shifty_bt) +
METRIC[GXZ]*(bx_plus_shiftx_bt)*(bz_plus_shiftz_bt) +
METRIC[GYZ]*(by_plus_shifty_bt)*(bz_plus_shiftz_bt) ) ) * METRIC_LAP_PSI4[PSI4]; // mult by psi4 because METRIC[GIJ] is the conformal metric.
/***********************************************************/
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 11: Code validation \[Back to [top](toc)\]$$\label{code_validation}$$First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
###Code
# Verify if the code generated by this tutorial module
# matches the original IllinoisGRMHD source code
# First download the original IllinoisGRMHD source code
import urllib
from os import path
original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/inlined_functions.C"
original_IGM_file_name = "inlined_functions-original.C"
original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
try:
original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
# If all else fails, hope wget does the job
!wget -O $original_IGM_file_path $original_IGM_file_url
# Perform validation
Validation__inlined_functions__C = !diff $original_IGM_file_path $outfile_path__inlined_functions__C
if Validation__inlined_functions__C == []:
# If the validation passes, we do not need to store the original IGM source code file
!rm $original_IGM_file_path
print("Validation test for inlined_functions.C: PASSED!")
else:
# If the validation fails, we keep the original IGM source code file
print("Validation test for inlined_functions.C: FAILED!")
# We also print out the difference between the code generated
# in this tutorial module and the original IGM source code
print("Diff:")
for diff_line in Validation__inlined_functions__C:
print(diff_line)
###Output
Validation test for inlined_functions.C: FAILED!
Diff:
1,4c1
< static inline CCTK_REAL fasterpow_ppm_reconstruct(CCTK_REAL inputvar,CCTK_REAL inputpow) {
< if(inputpow==2.0) return SQR(inputvar);
< return pow(inputvar,inputpow);
< }
---
>
10c7
< // one, which overestimates the max. speeds by a factor of ~2.
---
> // one, which overestimates the max. speeds by a factor of ~2.
15c12
< // kcm^2 = K_{\mu} K^{\mu},
---
> // kcm^2 = K_{\mu} K^{\mu},
38a36
>
43a42
>
49c48
<
---
>
59c58,59
< static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
---
>
> static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL Gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
64c64,65
< CCTK_REAL c_s_squared = (dPcold_drho + gamma_th*(gamma_th-1.0)*eps_th)/(h);
---
> CCTK_REAL c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th-1.0)*eps_th)/(h);
>
66a68
>
70,84c72,180
< static inline void compute_P_cold__eps_cold__dPcold_drho__eps_th__h__gamma_cold(CCTK_REAL *U, eos_struct &eos,
< CCTK_REAL &P_cold,CCTK_REAL &eps_cold,CCTK_REAL &dPcold_drho,CCTK_REAL &eps_th,CCTK_REAL &h,
< CCTK_REAL &gamma_cold) {
< // This code handles equations of state of the form defined
< // in Eqs 13-16 in http://arxiv.org/pdf/0802.0200.pdf
<
< if(U[RHOB]==0) {
< P_cold = 0.0;
< eps_cold = 0.0;
< dPcold_drho = 0.0;
< eps_th = 0.0;
< h = 0.0;
< gamma_cold = eos.gamma_tab[0];
< return;
< }
---
> /* Function : font_fix__rhob_loop()
> * Authors : Leo Werneck
> * Description : Determines rhob using the font fix prescription
> * Dependencies: find_polytropic_K_and_Gamma_index()
> * : compute_P_cold__eps_cold()
> * Reference : Etienne et al. (2011) [https://arxiv.org/pdf/1112.0568.pdf]
> *
> * Inputs : maxits - maximum number of iterations allowed
> * : tol - font fix tolerance
> * : W - See eq. (A26)
> * : Sf2 - S_{fluid}^{2}, see eq. (A24)
> * : Psim6 - This is equal to sqrt(\gamma)
> * : sdots - \tilde{S}_{\mu}\tilde{S}^{\mu}
> * : BbardotS2 - (\bar{B}^{\mu}S_{\mu})^{2},
> * : B2bar - \bar{B}^{2}, see eq. (A28)
> * : CONSERVS - Array of conservative variables
> * : eos - Struct of EOS parameters
> * : rhob_in - Initial value of rhob
> * : rhob_out - Output variable
> *
> * Outputs : rhob_out - Updated value of rhob
> * : return value: 0 - Font fix worked
> * : return value: 1 - Font fix failed
> */
> inline int font_fix__rhob_loop( int maxits, CCTK_REAL tol,
> CCTK_REAL W, CCTK_REAL Sf2, CCTK_REAL Psim6, CCTK_REAL sdots, CCTK_REAL BbardotS2, CCTK_REAL B2bar,
> CCTK_REAL *CONSERVS,
> eos_struct eos, CCTK_REAL rhob_in, CCTK_REAL &rhob_out ) {
>
> /* Declare basic variables */
> bool fontcheck=true;
> int itcount = 0, j0, j1;
> CCTK_REAL W0, Sf20, rhob0, rhob1, h, P_cold, eps_cold;
>
> //////////////////////
> // OUTER LOOP START //
> //////////////////////
> while(fontcheck && itcount < maxits) {
>
> /* Set variables to their input values */
> itcount++;
> W0 = W;
> Sf20 = Sf2;
> rhob1 = rhob_in;
>
> /* Based on rhob_in (i.e. rhob1), determine the
> * polytropic index j1
> */
> j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
>
> //////////////////////
> // INNER LOOP START //
> //////////////////////
> do {
>
> /* Set rhob0/j0 to be equal to the rhob/j used
> * in the previous iteration, i.e. rhob1/j1.
> */
> rhob0 = rhob1;
> j0 = j1;
>
> /* Compute h using h_cold and our polytropic EOS
> * .------------------------------------------.
> * | h = h_cold = 1 + eps_cold + P_cold/rhob. |
> * .------------------------------------------.
> */
> compute_P_cold__eps_cold(eos,rhob0, P_cold, eps_cold);
> h = 1.0 + eps_cold + P_cold/rhob0;
>
> /* Update rhob using eq. (A62) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .---------------------------------------------------------------------------.
> * | rhob = rho_star * Psi^{-6} / sqrt( 1 + S_fluid^{2}/( (rho_star*h)^{2} ) ) |
> * .---------------------------------------------------------------------------.
> */
> rhob1 = CONSERVS[RHOSTAR]*Psim6/sqrt(1.0+Sf20/SQR(CONSERVS[RHOSTAR]*h));
>
> /* Update j1 */
> j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
>
> } while( fabs(rhob1-rhob0) > rhob1*tol || j1 != j0);
> //////////////////////
> // INNER LOOP END //
> //////////////////////
>
> /* Output the last value of rhob */
> rhob_out = rhob1;
>
> /* Perform physical checks on the variables
> * and output the last value of h obtained
> */
> compute_P_cold__eps_cold(eos,rhob_out, P_cold, eps_cold);
> h = 1.0 + eps_cold + P_cold/rhob_out;
>
> /* Set W based on eq. (A60) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .-------------------------------------------------------.
> * | W = psi^{-6} * sqrt( S_fluid^{2} + (rho_star*h)^{2} ) |
> * .-------------------------------------------------------.
> */
> W = sqrt( Sf20 + SQR(CONSERVS[RHOSTAR]*h))*Psim6;
>
> /* Then update S_{fluid}^{2} using eq. (A61) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .---------------------------------------------------------------------------.
> * | S_fluid^{2} = ( W^{2}*S^{2} + (B.S)^2*(B^{2} + 2W) )/( ( W + B^{2} )^{2} )|
> * .---------------------------------------------------------------------------.
> */
> Sf2 = (SQR(W)*sdots + BbardotS2*(B2bar + 2.0*W))/SQR(W+B2bar);
86c182
< CCTK_REAL U_RHOB_inv = 1.0/U[RHOB];
---
> if ( fabs(W-W0) < W*tol && fabs(Sf20-Sf2) < Sf2*tol) fontcheck=false;
88,111d183
< if(eos.neos==1) {
< // Eq. 14 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{cold} = K_i rho_i^{\Gamma_i}
< P_cold = eos.k_tab[0]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[0]);
< // Eq. 16 of http://arxiv.org/pdf/0802.0200.pdf :
< // \epsilon_{cold} = \int ( P_{cold}(rho) / rho^2 ) drho
< // = \int ( K_0 \rho^{\Gamma_0 - 2} ) drho
< // = ( K_0 \rho^{\Gamma_0 - 1} ) / (\Gamma_0 - 1)
< // = ( P_{cold} / rho ) / (\Gamma_0 - 1)
< eps_cold = P_cold*U_RHOB_inv/(eos.gamma_tab[0]-1.0);
< // dPcold/drho = K_i \Gamma_i rho_i^{\Gamma_i-1} = \Gamma_i P_{cold} / rho
< dPcold_drho = eos.gamma_tab[0]*P_cold*U_RHOB_inv;
< // Eq. 15 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{th} = (\Gamma_{th} - 1) \rho_0 \epsilon_{th},
< // Eq. 13 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{th} = P - P_{cold}
< // -> P - P_{cold} = (\Gamma_{th} - 1) \rho_0 \epsilon_{th}
< // -> \epsilon_{th} = ( P - P_{cold} ) / [ (\Gamma_{th} - 1) \rho_0 ]
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< // Just below Eq. 16 in http://arxiv.org/pdf/astro-ph/0503420.pdf :
< // h = 1 + \epsilon + P/rho
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[0];
< return;
113,125c185,194
<
< // See comments above for the eos.neos==1 case for relevant
< // equations & references; the extension to arbitrary "nn"
< // is straightforward.
< for(int nn=1;nn<eos.neos;nn++) {
< if (U[RHOB] <= eos.rho_tab[nn] && U[RHOB] > eos.rho_tab[nn-1]) {
< P_cold = eos.k_tab[nn]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[nn]);
< eps_cold = eos.eps_tab[nn-1] + (P_cold*U_RHOB_inv - eos.P_tab[nn-1]/eos.rho_tab[nn-1])/(eos.gamma_tab[nn]-1.0);
< dPcold_drho = eos.gamma_tab[nn]*P_cold*U_RHOB_inv;
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[nn];
< }
---
> //////////////////////
> // OUTER LOOP END //
> //////////////////////
>
> /* If the code converged before the max
> * number of iterations were exceeded,
> * return 0, otherwise return 1.
> */
> if(fontcheck || itcount >= maxits) {
> return 1;
127,133c196,197
< if (U[RHOB] > eos.rho_tab[eos.neos-1]) {
< P_cold = eos.k_tab[eos.neos]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[eos.neos]);
< eps_cold = eos.eps_tab[eos.neos-1] + (P_cold*U_RHOB_inv - eos.P_tab[eos.neos-1]/eos.rho_tab[eos.neos-1])/(eos.gamma_tab[eos.neos]-1.0);
< dPcold_drho = eos.gamma_tab[eos.neos]*P_cold*U_RHOB_inv;
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[eos.neos];
---
> else {
> return 0;
136a201
>
149a215
>
150a217,218
>
> #ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
151a220,221
> #endif
>
154c224
< // \gamma_{ij} (v^i + \beta^i)(v^j + \beta^j)/(\alpha)^2
---
> // \gamma_{ij} (v^i + \beta^i)(v^j + \beta^j)/(\alpha)^2
165a236
>
176a248
>
187,195c259,267
< // one_over_alpha_u0 = sqrt(1.0-one_minus_one_over_alpha_u0_squared);
< /* Proof of following line: */
< /* [ 1-1/(alphau0)^2 ] / [ 1/(alphau0) (1 + 1/(alphau0)) ] */
< /* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ 1/(alphau0) + 1/(alphau0)^2 ] */
< /* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ (alphau0 + 1)/(alphau0)^2 ] */
< /* = [ (alphau0)^2 - 1) ] / [ (alphau0 + 1) ] */
< /* [ (alphau0 + 1) (alphau0 - 1) ] / [ (alphau0 + 1) ] */
< /* = alphau0 - 1 */
< //alpha_u0_minus_one = one_minus_one_over_alpha_u0_squared/one_over_alpha_u0/(1.0+one_over_alpha_u0);
---
> // one_over_alpha_u0 = sqrt(1.0-one_minus_one_over_alpha_u0_squared);
> /* Proof of following line: */
> /* [ 1-1/(alphau0)^2 ] / [ 1/(alphau0) (1 + 1/(alphau0)) ] */
> /* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ 1/(alphau0) + 1/(alphau0)^2 ] */
> /* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ (alphau0 + 1)/(alphau0)^2 ] */
> /* = [ (alphau0)^2 - 1) ] / [ (alphau0 + 1) ] */
> /* [ (alphau0 + 1) (alphau0 - 1) ] / [ (alphau0 + 1) ] */
> /* = alphau0 - 1 */
> //alpha_u0_minus_one = one_minus_one_over_alpha_u0_squared/one_over_alpha_u0/(1.0+one_over_alpha_u0);
197a270
>
212a286
>
224c298
<
---
>
235c309,310
< static inline void compute_smallba_b2_and_u_i_over_u0_psi4(CCTK_REAL *METRIC,CCTK_REAL *METRIC_LAP_PSI4,CCTK_REAL *U,CCTK_REAL u0L,CCTK_REAL ONE_OVER_LAPSE_SQRT_4PI,
---
>
> static inline void compute_smallba_b2_and_u_i_over_u0_psi4(CCTK_REAL *METRIC,CCTK_REAL *METRIC_LAP_PSI4,CCTK_REAL *U,CCTK_REAL u0L,CCTK_REAL ONE_OVER_LAPSE_SQRT_4PI,
252a328
>
265a342
>
276c353
< CCTK_REAL bz_plus_shiftz_bt = smallb[SMALLBZ]+METRIC[SHIFTZ]*smallb[SMALLBT];
---
> CCTK_REAL bz_plus_shiftz_bt = smallb[SMALLBZ]+METRIC[SHIFTZ]*smallb[SMALLBT];
283a361
>
###Markdown
Step 12: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-IllinoisGRMHD__inlined_functions.pdf](Tutorial-IllinoisGRMHD__inlined_functions.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means).
###Code
latex_nrpy_style_path = os.path.join(nrpy_dir_path,"latex_nrpy_style.tplx")
#!jupyter nbconvert --to latex --template $latex_nrpy_style_path --log-level='WARN' Tutorial-IllinoisGRMHD__inlined_functions.ipynb
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
_____no_output_____
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Tutorial-IllinoisGRMHD: inlined_functions.C Authors: Leo Werneck & Zach Etienne**This module is currently under development** In this tutorial module we explain a series of inline functions that are used by major functions within IllinoisGRMHD. Required and recommended citations:* **(Required)** Etienne, Z. B., Paschalidis, V., Haas R., Mösta P., and Shapiro, S. L. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).* **(Required)** Noble, S. C., Gammie, C. F., McKinney, J. C., Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).* **(Recommended)** Del Zanna, L., Bucciantini N., Londrillo, P. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)). Table of Contents$$\label{toc}$$This module is organized as follows0. [Step 0](src_dir): **Source directory creation**1. [Step 1](introduction): **Introduction**1. [Step 2](pow): **`pow`**1. [Step 3](find_cp_cm): **`find_cp_cm`**1. [Step 4](compute_v02): **`compute_v02`**1. [Step 5](ppeos__c_code): **Polytropic Equations of State** 1. [Step 5.a](ppeos__c_code__prelim): *Preliminary treatment of the input* 1. [Step 5.a.i](ppeos__c_code__prelim__computing_ktab): Determining $\left\{K_{1},K_{2},\ldots,K_{\rm neos}\right\}$ 1. [Step 5.a.ii](ppeos__c_code__prelim__computing_eps_integ_consts): Determining $\left\{C_{0},C_{1},C_{2},\ldots,C_{\rm neos}\right\}$ 1. [Step 5.b](ppeos__c_code__eos_struct_setup) *Setting up the `eos_struct`* 1. [Step 5.c](ppeos__c_code__find_polytropic_k_and_gamma_index) *The `find_polytropic_K_and_Gamma_index()` function* 1. [Step 5.d](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold): *The new `compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold()` function* 1. [Step 5.d.i](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold__case1__rhob_equal_zero): Case 1: $\rho_{b} = 0$ 1. [Step 5.d.ii](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold__case2__single_polytropic_eos): Case 2: Polytropic EOSs 1. [Step 5.e](compute_p_cold__eps_cold): New function: `compute_P_cold__eps_cold()`1. [Step 6](lower_4vector_output_spatial_part): **`lower_4vector_output_spatial_part`**1. [Step 7](impose_speed_limit_output_u0): **`impose_speed_limit_output_u0`**1. [Step 8](enforce_pressure_floor_ceiling): **`enforce_pressure_floor_ceiling`**1. [Step 9](compute_smallba_b2_and_u_i_over_u0_psi4): **`compute_smallba_b2_and_u_i_over_u0_psi4`**1. [Step 11](code_validation): **Code validation**1. [Step 12](latex_pdf_output): **Output this notebook to $\LaTeX$-formatted PDF file** Step 0: Source directory creation \[Back to [top](toc)\]$$\label{src_dir}$$We will now use the [cmdline_helper.py NRPy+ module](Tutorial-Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet.
###Code
# Step 0: Creation of the IllinoisGRMHD source directory
# Step 0a: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step 0b: Load up cmdline_helper and create the directory
import cmdline_helper as cmd
IGM_src_dir_path = os.path.join("..","src")
cmd.mkdir(IGM_src_dir_path)
# Step 0c: Create the output file path
outfile_path__inlined_functions__C = os.path.join(IGM_src_dir_path,"inlined_functions.C")
###Output
_____no_output_____
###Markdown
Step 1: Introduction \[Back to [top](toc)\]$$\label{introduction}$$In this tutorial notebook we explain functions of `IllinoisGRMHD` which are called for various purposes. This means that this notebook does not have a specific "theme". We will cover functions whose purposes vary from a simple optimization when squaring numbers to computing minimum and maximum characteristic speeds at cell interfaces. We have tried our best to keep this tutorial module as independent from the others as possible. When new concepts appear, we offer useful references. The mathematical requirements of each function are also covered in great detail. Step 2: `pow` \[Back to [top](toc)\]$$\label{pow}$$This is an extremely simple function which checks whether or not we are trying to square a number before calling C's `pow()` function. This is because in C it is computationally quicker to do `x*x` than to use the function call `pow(x,2)`. Notice that we also use the "function" `SQR()`, which is declared in `IllinoisGRMHD_headers.h` and is defined as
```c
#define SQR(x) ( (x) * (x) )
```
Step 3: `find_cp_cm` \[Back to [top](toc)\]$$\label{find_cp_cm}$$We will now explain the inlined function `find_cp_cm`. Keep in mind that this function depends on the function `compute_v02`, [which is implemented below](compute_v02). This function is called with the objective of computing the minimum ($-$) and maximum ($+$) characteristic speeds at each cell interface, $c_{\pm}^{r,l}$. We approximate the general GRMHD dispersion relation (eq. 27 of [Gammie & McKinney (2003)](https://arxiv.org/pdf/astro-ph/0301509.pdf)) by the simpler expression$$\omega_{\rm cm}^{2} = \left[v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)\right]k_{\rm cm}^{2}\ ,$$where $\omega_{\rm cm}=-k_{\mu}u^{\mu}$ is the frequency and $k_{\rm cm}^{2} = K_{\mu}K^{\mu}$ the wavenumber of an MHD wave mode in the frame comoving with the fluid, where $K_{\mu}$ is defined as the projection of the wave vector $k^{\nu}$ onto the direction normal to $u^{\nu}$: $K_{\mu} = \left(g_{\mu\nu}+u_{\mu}u_{\nu}\right)k^{\nu}$. $c_{\rm s}$ is the sound speed, and $v_{\rm A}$ is the Alfvén speed, given by$$v_{\rm A} = \sqrt{\frac{b^{2}}{\rho_{b}h + b^{2}}}\ .$$With these definitions, we may then solve the approximate dispersion relation above along direction $i$, noting that in the comoving frame $k_{\mu} = \left(-\omega,k_{j}\delta^{j}_{\ i}\right)$ and the wave (phase) velocity is $c_{\pm} = \left.\omega\middle/\left(k_{j}\delta^{j}_{\ i}\right)\right.$. The dispersion can then be written as a quadratic equation for $c_{\pm}$:$$ac_{\pm}^{2} + bc_{\pm} + c = 0\ ,$$with$$\boxed{\begin{align}a &= \left(1-v_{0}^{2}\right)\left(u^{0}\right)^{2} - v_{0}^{2}g^{00}\ ,\\b &= 2v_{0}^{2}g^{i0} - 2u^{i}u^{0}\left(1-v^{2}_{0}\right)\ ,\\c &= \left(1-v_{0}^{2}\right)\left(u^{i}\right)^{2} - v_{0}^{2}g^{ii}\ ,\\v_{0}^{2} &= v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)\ ,\\c_{\rm s} &= \left.\left[\frac{dP_{\rm cold}}{d\rho_{b}} + \Gamma_{\rm th}\left(\Gamma_{\rm th}-1\right)\epsilon_{\rm th}\right]\middle/h\right.\ ,\\c_{+} &= \max\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ ,\\c_{-} &= \min\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ .\end{align}}$$For the implementation of $v_{0}^{2}$, please see [Step 4 below](compute_v02).
###Code
%%writefile $outfile_path__inlined_functions__C
static inline void find_cp_cm(CCTK_REAL &cplus,CCTK_REAL &cminus,CCTK_REAL v02,CCTK_REAL u0,
CCTK_REAL vi,CCTK_REAL ONE_OVER_LAPSE_SQUARED,CCTK_REAL shifti,CCTK_REAL psim4,CCTK_REAL gupii) {
// This computes phase speeds in the direction given by flux_dirn.
// Note that we replace the full dispersion relation with a simpler
// one, which overestimates the max. speeds by a factor of ~2.
// See full discussion around Eqs. 49 and 50 in
// http://arxiv.org/pdf/astro-ph/0503420.pdf .
// What follows is a complete derivation of the quadratic we solve.
// wcm = (-k_0 u0 - k_x ux)
// kcm^2 = K_{\mu} K^{\mu},
// K_{\mu} K^{\mu} = (g_{\mu a} + u_{\mu} u_a) k^a * g^{\mu b} [ (g_{c b} + u_c u_b) k^c ]
// --> g^{\mu b} (g_{c b} + u_{c} u_{b}) k^c = (\delta^{\mu}_c + u_c u^{\mu} ) k^c
// = (g_{\mu a} + u_{\mu} u_a) k^a * (\delta^{\mu}_c + u_c u^{\mu} ) k^c
// =[(g_{\mu a} + u_{\mu} u_a) \delta^{\mu}_c + (g_{\mu a} + u_{\mu} u_a) u_c u^{\mu} ] k^c k^a
// =[(g_{c a} + u_c u_a) + (u_c u_a - u_a u_c] k^c k^a
// =(g_{c a} + u_c u_a) k^c k^a
// = k_a k^a + u^c u^a k_c k_a
// k^a = g^{\mu a} k_{\mu} = g^{0 a} k_0 + g^{x a} k_x
// k_a k^a = k_0 g^{0 0} k_0 + k_x k_0 g^{0 x} + g^{x 0} k_0 k_x + g^{x x} k_x k_x
// = g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2
// u^c u^a k_c k_a = (u^0 k_0 + u^x k_x) (u^0 k_0 + u^x k_x) = (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2
// (k_0 u0)^2 + 2 k_x ux k_0 u0 + (k_x ux)^2 = v02 [ (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2 + g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2]
// (1-v02) (u^0 k_0 + u^x k_x)^2 = v02 (g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2)
// (1-v02) (u^0 k_0/k_x + u^x)^2 = v02 (g^{00} (k_0/k_x)^2 + 2 g^{x0} k_0/k_x + g^{xx})
// (1-v02) (u^0 X + u^x)^2 = v02 (g^{00} X^2 + 2 g^{x0} X + g^{xx})
// (1-v02) (u0^2 X^2 + 2 ux u0 X + ux^2) = v02 (g^{00} X^2 + 2 g^{x0} X + g^{xx})
// X^2 ( (1-v02) u0^2 - v02 g^{00}) + X (2 ux u0 (1-v02) - 2 v02 g^{x0}) + (1-v02) ux^2 - v02 g^{xx}
// a = (1-v02) u0^2 - v02 g^{00} = (1-v02) u0^2 + v02/lapse^2 <-- VERIFIED
// b = 2 ux u0 (1-v02) - 2 v02 shiftx/lapse^2 <-- VERIFIED, X->-X, because X = -w/k_1, and we are solving for -X.
// c = (1-v02) ux^2 - v02 (gupxx*psim4 - (shiftx/lapse)^2) <-- VERIFIED
// v02 = v_A^2 + c_s^2 (1 - v_A^2)
CCTK_REAL u0_SQUARED=SQR(u0);
###Output
Writing ../src/inlined_functions.C
###Markdown
We start by setting$$\boxed{\begin{align}a &= \left(1-v_{0}^{2}\right)\left(u^{0}\right)^{2} - v_{0}^{2}g^{00}\\b &= 2v_{0}^{2}g^{i0} - 2u^{i}u^{0}\left(1-v^{2}_{0}\right)\\c &= \left(1-v_{0}^{2}\right)\left(u^{i}\right)^{2} - v_{0}^{2}g^{ii}\end{align}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
//Find cplus, cminus:
CCTK_REAL a = u0_SQUARED * (1.0-v02) + v02*ONE_OVER_LAPSE_SQUARED;
CCTK_REAL b = 2.0* ( shifti*ONE_OVER_LAPSE_SQUARED * v02 - u0_SQUARED * vi * (1.0-v02) );
CCTK_REAL c = u0_SQUARED*SQR(vi) * (1.0-v02) - v02 * ( psim4*gupii -
SQR(shifti)*ONE_OVER_LAPSE_SQUARED);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we find the minimum ($-$) and maximum ($+$) characteristic speeds$$\boxed{\begin{align}c_{+} &= \max\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ ,\\c_{-} &= \min\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ .\end{align}}$$
###Code
%%writefile -a $IGM_src_dir_path/inlined_functions.C
CCTK_REAL detm = b*b - 4.0*a*c;
//ORIGINAL LINE OF CODE:
//if(detm < 0.0) detm = 0.0;
//New line of code (without the if() statement) has the same effect:
detm = sqrt(0.5*(detm + fabs(detm))); /* Based on very nice suggestion from Roland Haas */
cplus = 0.5*(detm-b)/a;
cminus = -0.5*(detm+b)/a;
if (cplus < cminus) {
CCTK_REAL cp = cminus;
cminus = cplus;
cplus = cp;
}
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 4: `compute_v02` \[Back to [top](toc)\]$$\label{compute_v02}$$This function is used to evaluate $v_{0}^{2}$, a quantity necessary for the computation of the minimum and maximum characteristic speeds at each cell interface, $c_{\pm}^{r,l}$. For more information on this procedure, please see the [implementation of the `find_cp_cm` function in Step 3](find_cp_cm).We start with the sound speed:$$\boxed{c_{\rm s} = \left.\left[\frac{dP_{\rm cold}}{d\rho_{b}} + \Gamma_{\rm th}\left(\Gamma_{\rm th}-1\right)\epsilon_{\rm th}\right]\middle/h\right.}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL Gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
if(U[RHOB]<=0) { v02L=1.0; return; }
/* c_s = sound speed = (dP_c/drho + \Gamma(\Gamma-1) \epsilon_th)/h */
CCTK_REAL c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th-1.0)*eps_th)/(h);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Next we compute the square of the Alfvén speed, $v_{\rm A}$, which is given by$$\boxed{v_{\rm A}^{2} = \frac{b^{2}}{\rho_{b}h + b^{2}}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
/* v_A = Alfven speed = sqrt( b^2/(rho0 h + b^2) ) */
CCTK_REAL v_A_squared = smallb[SMALLB2]/(smallb[SMALLB2] + U[RHOB]*(h));
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, $v_{0}$ is related to the sound speed and the Alfvén speed via$$\boxed{v_{0}^{2} = v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
v02L = v_A_squared + c_s_squared*(1.0-v_A_squared);
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 5.e: `font_fix__rhob_loop` \[Back to [top](toc)\]$$\label{compute_p_cold__eps_cold}$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
/* Function : font_fix__rhob_loop()
* Authors : Leo Werneck
* Description : Determines rhob using the font fix prescription
* Dependencies: find_polytropic_K_and_Gamma_index()
* : compute_P_cold__eps_cold()
* Reference : Etienne et al. (2011) [https://arxiv.org/pdf/1112.0568.pdf]
*
* Inputs : maxits - maximum number of iterations allowed
* : tol - font fix tolerance
* : W - See eq. (A26)
* : Sf2 - S_{fluid}^{2}, see eq. (A24)
* : Psim6 - This is equal to sqrt(\gamma)
* : sdots - \tilde{S}_{\mu}\tilde{S}^{\mu}
* : BbardotS2 - (\bar{B}^{\mu}S_{\mu})^{2},
* : B2bar - \bar{B}^{2}, see eq. (A28)
* : CONSERVS - Array of conservative variables
* : eos - Struct of EOS parameters
* : rhob_in - Initial value of rhob
* : rhob_out - Output variable
*
* Outputs : rhob_out - Updated value of rhob
* : return value: 0 - Font fix worked
* : return value: 1 - Font fix failed
*/
inline int font_fix__rhob_loop( int maxits, CCTK_REAL tol,
CCTK_REAL W, CCTK_REAL Sf2, CCTK_REAL Psim6, CCTK_REAL sdots, CCTK_REAL BbardotS2, CCTK_REAL B2bar,
CCTK_REAL *CONSERVS,
eos_struct eos, CCTK_REAL rhob_in, CCTK_REAL &rhob_out ) {
/* Declare basic variables */
bool fontcheck=true;
int itcount = 0, j0, j1;
CCTK_REAL W0, Sf20, rhob0, rhob1, h, P_cold, eps_cold;
//////////////////////
// OUTER LOOP START //
//////////////////////
while(fontcheck && itcount < maxits) {
/* Set variables to their input values */
itcount++;
W0 = W;
Sf20 = Sf2;
rhob1 = rhob_in;
/* Based on rhob_in (i.e. rhob1), determine the
* polytropic index j1
*/
j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
//////////////////////
// INNER LOOP START //
//////////////////////
do {
/* Set rhob0/j0 to be equal to the rhob/j used
* in the previous iteration, i.e. rhob1/j1.
*/
rhob0 = rhob1;
j0 = j1;
/* Compute h using h_cold and our polytropic EOS
* .------------------------------------------.
* | h = h_cold = 1 + eps_cold + P_cold/rhob. |
* .------------------------------------------.
*/
compute_P_cold__eps_cold(eos,rhob0, P_cold, eps_cold);
h = 1.0 + eps_cold + P_cold/rhob0;
/* Update rhob using eq. (A62) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .---------------------------------------------------------------------------.
* | rhob = rho_star * Psi^{-6} / sqrt( 1 + S_fluid^{2}/( (rho_star*h)^{2} ) ) |
* .---------------------------------------------------------------------------.
*/
rhob1 = CONSERVS[RHOSTAR]*Psim6/sqrt(1.0+Sf20/SQR(CONSERVS[RHOSTAR]*h));
/* Update j1 */
j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
} while( fabs(rhob1-rhob0) > rhob1*tol || j1 != j0);
//////////////////////
// INNER LOOP END //
//////////////////////
/* Output the last value of rhob */
rhob_out = rhob1;
/* Perform physical checks on the variables
* and output the last value of h obtained
*/
compute_P_cold__eps_cold(eos,rhob_out, P_cold, eps_cold);
h = 1.0 + eps_cold + P_cold/rhob_out;
/* Set W based on eq. (A60) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .-------------------------------------------------------.
* | W = psi^{-6} * sqrt( S_fluid^{2} + (rho_star*h)^{2} ) |
* .-------------------------------------------------------.
*/
W = sqrt( Sf20 + SQR(CONSERVS[RHOSTAR]*h))*Psim6;
/* Then update S_{fluid}^{2} using eq. (A61) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .---------------------------------------------------------------------------.
* | S_fluid^{2} = ( W^{2}*S^{2} + (B.S)^2*(B^{2} + 2W) )/( ( W + B^{2} )^{2} )|
* .---------------------------------------------------------------------------.
*/
Sf2 = (SQR(W)*sdots + BbardotS2*(B2bar + 2.0*W))/SQR(W+B2bar);
if ( fabs(W-W0) < W*tol && fabs(Sf20-Sf2) < Sf2*tol) fontcheck=false;
}
//////////////////////
// OUTER LOOP END //
//////////////////////
/* If the code converged before the max
* number of iterations were exceeded,
* return 0, otherwise return 1.
*/
if(fontcheck || itcount >= maxits) {
return 1;
}
else {
return 0;
}
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 6: `lower_4vector_output_spatial_part` \[Back to [top](toc)\]$$\label{lower_4vector_output_spatial_part}$$This function is used to lower the indices of the spatial components of 4-vectors, $b^{\mu}$. Consider$$\begin{align}b_{i} &= g_{i\mu}b^{\mu} \\ &= g_{i0}b^{0} + g_{ij}b^{j} \\ &= \left(\gamma_{ij}\beta^{j}\right)b^{0} + \gamma_{ij}b^{j} \\ &= \gamma_{ij}\left(b^{j} + \beta^{j}b^{0}\right)\ ,\end{align}$$or, using the conformal metric and each component separately$$\boxed{\begin{align}b_{x} &= \psi^{4}\left[\bar{\gamma}_{xx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{xy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{xz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\\b_{y} &= \psi^{4}\left[\bar{\gamma}_{yx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{yy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{yz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\\b_{z} &= \psi^{4}\left[\bar{\gamma}_{zx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{zy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{zz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\end{align}}\ .$$
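Note that, as its name suggests, this function outputs only the spatial components $b_{i}$. Should the time component ever be needed, it follows from the same decomposition: $b_{0} = g_{0\mu}b^{\mu} = \left(-\alpha^{2}+\gamma_{ij}\beta^{i}\beta^{j}\right)b^{0} + \gamma_{ij}\beta^{j}b^{i} = -\alpha^{2}b^{0} + \beta^{i}b_{i}$, with $b_{i}$ exactly as computed below.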
###Code
%%writefile -a $outfile_path__inlined_functions__C
// b_x = g_{\mu x} b^{\mu}
// = g_{t x} b^t + g_{i x} b^i
// = b^t gamma_{xj} beta^j + gamma_{ix} b^i
// = gamma_{xj} (b^j + beta^j b^t)
static inline void lower_4vector_output_spatial_part(CCTK_REAL psi4,CCTK_REAL *METRIC,CCTK_REAL *smallb, CCTK_REAL *smallb_lower) {
smallb_lower[SMALLBX] = psi4*( METRIC[GXX]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GXY]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GXZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
smallb_lower[SMALLBY] = psi4*( METRIC[GXY]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GYY]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GYZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
smallb_lower[SMALLBZ] = psi4*( METRIC[GXZ]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GYZ]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GZZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 7: `impose_speed_limit_output_u0` \[Back to [top](toc)\]$$\label{impose_speed_limit_output_u0}$$We now call upon the `impose_speed_limit_output_u0()` function inside the `inlined_functions.C` code file of `IllinoisGRMHD`. The basic algorithm performed by this function is summarized here. We start by evaluating the quantity$$\begin{align}{\rm one\_minus\_one\_over\_alpha\_u0\_squared} \equiv A &= \gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right)\\&= \frac{\gamma_{ij}}{\alpha^{2}}\left[\frac{\gamma^{ik}u_{k}}{u^{0}} - \beta^{i} + \beta^{i}\right]\left[\frac{\gamma^{j\ell}u_{\ell}}{u^{0}} - \beta^{j} + \beta^{j}\right]\\&=\frac{\gamma_{ij}u^{i}u^{j}}{\left(\alpha u^{0}\right)^{2}}\\&=\frac{\left(\alpha u^{0}\right)^{2}-1}{\left(\alpha u^{0}\right)^{2}}\\&=1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}\ \\\implies \boxed{A = 1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}}\ ,\end{align}$$where when going from line 1 to 2 and from line 3 to 4 we have used eqs. (53) and (56) from [Duez *et al.*](https://arxiv.org/pdf/astro-ph/0503420.pdf), respectively. Keep in mind that the equation we are going to implement below is$$\boxed{{\rm one\_minus\_one\_over\_alpha\_u0\_squared} = \gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right)}\ ,$$but it is important to know that this equation also equals $A$ above.
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void impose_speed_limit_output_u0(CCTK_REAL *METRIC,CCTK_REAL *U,CCTK_REAL psi4,CCTK_REAL ONE_OVER_LAPSE,output_stats &stats, CCTK_REAL &u0_out) {
#ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
DECLARE_CCTK_PARAMETERS;
#endif
// Derivation of first equation:
// \gamma_{ij} (v^i + \beta^i)(v^j + \beta^j)/(\alpha)^2
// = \gamma_{ij} 1/(u^0)^2 ( \gamma^{ik} u_k \gamma^{jl} u_l /(\alpha)^2 <- Using Eq. 53 of arXiv:astro-ph/0503420
// = 1/(u^0 \alpha)^2 u_j u_l \gamma^{jl} <- Since \gamma_{ij} \gamma^{ik} = \delta^k_j
// = 1/(u^0 \alpha)^2 ( (u^0 \alpha)^2 - 1 ) <- Using Eq. 56 of arXiv:astro-ph/0503420
// = 1 - 1/(u^0 \alpha)^2 <= 1
CCTK_REAL one_minus_one_over_alpha_u0_squared = psi4*(METRIC[GXX]* SQR(U[VX] + METRIC[SHIFTX]) +
2.0*METRIC[GXY]*(U[VX] + METRIC[SHIFTX])*(U[VY] + METRIC[SHIFTY]) +
2.0*METRIC[GXZ]*(U[VX] + METRIC[SHIFTX])*(U[VZ] + METRIC[SHIFTZ]) +
METRIC[GYY]* SQR(U[VY] + METRIC[SHIFTY]) +
2.0*METRIC[GYZ]*(U[VY] + METRIC[SHIFTY])*(U[VZ] + METRIC[SHIFTZ]) +
METRIC[GZZ]* SQR(U[VZ] + METRIC[SHIFTZ]) )*SQR(ONE_OVER_LAPSE);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we construct the "speed limit quantity"$${\rm ONE\_MINUS\_ONE\_OVER\_GAMMA\_SPEED\_LIMIT\_SQUARED} \equiv B = 1-\frac{1}{\gamma^{2}_{\rm speed\ limit}}\ .$$If $A > B$, then we construct the correction factor $C\equiv \sqrt{B / A}$, and adjust the velocities using$$\boxed{v^{i} \to \left(v^{i}+\beta^{i}\right)C - \beta^{i}}\ ,$$which rescales the velocities so that $A$ is brought down to exactly $B$.
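To make the limiter concrete, here is a minimal standalone sketch in flat spacetime (not the `IllinoisGRMHD` routine itself; the limit $\gamma_{\rm speed\ limit}=10$ is just an example value). With $\alpha=1$, $\beta^{i}=0$ and $\gamma_{ij}=\delta_{ij}$, the quantity $A$ reduces to $v^{2}$ and $\alpha u^{0}$ to the Lorentz factor:

```c
#include <math.h>
#include <stdio.h>

/* Standalone sketch of the limiter in flat spacetime (alpha = 1, beta^i = 0,
 * gamma_ij = delta_ij): A reduces to v^2 and alpha*u^0 to the Lorentz factor.
 * The speed-limit value below is illustrative, not an IllinoisGRMHD default. */
int main() {
  const double GAMMA_SPEED_LIMIT = 10.0;
  const double B = 1.0 - 1.0/(GAMMA_SPEED_LIMIT*GAMMA_SPEED_LIMIT);
  double v[3] = {0.999, 0.0, 0.0};              /* example speed above the limit */

  double A = v[0]*v[0] + v[1]*v[1] + v[2]*v[2]; /* = gamma_ij v^i v^j here       */
  if(A > B) {
    const double C = sqrt(B/A);                 /* correction factor             */
    for(int i=0;i<3;i++) v[i] *= C;
    A = B;                                      /* now saturated at the limit    */
  }
  const double alpha_u0 = 1.0/sqrt(1.0 - A);    /* = Lorentz factor W            */
  printf("limited v_x = %.15e, alpha*u^0 = %.15e\n", v[0], alpha_u0);
  return 0;
}
```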
###Code
%%writefile -a $outfile_path__inlined_functions__C
/*** Limit velocity to GAMMA_SPEED_LIMIT ***/
const CCTK_REAL ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED = 1.0-1.0/SQR(GAMMA_SPEED_LIMIT);
if(one_minus_one_over_alpha_u0_squared > ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED) {
CCTK_REAL correction_fac = sqrt(ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED/one_minus_one_over_alpha_u0_squared);
U[VX] = (U[VX] + METRIC[SHIFTX])*correction_fac-METRIC[SHIFTX];
U[VY] = (U[VY] + METRIC[SHIFTY])*correction_fac-METRIC[SHIFTY];
U[VZ] = (U[VZ] + METRIC[SHIFTZ])*correction_fac-METRIC[SHIFTZ];
one_minus_one_over_alpha_u0_squared=ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED;
stats.failure_checker+=1000;
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, since $A$ is evaluated using the first line above, namely$$\gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right) = A = 1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}\ ,$$we can then compute $u^{0}$ by simply doing$$\boxed{u^{0} = \frac{1}{\alpha\sqrt{1-A}}}\ .$$
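For example, if the limiter above has just been triggered, then $A = 1-1/\gamma^{2}_{\rm speed\ limit}$ and the boxed formula gives $u^{0} = \gamma_{\rm speed\ limit}/\alpha$: the stored $u^{0}$ saturates exactly at the imposed Lorentz-factor limit.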
###Code
%%writefile -a $outfile_path__inlined_functions__C
// A = 1.0-one_minus_one_over_alpha_u0_squared = 1-(1-1/(al u0)^2) = 1/(al u0)^2
// 1/sqrt(A) = al u0
//CCTK_REAL alpha_u0_minus_one = 1.0/sqrt(1.0-one_minus_one_over_alpha_u0_squared)-1.0;
//u0_out = (alpha_u0_minus_one + 1.0)*ONE_OVER_LAPSE;
CCTK_REAL alpha_u0 = 1.0/sqrt(1.0-one_minus_one_over_alpha_u0_squared);
if(std::isnan(alpha_u0*ONE_OVER_LAPSE)) printf("BAD FOUND NAN U0 CALC: %.15e %.15e %.15e | %.15e %.15e\n",alpha_u0,ONE_OVER_LAPSE,one_minus_one_over_alpha_u0_squared,psi4, U[VX]);
u0_out = alpha_u0*ONE_OVER_LAPSE;
}
// The two lines of code below are written to reduce roundoff error and were in the above function. I don't think they reduce error.
// one_over_alpha_u0 = sqrt(1.0-one_minus_one_over_alpha_u0_squared);
/* Proof of following line: */
/* [ 1-1/(alphau0)^2 ] / [ 1/(alphau0) (1 + 1/(alphau0)) ] */
/* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ 1/(alphau0) + 1/(alphau0)^2 ] */
/* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ (alphau0 + 1)/(alphau0)^2 ] */
/* = [ (alphau0)^2 - 1) ] / [ (alphau0 + 1) ] */
/* [ (alphau0 + 1) (alphau0 - 1) ] / [ (alphau0 + 1) ] */
/* = alphau0 - 1 */
//alpha_u0_minus_one = one_minus_one_over_alpha_u0_squared/one_over_alpha_u0/(1.0+one_over_alpha_u0);
//u0_out = (alpha_u0_minus_one+1.0)*ONE_OVER_LAPSE;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 8: `enforce_pressure_floor_ceiling` \[Back to [top](toc)\]$$\label{enforce_pressure_floor_ceiling}$$After the Newton-Raphson solver has successfully found a set of primitives, the primitives are checked for physicality, and if they are not in the physical range, they are minimally modified until they return to the physical range. First, if the velocity is found to be superluminal, the speed is reduced to `IllinoisGRMHD`’s default Lorentz factor limit, a procedure which we already explained above when we discussed the `impose_speed_limit_output_u0` function. Next, `IllinoisGRMHD` does not include any cooling mechanism, which means that for evolutions adopting a $\Gamma$-law equation of state, the pressure should not physically drop below $P_{\rm cold}$. So a pressure floor of $0.9P_{\rm cold}$ is imposed. Increasing this floor to $P_{\rm cold}$ exactly results in large central density drifts in TOV star evolutions. **NOTE**: Please keep in mind that the floor and ceiling values presented here were found ***empirically***.
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void enforce_pressure_floor_ceiling(output_stats &stats,CCTK_REAL kpoly,CCTK_REAL P_cold,CCTK_REAL Psi6,const CCTK_REAL Psi6threshold,CCTK_REAL rho_b,const CCTK_REAL rhobatm, CCTK_REAL &P) {
CCTK_REAL P_min=0.9*P_cold;
if(P<P_min) {
stats.failure_checker+=10;
P=P_min;
}
//MAX(P,P_min);
//if(P < P_min) P=1.0*P_cold;
/* OLD: Discarded because lower limit is unphysical.
if(P <= 0.5*kpoly*P_cold) {
P=0.5*kpoly*P_cold;
}
*/
###Output
Appending to ../src/inlined_functions.C
###Markdown
Simulations can crash in the other extreme, if $P/P_{\rm cold}$ becomes too large. This typically only happens in very low density regions or inside black holes. So at densities $\rho_{b}<100\rho_{\rm atm}$ or deep inside black hole horizons, a ceiling on $P$ of $100P_{\rm cold}$ is enforced (see Appendix A of [Etienne *et al.* (2012)](https://arxiv.org/abs/1112.0568) for more details).We also introduce a parameter, $\psi^{6}_{\rm threshold}$, which determines whether the region under consideration is deep inside the BH horizon or not. For regions deep inside the BH horizon, defined by $\sqrt{\gamma} = \psi^{6} > \psi^{6}_{\rm threshold}$, the primary goal is to keep the evolution stable and prevent inaccurate data from leaking out of the BH horizon. It was determined that in this situation, a better ceiling on $P$ is $10^{5}P_{\rm cold}$.
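Putting the floor from the previous cell and the ceilings described here together, the combined logic can be sketched as a small standalone program (all numbers, including the $\psi^{6}$ threshold, are illustrative and not `IllinoisGRMHD` defaults):

```c
#include <stdio.h>

/* Standalone sketch of the pressure floor/ceiling logic described above.
 * All numerical inputs (including Psi6threshold) are illustrative only.   */
int main() {
  const double P_cold = 1.0e-8, rhobatm = 1.0e-10, Psi6threshold = 60.0;
  double P = 5.0e-6, rho_b = 1.0e-9, Psi6 = 1.0;  /* low density, outside horizon */

  const double P_min = 0.9*P_cold;                /* floor of 0.9*P_cold          */
  if(P < P_min) P = P_min;

  double P_max = 100.0*P_cold;                    /* generic ceiling              */
  if(Psi6 > Psi6threshold) P_max = 1e5*P_cold;    /* deep inside the BH horizon   */
  if((rho_b < 100.0*rhobatm || Psi6 > Psi6threshold) && P > P_max) P = P_max;

  printf("P after floor/ceiling = %.15e\n", P);   /* here: capped at 100*P_cold   */
  return 0;
}
```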
###Code
%%writefile -a $outfile_path__inlined_functions__C
//CCTK_REAL P_max = 10.0*P_cold;
CCTK_REAL P_max = 100.0*P_cold;
if(Psi6 > Psi6threshold) P_max = 1e5*P_cold; // <-- better than 10.
if((rho_b < 100.0*rhobatm || Psi6 > Psi6threshold) && P>P_max) {
P=P_max;
stats.failure_checker+=100;
}
/*
CCTK_REAL rho_horiz_cap = 1000.0*rhobatm;
//New density damping mechanism inside the horizon
if(Psi6 > Psi6threshold && rho_b>rho_horiz_cap) {
CCTK_REAL six_phi=log(Psi6);
CCTK_REAL six_phithreshold=log(Psi6threshold);
CCTK_REAL Psi6max_approx=350000;
rho_b = rho_horiz_cap+(rho_b-rho_horiz_cap)*exp(-200.0*SQR((six_phi-six_phithreshold)/log(Psi6max_approx)));
}
*/
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 9: `compute_smallba_b2_and_u_i_over_u0_psi4` \[Back to [top](toc)\]$$\label{compute_smallba_b2_and_u_i_over_u0_psi4}$$In this inlined function we will compute quantities related to the magnetic field measured in the comoving fluid frame, $b^{\mu}$.We will need the following identities$$\begin{align}v^{i} &= \frac{u^{i}}{u^{0}}\ ,\\B^{0}_{(u)} &= \frac{u_{i}B^{i}}{\alpha}\ ,\\B^{i}_{(u)} &= \frac{1}{u^{0}}\left(\frac{B^{i}}{\alpha} + u^{i}B^{0}_{(u)}\right)\ ,\\b^{\mu} &= \frac{B^{\mu}_{(u)}}{\sqrt{4\pi}}\ .\end{align}$$We start by setting the relation$$b^{0} = \frac{u_{i}B^{i}}{\alpha\sqrt{4\pi}} \implies \boxed{\alpha\sqrt{4\pi}b^{0} = u_{i}B^{i}}\ .$$
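Before appending the function, here is a minimal standalone consistency check (illustrative numbers, flat spacetime only: $\alpha=1$, zero shift, $\gamma_{ij}=\delta_{ij}$) that the identities above imply the expected orthogonality $b^{\mu}u_{\mu}=0$:

```c
#include <math.h>
#include <stdio.h>

/* Standalone check of the identities above in flat spacetime (alpha = 1,
 * zero shift, gamma_ij = delta_ij): build b^mu from v^i and B^i and verify
 * the expected orthogonality b^mu u_mu = 0.  Input values are illustrative. */
int main() {
  const double sqrt4pi = sqrt(4.0*acos(-1.0));
  const double v[3] = {0.30, -0.10, 0.20};        /* example v^i = u^i/u^0       */
  const double B[3] = {1.0e-2, 3.0e-3, -4.0e-3};  /* example B^i                 */

  const double v2 = v[0]*v[0] + v[1]*v[1] + v[2]*v[2];
  const double W  = 1.0/sqrt(1.0 - v2);           /* u^0 in flat spacetime       */
  const double uidotBi = W*(v[0]*B[0] + v[1]*B[1] + v[2]*B[2]);  /* u_i B^i      */

  const double bt = uidotBi/sqrt4pi;              /* b^0 = u_i B^i/(alpha sqrt(4pi)) */
  double bdotu = -bt*W;                           /* g_{00} b^0 u^0 = -b^0 u^0   */
  for(int i=0;i<3;i++) {
    const double bi = (B[i] + uidotBi*W*v[i])/(W*sqrt4pi);  /* b^i identity above */
    bdotu += bi*W*v[i];                           /* + b^i u_i, with u_i = W v^i */
  }
  printf("b^mu u_mu = %.3e (vanishes to roundoff)\n", bdotu);
  return 0;
}
```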
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void compute_smallba_b2_and_u_i_over_u0_psi4(CCTK_REAL *METRIC,CCTK_REAL *METRIC_LAP_PSI4,CCTK_REAL *U,CCTK_REAL u0L,CCTK_REAL ONE_OVER_LAPSE_SQRT_4PI,
CCTK_REAL &u_x_over_u0_psi4,CCTK_REAL &u_y_over_u0_psi4,CCTK_REAL &u_z_over_u0_psi4,CCTK_REAL *smallb) {
// NOW COMPUTE b^{\mu} and b^2 = b^{\mu} b^{\nu} g_{\mu \nu}
CCTK_REAL ONE_OVER_U0 = 1.0/u0L;
CCTK_REAL shiftx_plus_vx = (METRIC[SHIFTX]+U[VX]);
CCTK_REAL shifty_plus_vy = (METRIC[SHIFTY]+U[VY]);
CCTK_REAL shiftz_plus_vz = (METRIC[SHIFTZ]+U[VZ]);
// Eq. 56 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// u_i = gamma_{ij} u^0 (v^j + beta^j), gamma_{ij} is the physical metric, and gamma_{ij} = Psi4 * METRIC[Gij], since METRIC[Gij] is the conformal metric.
u_x_over_u0_psi4 = METRIC[GXX]*shiftx_plus_vx + METRIC[GXY]*shifty_plus_vy + METRIC[GXZ]*shiftz_plus_vz;
u_y_over_u0_psi4 = METRIC[GXY]*shiftx_plus_vx + METRIC[GYY]*shifty_plus_vy + METRIC[GYZ]*shiftz_plus_vz;
u_z_over_u0_psi4 = METRIC[GXZ]*shiftx_plus_vx + METRIC[GYZ]*shifty_plus_vy + METRIC[GZZ]*shiftz_plus_vz;
// Eqs. 23 and 31 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// Compute alpha sqrt(4 pi) b^t = u_i B^i
CCTK_REAL alpha_sqrt_4pi_bt = ( u_x_over_u0_psi4*U[BX_CENTER] + u_y_over_u0_psi4*U[BY_CENTER] + u_z_over_u0_psi4*U[BZ_CENTER] ) * METRIC_LAP_PSI4[PSI4]*u0L;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we compute$$\begin{align}b^{i} &= \frac{B^{i}_{(u)}}{\sqrt{4\pi}}\\ &= \frac{1}{u^{0}\sqrt{4\pi}}\left(\frac{B^{i}}{\alpha} + B^{0}_{(u)}u^{i}\right)\\ &= \frac{1}{u^{0}\sqrt{4\pi}}\left(\frac{B^{i}}{\alpha} + \sqrt{4\pi}b^{0}u^{i}\right)\\ &= \frac{1}{\alpha\sqrt{4\pi}}\left(\frac{B^{i}}{u^{0}} + \alpha\sqrt{4\pi}b^{0}\frac{u^{i}}{u^{0}}\right)\\\implies &\boxed{b^{i} = \frac{1}{\alpha\sqrt{4\pi}}\left(\frac{B^{i}}{u^{0}} + \alpha\sqrt{4\pi}b^{0}v^{i}\right)}\ .\end{align}$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// Eq. 24 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// b^i = B^i_u / sqrt(4 pi)
// b^i = ( B^i/alpha + B^0_u u^i ) / ( u^0 sqrt(4 pi) )
// b^i = ( B^i/alpha + sqrt(4 pi) b^t u^i ) / ( u^0 sqrt(4 pi) )
// b^i = ( B^i + alpha sqrt(4 pi) b^t u^i ) / ( alpha u^0 sqrt(4 pi) )
// b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^t u^i/u^0 ) / ( alpha sqrt(4 pi) )
// b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^t v^i ) / ( alpha sqrt(4 pi) )
smallb[SMALLBX] = (U[BX_CENTER]*ONE_OVER_U0 + U[VX]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
smallb[SMALLBY] = (U[BY_CENTER]*ONE_OVER_U0 + U[VY]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
smallb[SMALLBZ] = (U[BZ_CENTER]*ONE_OVER_U0 + U[VZ]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
// Eq. 23 in http://arxiv.org/pdf/astro-ph/0503420.pdf, with alpha sqrt (4 pi) b^2 = u_i B^i already computed above
smallb[SMALLBT] = alpha_sqrt_4pi_bt * ONE_OVER_LAPSE_SQRT_4PI;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, we compute$$\begin{align}b^{2} &= g_{\mu\nu}b^{\mu}b^{\nu}\\ &= g_{00}\left(b^{0}\right)^{2} + g_{ij}b^{i}b^{j} + 2g_{0i}b^{0}b^{i}\\ &= \left(-\alpha^{2} + \gamma_{ij}\beta^{i}\beta^{j}\right)\left(b^{0}\right)^{2} + \gamma_{ij}b^{i}b^{j} + 2b^{0}\gamma_{ij}\beta^{j}b^{i}\\ &= -\left(\alpha b^{0}\right)^{2} + \gamma_{ij}\left[b^{i}b^{j} + 2b^{0}b^{i}\beta^{j} + \left(b^{0}\right)^{2}\beta^{i}\beta^{j}\right]\\\implies &\boxed{b^{2} = -\left(\alpha b^{0}\right)^{2} + \gamma_{ij}\left(b^{i} + b^{0}\beta^{i}\right)\left(b^{j} + b^{0}\beta^{j}\right)}\end{align}$$
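As a consistency check, in flat spacetime ($\alpha=1$, $\beta^{i}=0$, $\gamma_{ij}=\delta_{ij}$) inserting the identities from the beginning of this step into the boxed expression yields the familiar special-relativistic result$$b^{2} = \frac{1}{4\pi}\left[\frac{B^{2}}{\left(u^{0}\right)^{2}} + \left(v^{i}B^{i}\right)^{2}\right]\ ,$$with $B^{2}=\delta_{ij}B^{i}B^{j}$.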
###Code
%%writefile -a $outfile_path__inlined_functions__C
// b^2 = g_{\mu \nu} b^{\mu} b^{\nu}
// = gtt bt^2 + gxx bx^2 + gyy by^2 + gzz bz^2 + 2 (gtx bt bx + gty bt by + gtz bt bz + gxy bx by + gxz bx bz + gyz by bz)
// = (-al^2 + gamma_{ij} betai betaj) bt^2 + b^i b^j gamma_{ij} + 2 g_{t i} b^t b^i
// = - (alpha b^t)^2 + (b^t)^2 gamma_{ij} beta^i beta^j + b^i b^j gamma_{ij} + 2 b^t g_{t i} b^i
// = - (alpha b^t)^2 + (b^t)^2 gamma_{ij} beta^i beta^j + b^i b^j gamma_{ij} + 2 b^t (gamma_{ij} beta^j) b^i
// = - (alpha b^t)^2 + gamma_{ij} ((b^t)^2 beta^i beta^j + b^i b^j + 2 b^t beta^j b^i)
// = - (alpha b^t)^2 + gamma_{ij} ((b^t)^2 beta^i beta^j + 2 b^t beta^j b^i + b^i b^j)
// = - (alpha b^t)^2 + gamma_{ij} (b^i + b^t beta^i) (b^j + b^t beta^j)
CCTK_REAL bx_plus_shiftx_bt = smallb[SMALLBX]+METRIC[SHIFTX]*smallb[SMALLBT];
CCTK_REAL by_plus_shifty_bt = smallb[SMALLBY]+METRIC[SHIFTY]*smallb[SMALLBT];
CCTK_REAL bz_plus_shiftz_bt = smallb[SMALLBZ]+METRIC[SHIFTZ]*smallb[SMALLBT];
smallb[SMALLB2] = -SQR(METRIC_LAP_PSI4[LAPSE]*smallb[SMALLBT]) +
( METRIC[GXX]*SQR(bx_plus_shiftx_bt) + METRIC[GYY]*SQR(by_plus_shifty_bt) + METRIC[GZZ]*SQR(bz_plus_shiftz_bt) +
2.0*( METRIC[GXY]*(bx_plus_shiftx_bt)*(by_plus_shifty_bt) +
METRIC[GXZ]*(bx_plus_shiftx_bt)*(bz_plus_shiftz_bt) +
METRIC[GYZ]*(by_plus_shifty_bt)*(bz_plus_shiftz_bt) ) ) * METRIC_LAP_PSI4[PSI4]; // mult by psi4 because METRIC[GIJ] is the conformal metric.
/***********************************************************/
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 10: Code validation \[Back to [top](toc)\]$$\label{code_validation}$$First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
###Code
# Verify if the code generated by this tutorial module
# matches the original IllinoisGRMHD source code
# First download the original IllinoisGRMHD source code
import urllib
import os
original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/inlined_functions.C"
original_IGM_file_name = "inlined_functions-original.C"
original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
try:
original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
# If all else fails, hope wget does the job
!wget -O $original_IGM_file_path $original_IGM_file_url
# Perform validation
Validation__inlined_functions__C = !diff $original_IGM_file_path $outfile_path__inlined_functions__C
if Validation__inlined_functions__C == []:
# If the validation passes, we do not need to store the original IGM source code file
!rm $original_IGM_file_path
print("Validation test for inlined_functions.C: PASSED!")
else:
# If the validation fails, we keep the original IGM source code file
print("Validation test for inlined_functions.C: FAILED!")
# We also print out the difference between the code generated
# in this tutorial module and the original IGM source code
print("Diff:")
for diff_line in Validation__inlined_functions__C:
print(diff_line)
###Output
Validation test for inlined_functions.C: FAILED!
Diff:
1,4c1
< static inline CCTK_REAL fasterpow_ppm_reconstruct(CCTK_REAL inputvar,CCTK_REAL inputpow) {
< if(inputpow==2.0) return SQR(inputvar);
< return pow(inputvar,inputpow);
< }
---
>
38a36
>
43a42
>
59c58,59
< static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
---
>
> static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL Gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
64c64,65
< CCTK_REAL c_s_squared = (dPcold_drho + gamma_th*(gamma_th-1.0)*eps_th)/(h);
---
> CCTK_REAL c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th-1.0)*eps_th)/(h);
>
66a68
>
70,86c72,180
< static inline void compute_P_cold__eps_cold__dPcold_drho__eps_th__h__gamma_cold(CCTK_REAL *U, eos_struct &eos,
< CCTK_REAL &P_cold,CCTK_REAL &eps_cold,CCTK_REAL &dPcold_drho,CCTK_REAL &eps_th,CCTK_REAL &h,
< CCTK_REAL &gamma_cold) {
< // This code handles equations of state of the form defined
< // in Eqs 13-16 in http://arxiv.org/pdf/0802.0200.pdf
<
< if(U[RHOB]==0) {
< P_cold = 0.0;
< eps_cold = 0.0;
< dPcold_drho = 0.0;
< eps_th = 0.0;
< h = 0.0;
< gamma_cold = eos.gamma_tab[0];
< return;
< }
<
< CCTK_REAL U_RHOB_inv = 1.0/U[RHOB];
---
> /* Function : font_fix__rhob_loop()
> * Authors : Leo Werneck
> * Description : Determines rhob using the font fix prescription
> * Dependencies: find_polytropic_K_and_Gamma_index()
> * : compute_P_cold__eps_cold()
> * Reference : Etienne et al. (2011) [https://arxiv.org/pdf/1112.0568.pdf]
> *
> * Inputs : maxits - maximum number of iterations allowed
> * : tol - font fix tolerance
> * : W - See eq. (A26)
> * : Sf2 - S_{fluid}^{2}, see eq. (A24)
> * : Psim6 - This is equal to sqrt(\gamma)
> * : sdots - \tilde{S}_{\mu}\tilde{S}^{\mu}
> * : BbardotS2 - (\bar{B}^{\mu}S_{\mu})^{2},
> * : B2bar - \bar{B}^{2}, see eq. (A28)
> * : CONSERVS - Array of conservative variables
> * : eos - Struct of EOS parameters
> * : rhob_in - Initial value of rhob
> * : rhob_out - Output variable
> *
> * Outputs : rhob_out - Updated value of rhob
> * : return value: 0 - Font fix worked
> * : return value: 1 - Font fix failed
> */
> inline int font_fix__rhob_loop( int maxits, CCTK_REAL tol,
> CCTK_REAL W, CCTK_REAL Sf2, CCTK_REAL Psim6, CCTK_REAL sdots, CCTK_REAL BbardotS2, CCTK_REAL B2bar,
> CCTK_REAL *CONSERVS,
> eos_struct eos, CCTK_REAL rhob_in, CCTK_REAL &rhob_out ) {
>
> /* Declare basic variables */
> bool fontcheck=true;
> int itcount = 0, j0, j1;
> CCTK_REAL W0, Sf20, rhob0, rhob1, h, P_cold, eps_cold;
>
> //////////////////////
> // OUTER LOOP START //
> //////////////////////
> while(fontcheck && itcount < maxits) {
>
> /* Set variables to their input values */
> itcount++;
> W0 = W;
> Sf20 = Sf2;
> rhob1 = rhob_in;
>
> /* Based on rhob_in (i.e. rhob1), determine the
> * polytropic index j1
> */
> j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
>
> //////////////////////
> // INNER LOOP START //
> //////////////////////
> do {
>
> /* Set rhob0/j0 to be equal to the rhob/j used
> * in the previous iteration, i.e. rhob1/j1.
> */
> rhob0 = rhob1;
> j0 = j1;
>
> /* Compute h using h_cold and our polytropic EOS
> * .------------------------------------------.
> * | h = h_cold = 1 + eps_cold + P_cold/rhob. |
> * .------------------------------------------.
> */
> compute_P_cold__eps_cold(eos,rhob0, P_cold, eps_cold);
> h = 1.0 + eps_cold + P_cold/rhob0;
>
> /* Update rhob using eq. (A62) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .---------------------------------------------------------------------------.
> * | rhob = rho_star * Psi^{-6} / sqrt( 1 + S_fluid^{2}/( (rho_star*h)^{2} ) ) |
> * .---------------------------------------------------------------------------.
> */
> rhob1 = CONSERVS[RHOSTAR]*Psim6/sqrt(1.0+Sf20/SQR(CONSERVS[RHOSTAR]*h));
>
> /* Update j1 */
> j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
>
> } while( fabs(rhob1-rhob0) > rhob1*tol || j1 != j0);
> //////////////////////
> // INNER LOOP END //
> //////////////////////
>
> /* Output the last value of rhob */
> rhob_out = rhob1;
>
> /* Perform physical checks on the variables
> * and output the last value of h obtained
> */
> compute_P_cold__eps_cold(eos,rhob_out, P_cold, eps_cold);
> h = 1.0 + eps_cold + P_cold/rhob_out;
>
> /* Set W based on eq. (A60) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .-------------------------------------------------------.
> * | W = psi^{-6} * sqrt( S_fluid^{2} + (rho_star*h)^{2} ) |
> * .-------------------------------------------------------.
> */
> W = sqrt( Sf20 + SQR(CONSERVS[RHOSTAR]*h))*Psim6;
>
> /* Then update S_{fluid}^{2} using eq. (A61) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .---------------------------------------------------------------------------.
> * | S_fluid^{2} = ( W^{2}*S^{2} + (B.S)^2*(B^{2} + 2W) )/( ( W + B^{2} )^{2} )|
> * .---------------------------------------------------------------------------.
> */
> Sf2 = (SQR(W)*sdots + BbardotS2*(B2bar + 2.0*W))/SQR(W+B2bar);
88,111c182
< if(eos.neos==1) {
< // Eq. 14 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{cold} = K_i rho_i^{\Gamma_i}
< P_cold = eos.k_tab[0]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[0]);
< // Eq. 16 of http://arxiv.org/pdf/0802.0200.pdf :
< // \epsilon_{cold} = \int ( P_{cold}(rho) / rho^2 ) drho
< // = \int ( K_0 \rho^{\Gamma_0 - 2} ) drho
< // = ( K_0 \rho^{\Gamma_0 - 1} ) / (\Gamma_0 - 1)
< // = ( P_{cold} / rho ) / (\Gamma_0 - 1)
< eps_cold = P_cold*U_RHOB_inv/(eos.gamma_tab[0]-1.0);
< // dPcold/drho = K_i \Gamma_i rho_i^{\Gamma_i-1} = \Gamma_i P_{cold} / rho
< dPcold_drho = eos.gamma_tab[0]*P_cold*U_RHOB_inv;
< // Eq. 15 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{th} = (\Gamma_{th} - 1) \rho_0 \epsilon_{th},
< // Eq. 13 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{th} = P - P_{cold}
< // -> P - P_{cold} = (\Gamma_{th} - 1) \rho_0 \epsilon_{th}
< // -> \epsilon_{th} = ( P - P_{cold} ) / [ (\Gamma_{th} - 1) \rho_0 ]
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< // Just below Eq. 16 in http://arxiv.org/pdf/astro-ph/0503420.pdf :
< // h = 1 + \epsilon + P/rho
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[0];
< return;
---
> if ( fabs(W-W0) < W*tol && fabs(Sf20-Sf2) < Sf2*tol) fontcheck=false;
113,125c184,193
<
< // See comments above for the eos.neos==1 case for relevant
< // equations & references; the extension to arbitrary "nn"
< // is straightforward.
< for(int nn=1;nn<eos.neos;nn++) {
< if (U[RHOB] <= eos.rho_tab[nn] && U[RHOB] > eos.rho_tab[nn-1]) {
< P_cold = eos.k_tab[nn]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[nn]);
< eps_cold = eos.eps_tab[nn-1] + (P_cold*U_RHOB_inv - eos.P_tab[nn-1]/eos.rho_tab[nn-1])/(eos.gamma_tab[nn]-1.0);
< dPcold_drho = eos.gamma_tab[nn]*P_cold*U_RHOB_inv;
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[nn];
< }
---
> //////////////////////
> // OUTER LOOP END //
> //////////////////////
>
> /* If the code converged before the max
> * number of iterations were exceeded,
> * return 0, otherwise return 1.
> */
> if(fontcheck || itcount >= maxits) {
> return 1;
127,133c195,196
< if (U[RHOB] > eos.rho_tab[eos.neos-1]) {
< P_cold = eos.k_tab[eos.neos]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[eos.neos]);
< eps_cold = eos.eps_tab[eos.neos-1] + (P_cold*U_RHOB_inv - eos.P_tab[eos.neos-1]/eos.rho_tab[eos.neos-1])/(eos.gamma_tab[eos.neos]-1.0);
< dPcold_drho = eos.gamma_tab[eos.neos]*P_cold*U_RHOB_inv;
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[eos.neos];
---
> else {
> return 0;
136a200
>
149a214
>
150a216,217
>
> #ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
151a219,220
> #endif
>
165a235
>
176a247
>
197a269
>
212a285
>
234a308
>
252a327
>
265a341
>
283a360
>
###Markdown
Step 11: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-IllinoisGRMHD__inlined_functions.pdf](Tutorial-IllinoisGRMHD__inlined_functions.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means).
###Code
latex_nrpy_style_path = os.path.join(nrpy_dir_path,"latex_nrpy_style.tplx")
#!jupyter nbconvert --to latex --template $latex_nrpy_style_path --log-level='WARN' Tutorial-IllinoisGRMHD__inlined_functions.ipynb
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
_____no_output_____
###Markdown
Tutorial-IllinoisGRMHD: inlined_functions.C Authors: Leo Werneck & Zach Etienne**This module is currently under development** In this tutorial module we explain a series of inline functions that are used by major functions within IllinoisGRMHD. Required and recommended citations:* **(Required)** Etienne, Z. B., Paschalidis, V., Haas R., Mösta P., and Shapiro, S. L. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).* **(Required)** Noble, S. C., Gammie, C. F., McKinney, J. C., Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).* **(Recommended)** Del Zanna, L., Bucciantini N., Londrillo, P. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)). Table of Contents$$\label{toc}$$This module is organized as follows0. [Step 0](src_dir): **Source directory creation**1. [Step 1](introduction): **Introduction**1. [Step 2](pow): **`pow`**1. [Step 3](find_cp_cm): **`find_cp_cm`**1. [Step 4](compute_v02): **`compute_v02`**1. [Step 5](ppeos__c_code): **Polytropic Equations of State** 1. [Step 5.a](ppeos__c_code__prelim): *Preliminary treatment of the input* 1. [Step 5.a.i](ppeos__c_code__prelim__computing_ktab): Determining $\left\{K_{1},K_{2},\ldots,K_{\rm neos}\right\}$ 1. [Step 5.a.ii](ppeos__c_code__prelim__computing_eps_integ_consts): Determining $\left\{C_{0},C_{1},C_{2},\ldots,C_{\rm neos}\right\}$ 1. [Step 5.b](ppeos__c_code__eos_struct_setup) *Setting up the `eos_struct`* 1. [Step 5.c](ppeos__c_code__find_polytropic_k_and_gamma_index) *The `find_polytropic_K_and_Gamma_index()` function* 1. [Step 5.d](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold): *The new `compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold()` function* 1. [Step 5.d.i](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold__case1__rhob_equal_zero): Case 1: $\rho_{b} = 0$ 1. [Step 5.d.ii](ppeos__c_code__compute_P_cold__eps_cold__dPcold_drho__eps_th__h__Gamma_cold__case2__single_polytropic_eos): Case 2: Polytropic EOSs 1. [Step 5.e](compute_p_cold__eps_cold): New function: `compute_P_cold__eps_cold()`1. [Step 6](lower_4vector_output_spatial_part): **`lower_4vector_output_spatial_part`**1. [Step 7](impose_speed_limit_output_u0): **`impose_speed_limit_output_u0`**1. [Step 8](enforce_pressure_floor_ceiling): **`enforce_pressure_floor_ceiling`**1. [Step 9](compute_smallba_b2_and_u_i_over_u0_psi4): **`compute_smallba_b2_and_u_i_over_u0_psi4`**1. [Step 11](code_validation): **Code validation**1. [Step 12](latex_pdf_output): **Output this module to $\LaTeX$-formatted PDF file** Step 0: Source directory creation \[Back to [top](toc)\]$$\label{src_dir}$$We will now use the [cmdline_helper.py NRPy+ module](Tutorial-Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet.
###Code
# Step 0: Creation of the IllinoisGRMHD source directory
# Step 0a: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step 0b: Load up cmdline_helper and create the directory
import cmdline_helper as cmd
IGM_src_dir_path = os.path.join("..","src")
cmd.mkdir(IGM_src_dir_path)
# Step 0c: Create the output file path
outfile_path__inlined_functions__C = os.path.join(IGM_src_dir_path,"inlined_functions.C")
###Output
_____no_output_____
###Markdown
Step 1: Introduction \[Back to [top](toc)\]$$\label{introduction}$$In this tutorial notebook we explain functions of `IllinoisGRMHD` which are called for various purposes. This means that this notebook does not have a specific "theme". We will cover functions whose purposes vary from a simple optimization when squaring numbers to computing minimum and maximum characteristic speeds at cell interfaces. We have tried our best to keep this tutorial module as independent from the others as possible. When new concepts appear, we offer useful references. The mathematical requirements of each function are also covered in great detail. Step 2: `pow` \[Back to [top](toc)\]$$\label{pow}$$This is an extremely simple function which simply checks whether or not we are trying to square a number before calling C's `pow()` function. This is because in C it is computationally quicker to do `x*x` than to use the function call `pow(x,2)`. Notice that we also use the "function" `SQR()`, which is declared in `IllinoisGRMHD_headers.h` and defined as `#define SQR(x) ( (x) * (x) )`. Step 3: `find_cp_cm` \[Back to [top](toc)\]$$\label{find_cp_cm}$$We will now explain the inlined function `find_cp_cm`. Keep in mind that this function depends on the function `compute_v02`, [which is implemented below](compute_v02). This function is called with the objective of computing the minimum ($-$) and maximum ($+$) characteristic speeds at each cell interface, $c_{\pm}^{r,l}$. We approximate the general GRMHD dispersion relation (eq. 27 of [Gammie & McKinney (2003)](https://arxiv.org/pdf/astro-ph/0301509.pdf)) by the simpler expression$$\omega_{\rm cm}^{2} = \left[v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)\right]k_{\rm cm}^{2}\ ,$$where $\omega_{\rm cm}=-k_{\mu}u^{\mu}$ is the frequency and $k_{\rm cm}^{2} = K_{\mu}K^{\mu}$ the wavenumber of an MHD wave mode in the frame comoving with the fluid, where $K_{\mu}$ is defined as the projection of the wave vector $k^{\nu}$ onto the direction normal to $u^{\nu}$: $K_{\mu} = \left(g_{\mu\nu}+u_{\mu}u_{\nu}\right)k^{\nu}$. $c_{\rm s}$ is the sound speed, and $v_{\rm A}$ is the Alfvén speed, given by$$v_{\rm A} = \sqrt{\frac{b^{2}}{\rho_{b}h + b^{2}}}\ .$$With these definitions, we may then solve the approximate dispersion relation above along direction $i$, noting that in the comoving frame $k_{\mu} = \left(-\omega,k_{j}\delta^{j}_{\ i}\right)$ and the wave (phase) velocity is $c_{\pm} = \left.\omega\middle/\left(k_{j}\delta^{j}_{\ i}\right)\right.$. The dispersion can then be written as a quadratic equation for $c_{\pm}$:$$ac_{\pm}^{2} + bc_{\pm} + c = 0\ ,$$with$$\boxed{\begin{align}a &= \left(1-v_{0}^{2}\right)\left(u^{0}\right)^{2} - v_{0}^{2}g^{00}\ ,\\b &= 2v_{0}^{2}g^{i0} - 2u^{i}u^{0}\left(1-v^{2}_{0}\right)\ ,\\c &= \left(1-v_{0}^{2}\right)\left(u^{i}\right)^{2} - v_{0}^{2}g^{ii}\ ,\\v_{0}^{2} &= v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)\ ,\\c_{\rm s} &= \left.\left[\frac{dP_{\rm cold}}{d\rho_{b}} + \Gamma_{\rm th}\left(\Gamma_{\rm th}-1\right)\epsilon_{\rm th}\right]\middle/h\right.\ ,\\c_{+} &= \max\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ ,\\c_{-} &= \min\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ .\end{align}}$$For the implementation of $v_{0}^{2}$, please see [Step 4 below](compute_v02).
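Returning briefly to the `SQR()` macro from Step 2 above, a small standalone illustration (not part of the generated source) of the point about squaring:

```c
#include <math.h>
#include <stdio.h>

/* Illustrative sketch (not part of the generated source): squaring with the
 * SQR() macro versus calling pow().  Both return the same value; the macro
 * simply avoids the library call when the exponent is known to be 2.        */
#define SQR(x) ( (x) * (x) )

int main() {
  const double x = 1.7;
  printf("SQR(x)     = %.17g\n", SQR(x));
  printf("pow(x,2.0) = %.17g\n", pow(x,2.0));
  return 0;
}
```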
###Code
%%writefile $outfile_path__inlined_functions__C
static inline void find_cp_cm(CCTK_REAL &cplus,CCTK_REAL &cminus,CCTK_REAL v02,CCTK_REAL u0,
CCTK_REAL vi,CCTK_REAL ONE_OVER_LAPSE_SQUARED,CCTK_REAL shifti,CCTK_REAL psim4,CCTK_REAL gupii) {
// This computes phase speeds in the direction given by flux_dirn.
// Note that we replace the full dispersion relation with a simpler
// one, which overestimates the max. speeds by a factor of ~2.
// See full discussion around Eqs. 49 and 50 in
// http://arxiv.org/pdf/astro-ph/0503420.pdf .
// What follows is a complete derivation of the quadratic we solve.
// wcm = (-k_0 u0 - k_x ux)
// kcm^2 = K_{\mu} K^{\mu},
// K_{\mu} K^{\mu} = (g_{\mu a} + u_{\mu} u_a) k^a * g^{\mu b} [ (g_{c b} + u_c u_b) k^c ]
// --> g^{\mu b} (g_{c b} + u_{c} u_{b}) k^c = (\delta^{\mu}_c + u_c u^{\mu} ) k^c
// = (g_{\mu a} + u_{\mu} u_a) k^a * (\delta^{\mu}_c + u_c u^{\mu} ) k^c
// =[(g_{\mu a} + u_{\mu} u_a) \delta^{\mu}_c + (g_{\mu a} + u_{\mu} u_a) u_c u^{\mu} ] k^c k^a
// =[(g_{c a} + u_c u_a) + (u_c u_a - u_a u_c] k^c k^a
// =(g_{c a} + u_c u_a) k^c k^a
// = k_a k^a + u^c u^a k_c k_a
// k^a = g^{\mu a} k_{\mu} = g^{0 a} k_0 + g^{x a} k_x
// k_a k^a = k_0 g^{0 0} k_0 + k_x k_0 g^{0 x} + g^{x 0} k_0 k_x + g^{x x} k_x k_x
// = g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2
// u^c u^a k_c k_a = (u^0 k_0 + u^x k_x) (u^0 k_0 + u^x k_x) = (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2
// (k_0 u0)^2 + 2 k_x ux k_0 u0 + (k_x ux)^2 = v02 [ (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2 + g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2]
// (1-v02) (u^0 k_0 + u^x k_x)^2 = v02 (g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2)
// (1-v02) (u^0 k_0/k_x + u^x)^2 = v02 (g^{00} (k_0/k_x)^2 + 2 g^{x0} k_0/k_x + g^{xx})
// (1-v02) (u^0 X + u^x)^2 = v02 (g^{00} X^2 + 2 g^{x0} X + g^{xx})
// (1-v02) (u0^2 X^2 + 2 ux u0 X + ux^2) = v02 (g^{00} X^2 + 2 g^{x0} X + g^{xx})
// X^2 ( (1-v02) u0^2 - v02 g^{00}) + X (2 ux u0 (1-v02) - 2 v02 g^{x0}) + (1-v02) ux^2 - v02 g^{xx}
// a = (1-v02) u0^2 - v02 g^{00} = (1-v02) u0^2 + v02/lapse^2 <-- VERIFIED
// b = 2 ux u0 (1-v02) - 2 v02 shiftx/lapse^2 <-- VERIFIED, X->-X, because X = -w/k_1, and we are solving for -X.
// c = (1-v02) ux^2 - v02 (gupxx*psim4 - (shiftx/lapse)^2) <-- VERIFIED
// v02 = v_A^2 + c_s^2 (1 - v_A^2)
CCTK_REAL u0_SQUARED=SQR(u0);
###Output
Overwriting ../src/inlined_functions.C
###Markdown
We start by setting$$\boxed{\begin{align}a &= \left(1-v_{0}^{2}\right)\left(u^{0}\right)^{2} - v_{0}^{2}g^{00}\\b &= 2v_{0}^{2}g^{i0} - 2u^{i}u^{0}\left(1-v^{2}_{0}\right)\\c &= \left(1-v_{0}^{2}\right)\left(u^{i}\right)^{2} - v_{0}^{2}g^{ii}\end{align}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
//Find cplus, cminus:
CCTK_REAL a = u0_SQUARED * (1.0-v02) + v02*ONE_OVER_LAPSE_SQUARED;
CCTK_REAL b = 2.0* ( shifti*ONE_OVER_LAPSE_SQUARED * v02 - u0_SQUARED * vi * (1.0-v02) );
CCTK_REAL c = u0_SQUARED*SQR(vi) * (1.0-v02) - v02 * ( psim4*gupii -
SQR(shifti)*ONE_OVER_LAPSE_SQUARED);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we find the minimum ($-$) and maximum ($+$) characteristic speeds$$\boxed{\begin{align}c_{+} &= \max\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ ,\\c_{-} &= \min\left(\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\right)\ .\end{align}}$$
###Code
%%writefile -a $IGM_src_dir_path/inlined_functions.C
CCTK_REAL detm = b*b - 4.0*a*c;
//ORIGINAL LINE OF CODE:
//if(detm < 0.0) detm = 0.0;
//New line of code (without the if() statement) has the same effect:
detm = sqrt(0.5*(detm + fabs(detm))); /* Based on very nice suggestion from Roland Haas */
cplus = 0.5*(detm-b)/a;
cminus = -0.5*(detm+b)/a;
if (cplus < cminus) {
CCTK_REAL cp = cminus;
cminus = cplus;
cplus = cp;
}
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 4: `compute_v02` \[Back to [top](toc)\]$$\label{compute_v02}$$This function is used to evaluate $v_{0}^{2}$, a quantity necessary for the computation of the minimum and maximum characteristic speeds at each cell interface, $c_{\pm}^{r,l}$. For more information on this procedure, please see the [implementation of the `find_cp_cm` function in Step 3](find_cp_cm).We start with the sound speed:$$\boxed{c_{\rm s} = \left.\left[\frac{dP_{\rm cold}}{d\rho_{b}} + \Gamma_{\rm th}\left(\Gamma_{\rm th}-1\right)\epsilon_{\rm th}\right]\middle/h\right.}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL Gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
if(U[RHOB]<=0) { v02L=1.0; return; }
/* c_s = sound speed = (dP_c/drho + \Gamma(\Gamma-1) \epsilon_th)/h */
CCTK_REAL c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th-1.0)*eps_th)/(h);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Next we compute the square of the Alfvén speed, $v_{\rm A}$, which is given by$$\boxed{v_{\rm A}^{2} = \frac{b^{2}}{\rho_{b}h + b^{2}}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
/* v_A = Alfven speed = sqrt( b^2/(rho0 h + b^2) ) */
CCTK_REAL v_A_squared = smallb[SMALLB2]/(smallb[SMALLB2] + U[RHOB]*(h));
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, $v_{0}$ is related to the sound speed and the Alfvén speed via$$\boxed{v_{0}^{2} = v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right)}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
v02L = v_A_squared + c_s_squared*(1.0-v_A_squared);
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 5.e: `font_fix__rhob_loop` \[Back to [top](toc)\]$$\label{compute_p_cold__eps_cold}$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
/* Function : font_fix__rhob_loop()
* Authors : Leo Werneck
* Description : Determines rhob using the font fix prescription
* Dependencies: find_polytropic_K_and_Gamma_index()
* : compute_P_cold__eps_cold()
* Reference : Etienne et al. (2011) [https://arxiv.org/pdf/1112.0568.pdf]
*
* Inputs : maxits - maximum number of iterations allowed
* : tol - font fix tolerance
* : W - See eq. (A26)
* : Sf2 - S_{fluid}^{2}, see eq. (A24)
* : Psim6 - This is equal to sqrt(\gamma)
* : sdots - \tilde{S}_{\mu}\tilde{S}^{\mu}
* : BbardotS2 - (\bar{B}^{\mu}S_{\mu})^{2},
* : B2bar - \bar{B}^{2}, see eq. (A28)
* : CONSERVS - Array of conservative variables
* : eos - Struct of EOS parameters
* : rhob_in - Initial value of rhob
* : rhob_out - Output variable
*
* Outputs : rhob_out - Updated value of rhob
* : return value: 0 - Font fix worked
* : return value: 1 - Font fix failed
*/
inline int font_fix__rhob_loop( int maxits, CCTK_REAL tol,
CCTK_REAL W, CCTK_REAL Sf2, CCTK_REAL Psim6, CCTK_REAL sdots, CCTK_REAL BbardotS2, CCTK_REAL B2bar,
CCTK_REAL *CONSERVS,
eos_struct eos, CCTK_REAL rhob_in, CCTK_REAL &rhob_out ) {
/* Declare basic variables */
bool fontcheck=true;
int itcount = 0, j0, j1;
CCTK_REAL W0, Sf20, rhob0, rhob1, h, P_cold, eps_cold;
//////////////////////
// OUTER LOOP START //
//////////////////////
while(fontcheck && itcount < maxits) {
/* Set variables to their input values */
itcount++;
W0 = W;
Sf20 = Sf2;
rhob1 = rhob_in;
/* Based on rhob_in (i.e. rhob1), determine the
* polytropic index j1
*/
j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
//////////////////////
// INNER LOOP START //
//////////////////////
do {
/* Set rhob0/j0 to be equal to the rhob/j used
* in the previous iteration, i.e. rhob1/j1.
*/
rhob0 = rhob1;
j0 = j1;
/* Compute h using h_cold and our polytropic EOS
* .------------------------------------------.
* | h = h_cold = 1 + eps_cold + P_cold/rhob. |
* .------------------------------------------.
*/
compute_P_cold__eps_cold(eos,rhob0, P_cold, eps_cold);
h = 1.0 + eps_cold + P_cold/rhob0;
/* Update rhob using eq. (A62) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .---------------------------------------------------------------------------.
* | rhob = rho_star * Psi^{-6} / sqrt( 1 + S_fluid^{2}/( (rho_star*h)^{2} ) ) |
* .---------------------------------------------------------------------------.
*/
rhob1 = CONSERVS[RHOSTAR]*Psim6/sqrt(1.0+Sf20/SQR(CONSERVS[RHOSTAR]*h));
/* Update j1 */
j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
} while( fabs(rhob1-rhob0) > rhob1*tol || j1 != j0);
//////////////////////
// INNER LOOP END //
//////////////////////
/* Output the last value of rhob */
rhob_out = rhob1;
/* Perform physical checks on the variables
* and output the last value of h obtained
*/
compute_P_cold__eps_cold(eos,rhob_out, P_cold, eps_cold);
h = 1.0 + eps_cold + P_cold/rhob_out;
/* Set W based on eq. (A60) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .-------------------------------------------------------.
* | W = psi^{-6} * sqrt( S_fluid^{2} + (rho_star*h)^{2} ) |
* .-------------------------------------------------------.
*/
W = sqrt( Sf20 + SQR(CONSERVS[RHOSTAR]*h))*Psim6;
/* Then update S_{fluid}^{2} using eq. (A61) in Etienne et al. (2011)
* https://arxiv.org/pdf/1112.0568.pdf
* .---------------------------------------------------------------------------.
* | S_fluid^{2} = ( W^{2}*S^{2} + (B.S)^2*(B^{2} + 2W) )/( ( W + B^{2} )^{2} )|
* .---------------------------------------------------------------------------.
*/
Sf2 = (SQR(W)*sdots + BbardotS2*(B2bar + 2.0*W))/SQR(W+B2bar);
if ( fabs(W-W0) < W*tol && fabs(Sf20-Sf2) < Sf2*tol) fontcheck=false;
}
//////////////////////
// OUTER LOOP END //
//////////////////////
/* If the code converged before the max
* number of iterations were exceeded,
* return 0, otherwise return 1.
*/
if(fontcheck || itcount >= maxits) {
return 1;
}
else {
return 0;
}
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 6: `lower_4vector_output_spatial_part` \[Back to [top](toc)\]$$\label{lower_4vector_output_spatial_part}$$This function is used to lower the indices of the spatial components of 4-vectors, $b^{\mu}$. Consider$$\begin{align}b_{i} &= g_{i\mu}b^{\mu} \\ &= g_{i0}b^{0} + g_{ij}b^{j} \\ &= \left(\gamma_{ij}\beta^{j}\right)b^{0} + \gamma_{ij}b^{j} \\ &= \gamma_{ij}\left(b^{j} + \beta^{j}b^{0}\right)\ ,\end{align}$$or, using the conformal metric and each component separately$$\boxed{\begin{align}b_{x} &= \psi^{4}\left[\bar{\gamma}_{xx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{xy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{xz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\\b_{y} &= \psi^{4}\left[\bar{\gamma}_{yx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{yy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{yz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\\b_{z} &= \psi^{4}\left[\bar{\gamma}_{zx}\left(b^{x} + \beta^{x}b^{0}\right)+\bar{\gamma}_{zy}\left(b^{y} + \beta^{y}b^{0}\right)+\bar{\gamma}_{zz}\left(b^{z} + \beta^{z}b^{0}\right)\right]\end{align}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// b_x = g_{\mu x} b^{\mu}
// = g_{t x} b^t + g_{i x} b^i
// = b^t gamma_{xj} beta^j + gamma_{ix} b^i
// = gamma_{xj} (b^j + beta^j b^t)
static inline void lower_4vector_output_spatial_part(CCTK_REAL psi4,CCTK_REAL *METRIC,CCTK_REAL *smallb, CCTK_REAL *smallb_lower) {
smallb_lower[SMALLBX] = psi4*( METRIC[GXX]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GXY]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GXZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
smallb_lower[SMALLBY] = psi4*( METRIC[GXY]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GYY]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GYZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
smallb_lower[SMALLBZ] = psi4*( METRIC[GXZ]*(smallb[SMALLBX]+smallb[SMALLBT]*METRIC[SHIFTX]) + METRIC[GYZ]*(smallb[SMALLBY]+smallb[SMALLBT]*METRIC[SHIFTY]) +
METRIC[GZZ]*(smallb[SMALLBZ]+smallb[SMALLBT]*METRIC[SHIFTZ]) );
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 7: `impose_speed_limit_output_u0` \[Back to [top](toc)\]$$\label{impose_speed_limit_output_u0}$$We now call upon the `impose_speed_limit_output_u0()` function inside the `inlined_functions.C` code file of `IllinoisGRMHD`. The basic algorithm performed by this function is summarized here. We start by evaluating the quantity$$\begin{align}{\rm one\_minus\_one\_over\_alpha\_u0\_squared} \equiv A &= \gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right)\\&= \frac{\gamma_{ij}}{\alpha^{2}}\left[\frac{\gamma^{ik}u_{k}}{u^{0}} - \beta^{i} + \beta^{i}\right]\left[\frac{\gamma^{j\ell}u_{\ell}}{u^{0}} - \beta^{j} + \beta^{j}\right]\\&=\frac{\gamma_{ij}u^{i}u^{j}}{\left(\alpha u^{0}\right)^{2}}\\&=\frac{\left(\alpha u^{0}\right)^{2}-1}{\left(\alpha u^{0}\right)^{2}}\\&=1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}\ \\\implies \boxed{A = 1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}}\ ,\end{align}$$where when going from line 1 to 2 and from line 3 to 4 we have used eqs. (53) and (56) from [Duez *et al.*](https://arxiv.org/pdf/astro-ph/0503420.pdf), respectively. Keep in mind that the equation we are going to implement below is$$\boxed{{\rm one\_minus\_one\_over\_alpha\_u0\_squared} = \gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right)}\ ,$$but it is important to know that this equation also equals $A$ above.
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void impose_speed_limit_output_u0(CCTK_REAL *METRIC,CCTK_REAL *U,CCTK_REAL psi4,CCTK_REAL ONE_OVER_LAPSE,output_stats &stats, CCTK_REAL &u0_out) {
DECLARE_CCTK_PARAMETERS;
// Derivation of first equation:
// \gamma_{ij} (v^i + \beta^i)(v^j + \beta^j)/(\alpha)^2
// = \gamma_{ij} 1/(u^0)^2 ( \gamma^{ik} u_k \gamma^{jl} u_l /(\alpha)^2 <- Using Eq. 53 of arXiv:astro-ph/0503420
// = 1/(u^0 \alpha)^2 u_j u_l \gamma^{jl} <- Since \gamma_{ij} \gamma^{ik} = \delta^k_j
// = 1/(u^0 \alpha)^2 ( (u^0 \alpha)^2 - 1 ) <- Using Eq. 56 of arXiv:astro-ph/0503420
// = 1 - 1/(u^0 \alpha)^2 <= 1
CCTK_REAL one_minus_one_over_alpha_u0_squared = psi4*(METRIC[GXX]* SQR(U[VX] + METRIC[SHIFTX]) +
2.0*METRIC[GXY]*(U[VX] + METRIC[SHIFTX])*(U[VY] + METRIC[SHIFTY]) +
2.0*METRIC[GXZ]*(U[VX] + METRIC[SHIFTX])*(U[VZ] + METRIC[SHIFTZ]) +
METRIC[GYY]* SQR(U[VY] + METRIC[SHIFTY]) +
2.0*METRIC[GYZ]*(U[VY] + METRIC[SHIFTY])*(U[VZ] + METRIC[SHIFTZ]) +
METRIC[GZZ]* SQR(U[VZ] + METRIC[SHIFTZ]) )*SQR(ONE_OVER_LAPSE);
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we construct the "speed limit quantity"$${\rm ONE\_MINUS\_ONE\_OVER\_GAMMA\_SPEED\_LIMIT\_SQUARED} \equiv B = 1-\frac{1}{\gamma^{2}_{\rm speed\ limit}}\ .$$If $A > B$, then we construct the correction factor $C\equiv \sqrt{B / A}$, and adjust the velocities using$$\boxed{v^{i} \to \left(v^{i}+\beta^{i}\right)C - \beta^{i}}\ ,$$which rescales the velocities so that $A$ is brought down to exactly $B$.
###Code
%%writefile -a $outfile_path__inlined_functions__C
/*** Limit velocity to GAMMA_SPEED_LIMIT ***/
const CCTK_REAL ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED = 1.0-1.0/SQR(GAMMA_SPEED_LIMIT);
if(one_minus_one_over_alpha_u0_squared > ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED) {
CCTK_REAL correction_fac = sqrt(ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED/one_minus_one_over_alpha_u0_squared);
U[VX] = (U[VX] + METRIC[SHIFTX])*correction_fac-METRIC[SHIFTX];
U[VY] = (U[VY] + METRIC[SHIFTY])*correction_fac-METRIC[SHIFTY];
U[VZ] = (U[VZ] + METRIC[SHIFTZ])*correction_fac-METRIC[SHIFTZ];
one_minus_one_over_alpha_u0_squared=ONE_MINUS_ONE_OVER_GAMMA_SPEED_LIMIT_SQUARED;
stats.failure_checker+=1000;
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, since $A$ is evaluated using the first line above, namely$$\gamma_{ij}\left(\frac{v^{i}+\beta^{i}}{\alpha}\right)\left(\frac{v^{j}+\beta^{j}}{\alpha}\right) = A = 1 - \frac{1}{\left(\alpha u^{0}\right)^{2}}\ ,$$we can then compute $u^{0}$ by simply doing$$\boxed{u^{0} = \frac{1}{\alpha\sqrt{1-A}}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// A = 1.0-one_minus_one_over_alpha_u0_squared = 1-(1-1/(al u0)^2) = 1/(al u0)^2
// 1/sqrt(A) = al u0
//CCTK_REAL alpha_u0_minus_one = 1.0/sqrt(1.0-one_minus_one_over_alpha_u0_squared)-1.0;
//u0_out = (alpha_u0_minus_one + 1.0)*ONE_OVER_LAPSE;
CCTK_REAL alpha_u0 = 1.0/sqrt(1.0-one_minus_one_over_alpha_u0_squared);
if(std::isnan(alpha_u0*ONE_OVER_LAPSE)) printf("BAD FOUND NAN U0 CALC: %.15e %.15e %.15e | %.15e %.15e\n",alpha_u0,ONE_OVER_LAPSE,one_minus_one_over_alpha_u0_squared,psi4, U[VX]);
u0_out = alpha_u0*ONE_OVER_LAPSE;
}
// The two lines of code below are written to reduce roundoff error and were in the above function. I don't think they reduce error.
// one_over_alpha_u0 = sqrt(1.0-one_minus_one_over_alpha_u0_squared);
/* Proof of following line: */
/* [ 1-1/(alphau0)^2 ] / [ 1/(alphau0) (1 + 1/(alphau0)) ] */
/* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ 1/(alphau0) + 1/(alphau0)^2 ] */
/* = [ (alphau0)^2 - 1)/((alphau0)^2) ] / [ (alphau0 + 1)/(alphau0)^2 ] */
/* = [ (alphau0)^2 - 1) ] / [ (alphau0 + 1) ] */
/* [ (alphau0 + 1) (alphau0 - 1) ] / [ (alphau0 + 1) ] */
/* = alphau0 - 1 */
//alpha_u0_minus_one = one_minus_one_over_alpha_u0_squared/one_over_alpha_u0/(1.0+one_over_alpha_u0);
//u0_out = (alpha_u0_minus_one+1.0)*ONE_OVER_LAPSE;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 8: `enforce_pressure_floor_ceiling` \[Back to [top](toc)\]$$\label{enforce_pressure_floor_ceiling}$$After the Newton-Raphson solver has successfully found a set of primitives, the primitives are checked for physicality, and if they are not in the physical range, they are minimally modified until they return to the physical range. First, if the velocity is found to be superluminal, the speed is reduced to `IllinoisGRMHD`’s default Lorentz factor limit, a procedure which we already explained above when we discussed the `impose_speed_limit_output_u0` function. Next, `IllinoisGRMHD` does not include any cooling mechanism, which means that for evolutions adopting a $\Gamma$-law equation of state, the pressure should not physically drop below $P_{\rm cold}$. So a pressure floor of $0.9P_{\rm cold}$ is imposed. Increasing this floor to $P_{\rm cold}$ exactly results in large central density drifts in TOV star evolutions. **NOTE**: Please keep in mind that the floor and ceiling values presented here were found ***empirically***.
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void enforce_pressure_floor_ceiling(output_stats &stats,CCTK_REAL kpoly,CCTK_REAL P_cold,CCTK_REAL Psi6,const CCTK_REAL Psi6threshold,CCTK_REAL rho_b,const CCTK_REAL rhobatm, CCTK_REAL &P) {
CCTK_REAL P_min=0.9*P_cold;
if(P<P_min) {
stats.failure_checker+=10;
P=P_min;
}
//MAX(P,P_min);
//if(P < P_min) P=1.0*P_cold;
/* OLD: Discarded because lower limit is unphysical.
if(P <= 0.5*kpoly*P_cold) {
P=0.5*kpoly*P_cold;
}
*/
###Output
Appending to ../src/inlined_functions.C
###Markdown
Simulations can crash in the other extreme, if $P/P_{\rm cold}$ becomes too large. This typically only happens in very low density regions or inside black holes. So at densities $\rho_{b}<100\rho_{\rm atm}$ or deep inside black hole horizons, a ceiling on $P$ of $100P_{\rm cold}$ is enforced (see Appendix A of [Etienne *et al.* (2012)](https://arxiv.org/abs/1112.0568) for more details). We also introduce a parameter, $\psi^{6}_{\rm threshold}$, which determines whether the region under consideration is deep inside the BH horizon or not. For regions deep inside the BH horizon, defined by $\sqrt{\gamma} = \psi^{6} > \psi^{6}_{\rm threshold}$, the primary goal is to keep the evolution stable and prevent inaccurate data from leaking out of the BH horizon. It was determined that in this situation, a better ceiling on $P$ is $10^{5}P_{\rm cold}$.
###Code
%%writefile -a $outfile_path__inlined_functions__C
//CCTK_REAL P_max = 10.0*P_cold;
CCTK_REAL P_max = 100.0*P_cold;
if(Psi6 > Psi6threshold) P_max = 1e5*P_cold; // <-- better than 10.
if((rho_b < 100.0*rhobatm || Psi6 > Psi6threshold) && P>P_max) {
P=P_max;
stats.failure_checker+=100;
}
/*
CCTK_REAL rho_horiz_cap = 1000.0*rhobatm;
//New density damping mechanism inside the horizon
if(Psi6 > Psi6threshold && rho_b>rho_horiz_cap) {
CCTK_REAL six_phi=log(Psi6);
CCTK_REAL six_phithreshold=log(Psi6threshold);
CCTK_REAL Psi6max_approx=350000;
rho_b = rho_horiz_cap+(rho_b-rho_horiz_cap)*exp(-200.0*SQR((six_phi-six_phithreshold)/log(Psi6max_approx)));
}
*/
}
###Output
Appending to ../src/inlined_functions.C
###Markdown
Step 9: `compute_smallba_b2_and_u_i_over_u0_psi4` \[Back to [top](toc)\]$$\label{compute_smallba_b2_and_u_i_over_u0_psi4}$$In this inlined function we will compute quantities related to the magnetic field measured in the comoving fluid frame, $b^{\mu}$.We will need the following identities$$\begin{align}v^{i} &= \frac{u^{i}}{u^{0}}\ ,\\B^{0}_{(u)} &= \frac{u_{i}B^{i}}{\alpha}\ ,\\B^{i}_{(u)} &= \frac{1}{u^{0}}\left(\frac{B^{i}}{\alpha} + u^{i}B^{0}_{(u)}\right)\ ,\\b^{\mu} &= \frac{B^{\mu}_{(u)}}{\sqrt{4\pi}}\ .\end{align}$$We start by setting the relation$$b^{0} = \frac{u_{i}B^{i}}{\alpha\sqrt{4\pi}} \implies \boxed{\alpha\sqrt{4\pi}b^{0} = u_{i}B^{i}}\ .$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
static inline void compute_smallba_b2_and_u_i_over_u0_psi4(CCTK_REAL *METRIC,CCTK_REAL *METRIC_LAP_PSI4,CCTK_REAL *U,CCTK_REAL u0L,CCTK_REAL ONE_OVER_LAPSE_SQRT_4PI,
CCTK_REAL &u_x_over_u0_psi4,CCTK_REAL &u_y_over_u0_psi4,CCTK_REAL &u_z_over_u0_psi4,CCTK_REAL *smallb) {
// NOW COMPUTE b^{\mu} and b^2 = b^{\mu} b^{\nu} g_{\mu \nu}
CCTK_REAL ONE_OVER_U0 = 1.0/u0L;
CCTK_REAL shiftx_plus_vx = (METRIC[SHIFTX]+U[VX]);
CCTK_REAL shifty_plus_vy = (METRIC[SHIFTY]+U[VY]);
CCTK_REAL shiftz_plus_vz = (METRIC[SHIFTZ]+U[VZ]);
// Eq. 56 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// u_i = gamma_{ij} u^0 (v^j + beta^j), gamma_{ij} is the physical metric, and gamma_{ij} = Psi4 * METRIC[Gij], since METRIC[Gij] is the conformal metric.
u_x_over_u0_psi4 = METRIC[GXX]*shiftx_plus_vx + METRIC[GXY]*shifty_plus_vy + METRIC[GXZ]*shiftz_plus_vz;
u_y_over_u0_psi4 = METRIC[GXY]*shiftx_plus_vx + METRIC[GYY]*shifty_plus_vy + METRIC[GYZ]*shiftz_plus_vz;
u_z_over_u0_psi4 = METRIC[GXZ]*shiftx_plus_vx + METRIC[GYZ]*shifty_plus_vy + METRIC[GZZ]*shiftz_plus_vz;
// Eqs. 23 and 31 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// Compute alpha sqrt(4 pi) b^t = u_i B^i
CCTK_REAL alpha_sqrt_4pi_bt = ( u_x_over_u0_psi4*U[BX_CENTER] + u_y_over_u0_psi4*U[BY_CENTER] + u_z_over_u0_psi4*U[BZ_CENTER] ) * METRIC_LAP_PSI4[PSI4]*u0L;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Then we compute$$\begin{align}b^{i} &= \frac{B^{i}_{(u)}}{\sqrt{4\pi}}\\ &= \frac{1}{u^{0}\sqrt{4\pi}}\left(\frac{B^{i}}{\alpha} + B^{0}_{(u)}u^{i}\right)\\ &= \frac{1}{u^{0}\sqrt{4\pi}}\left(\frac{B^{i}}{\alpha} + \sqrt{4\pi}b^{0}u^{i}\right)\\ &= \frac{1}{\alpha\sqrt{4\pi}}\left(\frac{B^{i}}{u^{0}} + \alpha\sqrt{4\pi}b^{0}\frac{u^{i}}{u^{0}}\right)\\\implies &\boxed{b^{i} = \frac{1}{\alpha\sqrt{4\pi}}\left(\frac{B^{i}}{u^{0}} + \alpha\sqrt{4\pi}b^{0}v^{i}\right)}\ .\end{align}$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// Eq. 24 in http://arxiv.org/pdf/astro-ph/0503420.pdf:
// b^i = B^i_u / sqrt(4 pi)
// b^i = ( B^i/alpha + B^0_u u^i ) / ( u^0 sqrt(4 pi) )
// b^i = ( B^i/alpha + sqrt(4 pi) b^t u^i ) / ( u^0 sqrt(4 pi) )
// b^i = ( B^i + alpha sqrt(4 pi) b^t u^i ) / ( alpha u^0 sqrt(4 pi) )
// b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^t u^i/u^0 ) / ( alpha sqrt(4 pi) )
// b^i = ( B^i/u^0 + alpha sqrt(4 pi) b^t v^i ) / ( alpha sqrt(4 pi) )
smallb[SMALLBX] = (U[BX_CENTER]*ONE_OVER_U0 + U[VX]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
smallb[SMALLBY] = (U[BY_CENTER]*ONE_OVER_U0 + U[VY]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
smallb[SMALLBZ] = (U[BZ_CENTER]*ONE_OVER_U0 + U[VZ]*alpha_sqrt_4pi_bt)*ONE_OVER_LAPSE_SQRT_4PI;
// Eq. 23 in http://arxiv.org/pdf/astro-ph/0503420.pdf, with alpha sqrt (4 pi) b^2 = u_i B^i already computed above
smallb[SMALLBT] = alpha_sqrt_4pi_bt * ONE_OVER_LAPSE_SQRT_4PI;
###Output
Appending to ../src/inlined_functions.C
###Markdown
Finally, we compute$$\begin{align}b^{2} &= g_{\mu\nu}b^{\mu}b^{\nu}\\ &= g_{00}\left(b^{0}\right)^{2} + g_{ij}b^{i}b^{j} + 2g_{0i}b^{0}b^{i}\\ &= \left(-\alpha^{2} + \gamma_{ij}\beta^{i}\beta^{j}\right)\left(b^{0}\right)^{2} + \gamma_{ij}b^{i}b^{j} + 2b^{0}\gamma_{ij}\beta^{j}b^{i}\\ &= -\left(\alpha b^{0}\right)^{2} + \gamma_{ij}\left[b^{i}b^{j} + 2b^{0}b^{i}\beta^{j} + \left(b^{0}\right)^{2}\beta^{i}\beta^{j}\right]\\\implies &\boxed{b^{2} = -\left(\alpha b^{0}\right)^{2} + \gamma_{ij}\left(b^{i} + b^{0}\beta^{i}\right)\left(b^{j} + b^{0}\beta^{j}\right)}\end{align}$$
###Code
%%writefile -a $outfile_path__inlined_functions__C
// b^2 = g_{\mu \nu} b^{\mu} b^{\nu}
// = gtt bt^2 + gxx bx^2 + gyy by^2 + gzz bz^2 + 2 (gtx bt bx + gty bt by + gtz bt bz + gxy bx by + gxz bx bz + gyz by bz)
// = (-al^2 + gamma_{ij} betai betaj) bt^2 + b^i b^j gamma_{ij} + 2 g_{t i} b^t b^i
// = - (alpha b^t)^2 + (b^t)^2 gamma_{ij} beta^i beta^j + b^i b^j gamma_{ij} + 2 b^t g_{t i} b^i
// = - (alpha b^t)^2 + (b^t)^2 gamma_{ij} beta^i beta^j + b^i b^j gamma_{ij} + 2 b^t (gamma_{ij} beta^j) b^i
// = - (alpha b^t)^2 + gamma_{ij} ((b^t)^2 beta^i beta^j + b^i b^j + 2 b^t beta^j b^i)
// = - (alpha b^t)^2 + gamma_{ij} ((b^t)^2 beta^i beta^j + 2 b^t beta^j b^i + b^i b^j)
// = - (alpha b^t)^2 + gamma_{ij} (b^i + b^t beta^i) (b^j + b^t beta^j)
CCTK_REAL bx_plus_shiftx_bt = smallb[SMALLBX]+METRIC[SHIFTX]*smallb[SMALLBT];
CCTK_REAL by_plus_shifty_bt = smallb[SMALLBY]+METRIC[SHIFTY]*smallb[SMALLBT];
CCTK_REAL bz_plus_shiftz_bt = smallb[SMALLBZ]+METRIC[SHIFTZ]*smallb[SMALLBT];
smallb[SMALLB2] = -SQR(METRIC_LAP_PSI4[LAPSE]*smallb[SMALLBT]) +
( METRIC[GXX]*SQR(bx_plus_shiftx_bt) + METRIC[GYY]*SQR(by_plus_shifty_bt) + METRIC[GZZ]*SQR(bz_plus_shiftz_bt) +
2.0*( METRIC[GXY]*(bx_plus_shiftx_bt)*(by_plus_shifty_bt) +
METRIC[GXZ]*(bx_plus_shiftx_bt)*(bz_plus_shiftz_bt) +
METRIC[GYZ]*(by_plus_shifty_bt)*(bz_plus_shiftz_bt) ) ) * METRIC_LAP_PSI4[PSI4]; // mult by psi4 because METRIC[GIJ] is the conformal metric.
/***********************************************************/
}
###Output
Appending to ../src/inlined_functions.C
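###Markdown
The boxed rewriting of $b^{2}$ is a purely algebraic identity, so we can also verify it symbolically. The short SymPy sketch below is independent of the generated C code: it builds both expressions for a generic symmetric $\gamma_{ij}$, shift $\beta^{i}$, lapse $\alpha$, and components $b^{\mu}$, and confirms that their difference expands to zero.
###Code
import sympy as sp
alpha, bt = sp.symbols('alpha b0')
beta = sp.symbols('betax betay betaz')
b    = sp.symbols('bx by bz')
# Symmetric 3-metric gamma_{ij}: g[i,j] and g[j,i] are the same symbol
g = sp.Matrix(3, 3, lambda i, j: sp.Symbol('g%d%d' % (min(i, j), max(i, j))))
beta_lower = [sum(g[i, j]*beta[j] for j in range(3)) for i in range(3)]   # beta_i = gamma_{ij} beta^j
# Direct contraction b^2 = g_{mu nu} b^mu b^nu with the ADM 4-metric
direct = ((-alpha**2 + sum(g[i, j]*beta[i]*beta[j] for i in range(3) for j in range(3)))*bt**2
          + 2*bt*sum(beta_lower[i]*b[i] for i in range(3))
          + sum(g[i, j]*b[i]*b[j] for i in range(3) for j in range(3)))
# Boxed form: -(alpha b^0)^2 + gamma_{ij} (b^i + b^0 beta^i)(b^j + b^0 beta^j)
boxed = -(alpha*bt)**2 + sum(g[i, j]*(b[i] + bt*beta[i])*(b[j] + bt*beta[j]) for i in range(3) for j in range(3))
print(sp.expand(direct - boxed))   # prints 0
###Output
_____no_output_____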
###Markdown
Step 10: Code validation \[Back to [top](toc)\]$$\label{code_validation}$$First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
###Code
# Verify if the code generated by this tutorial module
# matches the original IllinoisGRMHD source code
# First download the original IllinoisGRMHD source code
import urllib
import os
original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/inlined_functions.C"
original_IGM_file_name = "inlined_functions-original.C"
original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read()
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
try:
original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read()
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
# If all else fails, hope wget does the job
!wget -O $original_IGM_file_path $original_IGM_file_url
# Perform validation
Validation__inlined_functions__C = !diff $original_IGM_file_path $outfile_path__inlined_functions__C
if Validation__inlined_functions__C == []:
# If the validation passes, we do not need to store the original IGM source code file
!rm $original_IGM_file_path
print("Validation test for inlined_functions.C: PASSED!")
else:
# If the validation fails, we keep the original IGM source code file
print("Validation test for inlined_functions.C: FAILED!")
# We also print out the difference between the code generated
# in this tutorial module and the original IGM source code
print("Diff:")
for diff_line in Validation__inlined_functions__C:
print(diff_line)
###Output
Validation test for inlined_functions.C: FAILED!
Diff:
1,4c1
< static inline CCTK_REAL fasterpow_ppm_reconstruct(CCTK_REAL inputvar,CCTK_REAL inputpow) {
< if(inputpow==2.0) return SQR(inputvar);
< return pow(inputvar,inputpow);
< }
---
>
59c56
< static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
---
> static inline void compute_v02(CCTK_REAL dPcold_drho,CCTK_REAL Gamma_th,CCTK_REAL eps_th,CCTK_REAL h,CCTK_REAL *smallb,CCTK_REAL *U, CCTK_REAL &v02L) {
64c61
< CCTK_REAL c_s_squared = (dPcold_drho + gamma_th*(gamma_th-1.0)*eps_th)/(h);
---
> CCTK_REAL c_s_squared = (dPcold_drho + Gamma_th*(Gamma_th-1.0)*eps_th)/(h);
68a66,174
> /* Function : font_fix__rhob_loop()
> * Authors : Leo Werneck
> * Description : Determines rhob using the font fix prescription
> * Dependencies: find_polytropic_K_and_Gamma_index()
> * : compute_P_cold__eps_cold()
> * Reference : Etienne et al. (2011) [https://arxiv.org/pdf/1112.0568.pdf]
> *
> * Inputs : maxits - maximum number of iterations allowed
> * : tol - font fix tolerance
> * : W - See eq. (A26)
> * : Sf2 - S_{fluid}^{2}, see eq. (A24)
> * : Psim6 - This is equal to sqrt(\gamma)
> * : sdots - \tilde{S}_{\mu}\tilde{S}^{\mu}
> * : BbardotS2 - (\bar{B}^{\mu}S_{\mu})^{2},
> * : B2bar - \bar{B}^{2}, see eq. (A28)
> * : CONSERVS - Array of conservative variables
> * : eos - Struct of EOS parameters
> * : rhob_in - Initial value of rhob
> * : rhob_out - Output variable
> *
> * Outputs : rhob_out - Updated value of rhob
> * : return value: 0 - Font fix worked
> * : return value: 1 - Font fix failed
> */
> inline int font_fix__rhob_loop( int maxits, CCTK_REAL tol,
> CCTK_REAL W, CCTK_REAL Sf2, CCTK_REAL Psim6, CCTK_REAL sdots, CCTK_REAL BbardotS2, CCTK_REAL B2bar,
> CCTK_REAL *CONSERVS,
> eos_struct eos, CCTK_REAL rhob_in, CCTK_REAL &rhob_out ) {
>
> /* Declare basic variables */
> bool fontcheck=true;
> int itcount = 0, j0, j1;
> CCTK_REAL W0, Sf20, rhob0, rhob1, h, P_cold, eps_cold;
>
> //////////////////////
> // OUTER LOOP START //
> //////////////////////
> while(fontcheck && itcount < maxits) {
>
> /* Set variables to their input values */
> itcount++;
> W0 = W;
> Sf20 = Sf2;
> rhob1 = rhob_in;
>
> /* Based on rhob_in (i.e. rhob1), determine the
> * polytropic index j1
> */
> j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
>
> //////////////////////
> // INNER LOOP START //
> //////////////////////
> do {
>
> /* Set rhob0/j0 to be equal to the rhob/j used
> * in the previous iteration, i.e. rhob1/j1.
> */
> rhob0 = rhob1;
> j0 = j1;
>
> /* Compute h using h_cold and our polytropic EOS
> * .------------------------------------------.
> * | h = h_cold = 1 + eps_cold + P_cold/rhob. |
> * .------------------------------------------.
> */
> compute_P_cold__eps_cold(eos,rhob0, P_cold, eps_cold);
> h = 1.0 + eps_cold + P_cold/rhob0;
>
> /* Update rhob using eq. (A62) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .---------------------------------------------------------------------------.
> * | rhob = rho_star * Psi^{-6} / sqrt( 1 + S_fluid^{2}/( (rho_star*h)^{2} ) ) |
> * .---------------------------------------------------------------------------.
> */
> rhob1 = CONSERVS[RHOSTAR]*Psim6/sqrt(1.0+Sf20/SQR(CONSERVS[RHOSTAR]*h));
>
> /* Update j1 */
> j1 = find_polytropic_K_and_Gamma_index(eos,rhob1);
>
> } while( fabs(rhob1-rhob0) > rhob1*tol || j1 != j0);
> //////////////////////
> // INNER LOOP END //
> //////////////////////
>
> /* Output the last value of rhob */
> rhob_out = rhob1;
>
> /* Perform physical checks on the variables
> * and output the last value of h obtained
> */
> compute_P_cold__eps_cold(eos,rhob_out, P_cold, eps_cold);
> h = 1.0 + eps_cold + P_cold/rhob_out;
>
> /* Set W based on eq. (A60) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .-------------------------------------------------------.
> * | W = psi^{-6} * sqrt( S_fluid^{2} + (rho_star*h)^{2} ) |
> * .-------------------------------------------------------.
> */
> W = sqrt( Sf20 + SQR(CONSERVS[RHOSTAR]*h))*Psim6;
>
> /* Then update S_{fluid}^{2} using eq. (A61) in Etienne et al. (2011)
> * https://arxiv.org/pdf/1112.0568.pdf
> * .---------------------------------------------------------------------------.
> * | S_fluid^{2} = ( W^{2}*S^{2} + (B.S)^2*(B^{2} + 2W) )/( ( W + B^{2} )^{2} )|
> * .---------------------------------------------------------------------------.
> */
> Sf2 = (SQR(W)*sdots + BbardotS2*(B2bar + 2.0*W))/SQR(W+B2bar);
70,111c176
< static inline void compute_P_cold__eps_cold__dPcold_drho__eps_th__h__gamma_cold(CCTK_REAL *U, eos_struct &eos,
< CCTK_REAL &P_cold,CCTK_REAL &eps_cold,CCTK_REAL &dPcold_drho,CCTK_REAL &eps_th,CCTK_REAL &h,
< CCTK_REAL &gamma_cold) {
< // This code handles equations of state of the form defined
< // in Eqs 13-16 in http://arxiv.org/pdf/0802.0200.pdf
<
< if(U[RHOB]==0) {
< P_cold = 0.0;
< eps_cold = 0.0;
< dPcold_drho = 0.0;
< eps_th = 0.0;
< h = 0.0;
< gamma_cold = eos.gamma_tab[0];
< return;
< }
<
< CCTK_REAL U_RHOB_inv = 1.0/U[RHOB];
<
< if(eos.neos==1) {
< // Eq. 14 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{cold} = K_i rho_i^{\Gamma_i}
< P_cold = eos.k_tab[0]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[0]);
< // Eq. 16 of http://arxiv.org/pdf/0802.0200.pdf :
< // \epsilon_{cold} = \int ( P_{cold}(rho) / rho^2 ) drho
< // = \int ( K_0 \rho^{\Gamma_0 - 2} ) drho
< // = ( K_0 \rho^{\Gamma_0 - 1} ) / (\Gamma_0 - 1)
< // = ( P_{cold} / rho ) / (\Gamma_0 - 1)
< eps_cold = P_cold*U_RHOB_inv/(eos.gamma_tab[0]-1.0);
< // dPcold/drho = K_i \Gamma_i rho_i^{\Gamma_i-1} = \Gamma_i P_{cold} / rho
< dPcold_drho = eos.gamma_tab[0]*P_cold*U_RHOB_inv;
< // Eq. 15 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{th} = (\Gamma_{th} - 1) \rho_0 \epsilon_{th},
< // Eq. 13 of http://arxiv.org/pdf/0802.0200.pdf :
< // P_{th} = P - P_{cold}
< // -> P - P_{cold} = (\Gamma_{th} - 1) \rho_0 \epsilon_{th}
< // -> \epsilon_{th} = ( P - P_{cold} ) / [ (\Gamma_{th} - 1) \rho_0 ]
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< // Just below Eq. 16 in http://arxiv.org/pdf/astro-ph/0503420.pdf :
< // h = 1 + \epsilon + P/rho
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[0];
< return;
---
> if ( fabs(W-W0) < W*tol && fabs(Sf20-Sf2) < Sf2*tol) fontcheck=false;
113,125c178,187
<
< // See comments above for the eos.neos==1 case for relevant
< // equations & references; the extension to arbitrary "nn"
< // is straightforward.
< for(int nn=1;nn<eos.neos;nn++) {
< if (U[RHOB] <= eos.rho_tab[nn] && U[RHOB] > eos.rho_tab[nn-1]) {
< P_cold = eos.k_tab[nn]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[nn]);
< eps_cold = eos.eps_tab[nn-1] + (P_cold*U_RHOB_inv - eos.P_tab[nn-1]/eos.rho_tab[nn-1])/(eos.gamma_tab[nn]-1.0);
< dPcold_drho = eos.gamma_tab[nn]*P_cold*U_RHOB_inv;
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[nn];
< }
---
> //////////////////////
> // OUTER LOOP END //
> //////////////////////
>
> /* If the code converged before the max
> * number of iterations were exceeded,
> * return 0, otherwise return 1.
> */
> if(fontcheck || itcount >= maxits) {
> return 1;
127,133c189,190
< if (U[RHOB] > eos.rho_tab[eos.neos-1]) {
< P_cold = eos.k_tab[eos.neos]*fasterpow_ppm_reconstruct(U[RHOB],eos.gamma_tab[eos.neos]);
< eps_cold = eos.eps_tab[eos.neos-1] + (P_cold*U_RHOB_inv - eos.P_tab[eos.neos-1]/eos.rho_tab[eos.neos-1])/(eos.gamma_tab[eos.neos]-1.0);
< dPcold_drho = eos.gamma_tab[eos.neos]*P_cold*U_RHOB_inv;
< eps_th = (U[PRESSURE] - P_cold)/(eos.gamma_th-1.0)*U_RHOB_inv;
< h = 1.0 + eps_cold + eps_th + U[PRESSURE]*U_RHOB_inv;
< gamma_cold = eos.gamma_tab[eos.neos];
---
> else {
> return 0;
###Markdown
Step 11: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-IllinoisGRMHD__inlined_functions.pdf](Tutorial-IllinoisGRMHD__inlined_functions.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means).
###Code
latex_nrpy_style_path = os.path.join(nrpy_dir_path,"latex_nrpy_style.tplx")
#!jupyter nbconvert --to latex --template $latex_nrpy_style_path Tutorial-IllinoisGRMHD__inlined_functions.ipynb
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__inlined_functions.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
_____no_output_____ |
Algorithms/Graph Algorithms/Sum of dependencies in a graph.ipynb | ###Markdown
Sum of Dependencies in a graph
- Given a directed and connected graph with n nodes.
- If there is an edge from u to v then u depends on v.

Find the sum of dependencies for every node.

Example
- A depends on C and D
- B depends on D
- C depends on D
- D depends on none
- So the answer is 2 + 1 + 1 = 4
###Code
class Graph:
    def __init__(self, size):
        # adjacency list: data[u] holds every node that u depends on
        self.data = []
        for i in range(size):
            self.data.append([])
    def addEdge(self, u, v):
        self.data[u].append(v)
    def sum_of_dependencies(self):
        # total number of dependencies = sum of out-degrees
        count = 0
        for i in self.data:
            count += len(i)
        return count
g = Graph(4)
print(g.data)
g.addEdge(0, 2)   # A depends on C
g.addEdge(0, 3)   # A depends on D
g.addEdge(1, 3)   # B depends on D
g.addEdge(2, 3)   # C depends on D
print(g.data)
print(g.sum_of_dependencies())
###Output
4
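###Markdown
Since the total is just the sum of out-degrees, the same answer can be read off directly from any adjacency-list representation; below is a minimal sketch using a plain dict (the variable names are only illustrative).
###Code
# The example graph from above, stored as node -> list of nodes it depends on
deps = {'A': ['C', 'D'], 'B': ['D'], 'C': ['D'], 'D': []}
total = sum(len(targets) for targets in deps.values())   # sum of out-degrees
print(total)   # 2 + 1 + 1 + 0 = 4
###Output
_____no_output_____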
|
PANDAS/DATAFRAMES/Plots in a dataframe.ipynb | ###Markdown
Plots in a dataframe Plot one column versus the other
###Code
import pandas as pd

df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]})
ax = df.plot(x='lab', y='val', rot=0)
###Output
_____no_output_____
###Markdown
Hist on a certain column
###Code
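# 'DF' is assumed to be an already-loaded DataFrame that contains a 'Number of bounces' column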
DF.hist(column='Number of bounces')
###Output
_____no_output_____
###Markdown
* Now, if we want to modify the axes, we need to access the array of axes returned by the `hist` method
###Code
axList=DF.hist(column='Number of bounces')
axList[0][0].set_xlim((0,5))
###Output
_____no_output_____
###Markdown
* Now, if we want to modify the figure size
###Code
DF.hist(column='Number of bounces',figsize=(5,5))
###Output
_____no_output_____
###Markdown
Hists based on the categories of a certain categorical variable
###Code
import numpy as np
#creating the dataframe
x = ['A']*300 + ['B']*400 + ['C']*300
y = np.random.randn(1000)
df = pd.DataFrame({'Letter':x, 'N':y})
#plotting
df['N'].hist(by=df['Letter'])
###Output
_____no_output_____
###Markdown
* Now, if we want to modify the axes, we need to access the array of axes returned by the `hist` method
###Code
axList=df['N'].hist(by=df['Letter'])
axList[0][0].set_xlim((0,5))
axList[0][1].set_xlim((0,5))
###Output
_____no_output_____
###Markdown
Multiple (two or more) histograms
###Code
import matplotlib.pyplot as plt
x1 = np.random.randn(1000)
x2 = np.random.randn(1000)
colors = ['#E69F00', '#56B4E9']
names = ['var1', 'var2']
plt.hist([x1, x2], bins=10, density=True, color=colors, label=names)
plt.legend()
plt.xlabel('2 variables')
plt.ylabel('Normalized Freq')
plt.title('Side-by-Side Histogram with Random data')
###Output
_____no_output_____
###Markdown
Now, the same but with a stacked histogram:
###Code
plt.hist([x1, x2], bins=10, density=True, color=colors, label=names, stacked=True)
###Output
_____no_output_____
###Markdown
Creating a bar plot
###Code
df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]})
ax = df.plot.bar(x='lab', y='val', rot=0)
###Output
_____no_output_____
###Markdown
* Now, we can plot several numerical series grouped by categories
###Code
speed = [0.1, 17.5, 40, 48, 52, 69, 88]
lifespan = [2, 8, 70, 1.5, 25, 12, 28]
index = ['snail', 'pig', 'elephant',
'rabbit', 'giraffe', 'coyote', 'horse']
df = pd.DataFrame({'speed': speed,
'lifespan': lifespan}, index=index)
ax = df.plot.bar(rot=0)
###Output
_____no_output_____
###Markdown
The figure can be split by column with subplots=True.
###Code
axes = df.plot.bar(rot=0, subplots=True)
axes[1].legend(loc=2)
###Output
_____no_output_____
###Markdown
Creating barplot with seaborn
* Here we show how to create a countplot on a certain categorical variable
###Code
import seaborn as sns
sns.set(style="darkgrid")
titanic = sns.load_dataset("titanic")
titanic.head(5)
ax = sns.countplot(x="class", data=titanic)
###Output
_____no_output_____
###Markdown
* Countplot for two categorical variables
###Code
ax = sns.countplot(x="class", hue="who", data=titanic)
###Output
_____no_output_____
###Markdown
Now, let's create the barplot with the percentages
###Code
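# For each 'class', compute the within-class percentage of each 'who' category,
# then flatten the result back into a regular DataFrame so seaborn can plot it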
titanic_counts = (titanic.groupby(['class'])['who']
.value_counts(normalize=True)
.rename('percentage')
.mul(100)
.reset_index()
.sort_values('who'))
p = sns.barplot(x="who", y="percentage", hue="class", data=titanic_counts)
###Output
_____no_output_____
###Markdown
Creating scatter plots to see the correlation between numerical variables
###Code
df.plot(kind="scatter", x="speed", y="lifespan", alpha=0.1)
###Output
_____no_output_____
###Markdown
* Now, let's create a scatter plot matrix (here `housing` is assumed to be an already-loaded DataFrame and `save_fig` a helper defined elsewhere):

    from pandas.plotting import scatter_matrix
    attributes = ["median_house_value", "median_income", "total_rooms", "housing_median_age"]
    scatter_matrix(housing[attributes], figsize=(12, 8))
    save_fig("scatter_matrix_plot")

Creating a boxplot
###Code
import numpy as np
np.random.seed(1234)
df = pd.DataFrame(np.random.randn(10,4),
columns=['Col1', 'Col2', 'Col3', 'Col4'])
boxplot = df.boxplot(column=['Col1', 'Col2', 'Col3'])
###Output
_____no_output_____
###Markdown
Creating a boxplot with seaborn * Different boxplots for each column in the DF
###Code
import numpy as np; np.random.seed(42)
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.DataFrame(data = np.random.random(size=(4,4)), columns = ['A','B','C','D'])
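# pd.melt reshapes the wide frame into long format ('variable'/'value' columns), one row per (column, value) pair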
sns.boxplot(x="variable", y="value", data=pd.melt(df))
plt.show()
###Output
_____no_output_____ |
tutorials/streamlit_notebooks/NER_PT.ipynb | ###Markdown
[](https://githubtocolab.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/NER_PT.ipynb) **Detect entities in Portuguese text** 1. Colab Setup
###Code
!wget http://setup.johnsnowlabs.com/colab.sh -O - | bash
# !bash colab.sh
# -p is for pyspark
# -s is for spark-nlp
# !bash colab.sh -p 3.1.1 -s 3.0.1
# by default they are set to the latest
# Install Spark NLP Display for visualization
!pip install --ignore-installed spark-nlp-display
###Output
openjdk version "11.0.10" 2021-01-19
OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.18.04)
OpenJDK 64-Bit Server VM (build 11.0.10+9-Ubuntu-0ubuntu1.18.04, mixed mode, sharing)
--2021-04-05 10:00:48-- http://setup.johnsnowlabs.com/colab.sh
Resolving setup.johnsnowlabs.com (setup.johnsnowlabs.com)... 51.158.130.26
Connecting to setup.johnsnowlabs.com (setup.johnsnowlabs.com)|51.158.130.26|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp/master/scripts/colab_setup.sh [following]
--2021-04-05 10:00:48-- https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp/master/scripts/colab_setup.sh
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1593 (1.6K) [text/plain]
Saving to: ‘STDOUT’
- 100%[===================>] 1.56K --.-KB/s in 0s
2021-04-05 10:00:48 (32.6 MB/s) - written to stdout [1593/1593]
setup Colab for PySpark 3.1.1 and Spark NLP 3.0.1
[K |████████████████████████████████| 212.3MB 75kB/s
[K |████████████████████████████████| 153kB 55.2MB/s
[K |████████████████████████████████| 204kB 22.9MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
bash: colab.sh: No such file or directory
Collecting spark-nlp-display
[?25l Downloading https://files.pythonhosted.org/packages/cf/6a/e822cccbbc480e7140128836fda52bf56d131adc4f7f83ee1dd71afe7797/spark_nlp_display-1.5-py3-none-any.whl (94kB)
[K |████████████████████████████████| 102kB 7.1MB/s
[?25hCollecting spark-nlp
Using cached https://files.pythonhosted.org/packages/e5/31/6e0f5cff049aa1f5b9bf06754001d9986211b45ca9165938adc8bed2fdf6/spark_nlp-3.0.1-py2.py3-none-any.whl
Collecting numpy
[?25l Downloading https://files.pythonhosted.org/packages/73/ef/8967d406f3f85018ceb5efab50431e901683188f1741ceb053efcab26c87/numpy-1.20.2-cp37-cp37m-manylinux2010_x86_64.whl (15.3MB)
[K |████████████████████████████████| 15.3MB 329kB/s
[?25hCollecting ipython
[?25l Downloading https://files.pythonhosted.org/packages/c9/b1/82cbe2b856386f44f37fdae54d9b425813bd86fe33385c9d658d64826098/ipython-7.22.0-py3-none-any.whl (785kB)
[K |████████████████████████████████| 788kB 50.5MB/s
[?25hCollecting svgwrite==1.4
[?25l Downloading https://files.pythonhosted.org/packages/1c/85/1dc25b36c3ac4f3fe285d33065fc0f2ea7bdfb9209d6369e01a3e8ef6252/svgwrite-1.4-py3-none-any.whl (66kB)
[K |████████████████████████████████| 71kB 8.0MB/s
[?25hCollecting pandas
[?25l Downloading https://files.pythonhosted.org/packages/f3/d4/3fe3b5bf9886912b64ef040040aec356fa48825e5a829a84c2667afdf952/pandas-1.2.3-cp37-cp37m-manylinux1_x86_64.whl (9.9MB)
[K |████████████████████████████████| 9.9MB 43.2MB/s
[?25hCollecting prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0
[?25l Downloading https://files.pythonhosted.org/packages/eb/e6/4b4ca4fa94462d4560ba2f4e62e62108ab07be2e16a92e594e43b12d3300/prompt_toolkit-3.0.18-py3-none-any.whl (367kB)
[K |████████████████████████████████| 368kB 44.6MB/s
[?25hCollecting pygments
[?25l Downloading https://files.pythonhosted.org/packages/3a/80/a52c0a7c5939737c6dca75a831e89658ecb6f590fb7752ac777d221937b9/Pygments-2.8.1-py3-none-any.whl (983kB)
[K |████████████████████████████████| 993kB 42.0MB/s
[?25hCollecting jedi>=0.16
[?25l Downloading https://files.pythonhosted.org/packages/f9/36/7aa67ae2663025b49e8426ead0bad983fee1b73f472536e9790655da0277/jedi-0.18.0-py2.py3-none-any.whl (1.4MB)
[K |████████████████████████████████| 1.4MB 45.6MB/s
[?25hCollecting setuptools>=18.5
[?25l Downloading https://files.pythonhosted.org/packages/9e/d4/b99a960314121a003e9f39c61dfde01a1010bb47661e193a7722f7f32d52/setuptools-54.2.0-py3-none-any.whl (785kB)
[K |████████████████████████████████| 788kB 41.5MB/s
[?25hCollecting pickleshare
Downloading https://files.pythonhosted.org/packages/9a/41/220f49aaea88bc6fa6cba8d05ecf24676326156c23b991e80b3f2fc24c77/pickleshare-0.7.5-py2.py3-none-any.whl
Collecting backcall
Downloading https://files.pythonhosted.org/packages/4c/1c/ff6546b6c12603d8dd1070aa3c3d273ad4c07f5771689a7b69a550e8c951/backcall-0.2.0-py2.py3-none-any.whl
Collecting pexpect>4.3; sys_platform != "win32"
[?25l Downloading https://files.pythonhosted.org/packages/39/7b/88dbb785881c28a102619d46423cb853b46dbccc70d3ac362d99773a78ce/pexpect-4.8.0-py2.py3-none-any.whl (59kB)
[K |████████████████████████████████| 61kB 5.5MB/s
[?25hCollecting traitlets>=4.2
[?25l Downloading https://files.pythonhosted.org/packages/f6/7d/3ecb0ebd0ce8dcdfa7bd47ab85c1d4a521e6770ef283d0824f5804994dfe/traitlets-5.0.5-py3-none-any.whl (100kB)
[K |████████████████████████████████| 102kB 11.8MB/s
[?25hCollecting decorator
Downloading https://files.pythonhosted.org/packages/3e/c4/80311bb66f2a772e9e9d76c54933d0fdbf3202ad194c6282b4c8687ddb32/decorator-5.0.5-py3-none-any.whl
Collecting pytz>=2017.3
[?25l Downloading https://files.pythonhosted.org/packages/70/94/784178ca5dd892a98f113cdd923372024dc04b8d40abe77ca76b5fb90ca6/pytz-2021.1-py2.py3-none-any.whl (510kB)
[K |████████████████████████████████| 512kB 45.7MB/s
[?25hCollecting python-dateutil>=2.7.3
[?25l Downloading https://files.pythonhosted.org/packages/d4/70/d60450c3dd48ef87586924207ae8907090de0b306af2bce5d134d78615cb/python_dateutil-2.8.1-py2.py3-none-any.whl (227kB)
[K |████████████████████████████████| 235kB 48.8MB/s
[?25hCollecting wcwidth
Downloading https://files.pythonhosted.org/packages/59/7c/e39aca596badaf1b78e8f547c807b04dae603a433d3e7a7e04d67f2ef3e5/wcwidth-0.2.5-py2.py3-none-any.whl
Collecting parso<0.9.0,>=0.8.0
[?25l Downloading https://files.pythonhosted.org/packages/a9/c4/d5476373088c120ffed82f34c74b266ccae31a68d665b837354d4d8dc8be/parso-0.8.2-py2.py3-none-any.whl (94kB)
[K |████████████████████████████████| 102kB 11.1MB/s
[?25hCollecting ptyprocess>=0.5
Downloading https://files.pythonhosted.org/packages/22/a6/858897256d0deac81a172289110f31629fc4cee19b6f01283303e18c8db3/ptyprocess-0.7.0-py2.py3-none-any.whl
Collecting ipython-genutils
Downloading https://files.pythonhosted.org/packages/fa/bc/9bd3b5c2b4774d5f33b2d544f1460be9df7df2fe42f352135381c347c69a/ipython_genutils-0.2.0-py2.py3-none-any.whl
Collecting six>=1.5
Downloading https://files.pythonhosted.org/packages/ee/ff/48bde5c0f013094d729fe4b0316ba2a24774b3ff1c52d924a8a4cb04078a/six-1.15.0-py2.py3-none-any.whl
[31mERROR: tensorflow 2.4.1 has requirement numpy~=1.19.2, but you'll have numpy 1.20.2 which is incompatible.[0m
[31mERROR: nbclient 0.5.3 has requirement jupyter-client>=6.1.5, but you'll have jupyter-client 5.3.5 which is incompatible.[0m
[31mERROR: moviepy 0.2.3.5 has requirement decorator<5.0,>=4.0.2, but you'll have decorator 5.0.5 which is incompatible.[0m
[31mERROR: jupyter-console 5.2.0 has requirement prompt-toolkit<2.0.0,>=1.0.0, but you'll have prompt-toolkit 3.0.18 which is incompatible.[0m
[31mERROR: google-colab 1.0.0 has requirement ipython~=5.5.0, but you'll have ipython 7.22.0 which is incompatible.[0m
[31mERROR: google-colab 1.0.0 has requirement pandas~=1.1.0; python_version >= "3.0", but you'll have pandas 1.2.3 which is incompatible.[0m
[31mERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.[0m
[31mERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.[0m
Installing collected packages: spark-nlp, numpy, wcwidth, prompt-toolkit, pygments, parso, jedi, setuptools, pickleshare, backcall, ptyprocess, pexpect, ipython-genutils, traitlets, decorator, ipython, svgwrite, pytz, six, python-dateutil, pandas, spark-nlp-display
Successfully installed backcall-0.2.0 decorator-5.0.5 ipython-7.22.0 ipython-genutils-0.2.0 jedi-0.18.0 numpy-1.20.2 pandas-1.2.3 parso-0.8.2 pexpect-4.8.0 pickleshare-0.7.5 prompt-toolkit-3.0.18 ptyprocess-0.7.0 pygments-2.8.1 python-dateutil-2.8.1 pytz-2021.1 setuptools-54.2.0 six-1.15.0 spark-nlp-3.0.1 spark-nlp-display-1.5 svgwrite-1.4 traitlets-5.0.5 wcwidth-0.2.5
###Markdown
2. Start the Spark session
###Code
import json
import pandas as pd
import numpy as np
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
3. Select the DL model
###Code
# If you change the model, re-run all the cells below.
# Applicable models: wikiner_840B_300, wikiner_6B_300, wikiner_6B_100
MODEL_NAME = "wikiner_840B_300"
###Output
_____no_output_____
###Markdown
4. Some sample examples
###Code
# Enter examples to be transformed as strings in this list
text_list = [
"""William Henry Gates III (nascido em 28 de outubro de 1955) é um magnata americano de negócios, desenvolvedor de software, investidor e filantropo. Ele é mais conhecido como co-fundador da Microsoft Corporation. Durante sua carreira na Microsoft, Gates ocupou os cargos de presidente, diretor executivo (CEO), presidente e diretor de arquitetura de software, além de ser o maior acionista individual até maio de 2014. Ele é um dos empreendedores e pioneiros mais conhecidos da revolução dos microcomputadores nas décadas de 1970 e 1980. Nascido e criado em Seattle, Washington, Gates co-fundou a Microsoft com o amigo de infância Paul Allen em 1975, em Albuquerque, Novo México; tornou-se a maior empresa de software de computador pessoal do mundo. Gates liderou a empresa como presidente e CEO até deixar o cargo em janeiro de 2000, mas ele permaneceu como presidente e tornou-se arquiteto-chefe de software. No final dos anos 90, Gates foi criticado por suas táticas de negócios, que foram consideradas anticompetitivas. Esta opinião foi confirmada por várias decisões judiciais. Em junho de 2006, Gates anunciou que iria passar para um cargo de meio período na Microsoft e trabalhar em período integral na Fundação Bill & Melinda Gates, a fundação de caridade privada que ele e sua esposa, Melinda Gates, estabeleceram em 2000. [ 9] Ele gradualmente transferiu seus deveres para Ray Ozzie e Craig Mundie. Ele deixou o cargo de presidente da Microsoft em fevereiro de 2014 e assumiu um novo cargo como consultor de tecnologia para apoiar a recém-nomeada CEO Satya Nadella.""",
"""A Mona Lisa é uma pintura a óleo do século XVI, criada por Leonardo. É realizada no Louvre, em Paris."""
]
###Output
_____no_output_____
###Markdown
5. Define Spark NLP pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text') \
.setOutputCol('document')
tokenizer = Tokenizer() \
.setInputCols(['document']) \
.setOutputCol('token')
# The wikiner_840B_300 is trained with glove_840B_300, so the embeddings in the
# pipeline should match. Same applies for the other available models.
if MODEL_NAME == "wikiner_840B_300":
embeddings = WordEmbeddingsModel.pretrained('glove_840B_300', lang='xx') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
elif MODEL_NAME == "wikiner_6B_300":
embeddings = WordEmbeddingsModel.pretrained('glove_6B_300', lang='xx') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
elif MODEL_NAME == "wikiner_6B_100":
embeddings = WordEmbeddingsModel.pretrained('glove_100d') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
ner_model = NerDLModel.pretrained(MODEL_NAME, 'pt') \
.setInputCols(['document', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter() \
.setInputCols(['document', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
embeddings,
ner_model,
ner_converter
])
###Output
glove_840B_300 download started this may take some time.
Approximate size to download 2.3 GB
[OK!]
wikiner_840B_300 download started this may take some time.
Approximate size to download 14.5 MB
[OK!]
###Markdown
6. Run the pipeline
###Code
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({'text': text_list}))
result = pipeline_model.transform(df)
###Output
_____no_output_____
###Markdown
7. Visualize results
###Code
from sparknlp_display import NerVisualizer
NerVisualizer().display(
result = result.collect()[0],
label_col = 'ner_chunk',
document_col = 'document'
)
###Output
_____no_output_____
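###Markdown
Besides the HTML rendering above, the recognized chunks can also be inspected in tabular form straight from the Spark DataFrame; the short sketch below reuses the same `result` DataFrame and the already-imported `pyspark.sql.functions as F`.
###Code
result.select(
    F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata')).alias('cols')
).select(
    F.expr("cols['0']").alias('chunk'),
    F.expr("cols['1']['entity']").alias('ner_label')
).show(truncate=False)
###Output
_____no_output_____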
###Markdown
[](https://githubtocolab.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/NER_PT.ipynb) **Detect entities in Portuguese text** 1. Colab Setup
###Code
# Install PySpark and Spark NLP
! pip install -q pyspark==3.1.2 spark-nlp
# Install Spark NLP Display lib
! pip install --upgrade -q spark-nlp-display
###Output
_____no_output_____
###Markdown
2. Start the Spark session
###Code
import json
import pandas as pd
import numpy as np
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
3. Select the DL model
###Code
# If you change the model, re-run all the cells below.
# Applicable models: wikiner_840B_300, wikiner_6B_300, wikiner_6B_100
MODEL_NAME = "wikiner_840B_300"
###Output
_____no_output_____
###Markdown
4. Some sample examples
###Code
# Enter examples to be transformed as strings in this list
text_list = [
"""William Henry Gates III (nascido em 28 de outubro de 1955) é um magnata americano de negócios, desenvolvedor de software, investidor e filantropo. Ele é mais conhecido como co-fundador da Microsoft Corporation. Durante sua carreira na Microsoft, Gates ocupou os cargos de presidente, diretor executivo (CEO), presidente e diretor de arquitetura de software, além de ser o maior acionista individual até maio de 2014. Ele é um dos empreendedores e pioneiros mais conhecidos da revolução dos microcomputadores nas décadas de 1970 e 1980. Nascido e criado em Seattle, Washington, Gates co-fundou a Microsoft com o amigo de infância Paul Allen em 1975, em Albuquerque, Novo México; tornou-se a maior empresa de software de computador pessoal do mundo. Gates liderou a empresa como presidente e CEO até deixar o cargo em janeiro de 2000, mas ele permaneceu como presidente e tornou-se arquiteto-chefe de software. No final dos anos 90, Gates foi criticado por suas táticas de negócios, que foram consideradas anticompetitivas. Esta opinião foi confirmada por várias decisões judiciais. Em junho de 2006, Gates anunciou que iria passar para um cargo de meio período na Microsoft e trabalhar em período integral na Fundação Bill & Melinda Gates, a fundação de caridade privada que ele e sua esposa, Melinda Gates, estabeleceram em 2000. [ 9] Ele gradualmente transferiu seus deveres para Ray Ozzie e Craig Mundie. Ele deixou o cargo de presidente da Microsoft em fevereiro de 2014 e assumiu um novo cargo como consultor de tecnologia para apoiar a recém-nomeada CEO Satya Nadella.""",
"""A Mona Lisa é uma pintura a óleo do século XVI, criada por Leonardo. É realizada no Louvre, em Paris."""
]
###Output
_____no_output_____
###Markdown
5. Define Spark NLP pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text') \
.setOutputCol('document')
tokenizer = Tokenizer() \
.setInputCols(['document']) \
.setOutputCol('token')
# The wikiner_840B_300 is trained with glove_840B_300, so the embeddings in the
# pipeline should match. Same applies for the other available models.
if MODEL_NAME == "wikiner_840B_300":
embeddings = WordEmbeddingsModel.pretrained('glove_840B_300', lang='xx') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
elif MODEL_NAME == "wikiner_6B_300":
embeddings = WordEmbeddingsModel.pretrained('glove_6B_300', lang='xx') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
elif MODEL_NAME == "wikiner_6B_100":
embeddings = WordEmbeddingsModel.pretrained('glove_100d') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
ner_model = NerDLModel.pretrained(MODEL_NAME, 'pt') \
.setInputCols(['document', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter() \
.setInputCols(['document', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
embeddings,
ner_model,
ner_converter
])
###Output
glove_840B_300 download started this may take some time.
Approximate size to download 2.3 GB
[OK!]
wikiner_840B_300 download started this may take some time.
Approximate size to download 14.5 MB
[OK!]
###Markdown
6. Run the pipeline
###Code
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({'text': text_list}))
result = pipeline_model.transform(df)
###Output
_____no_output_____
###Markdown
7. Visualize results
###Code
from sparknlp_display import NerVisualizer
NerVisualizer().display(
result = result.collect()[0],
label_col = 'ner_chunk',
document_col = 'document'
)
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/NER_PT.ipynb) **Detect entities in Portuguese language** 0. Colab Setup
###Code
!sudo apt-get install openjdk-8-jdk
!java -version
!pip install --ignore-installed -q pyspark==2.4.4
!pip install spark-nlp
import pandas as pd
import numpy as np
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
import json
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
###Output
_____no_output_____
###Markdown
1. Start Spark Session
###Code
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
2. Select the DL model and re-run the cells below. The dictionary below maps each NER model to the word embeddings it was trained with.
###Code
### Select the model and re-run all the cells below ####
MODEL_NAME='wikiner_6B_300'
model_dict={'wikiner_840B_300': 'glove_840B_300',
'wikiner_6B_100' : 'glove_100d',
'wikiner_6B_300': 'glove_6B_300'
}
###Output
_____no_output_____
###Markdown
3. Some sample examples
###Code
## Generating Example Files ##
text_list = ["""William Henry Gates III (nacido el 28 de octubre de 1955) es un magnate de los negocios, desarrollador de software, inversor y filántropo estadounidense. Es mejor conocido como el cofundador de Microsoft Corporation. Durante su carrera en Microsoft, Gates ocupó los cargos de presidente, director ejecutivo (CEO), presidente y arquitecto de software en jefe, y también fue el mayor accionista individual hasta mayo de 2014. Es uno de los empresarios y pioneros más conocidos de revolución de la microcomputadora de los años setenta y ochenta. Nacido y criado en Seattle, Washington, Gates cofundó Microsoft con su amigo de la infancia Paul Allen en 1975, en Albuquerque, Nuevo México; se convirtió en la compañía de software de computadora personal más grande del mundo. Gates dirigió la compañía como presidente y CEO hasta que dejó el cargo de CEO en enero de 2000, pero siguió siendo presidente y se convirtió en el arquitecto jefe de software. A fines de la década de 1990, Gates había sido criticado por sus tácticas comerciales, que se han considerado anticompetitivas. Esta opinión ha sido confirmada por numerosas sentencias judiciales. En junio de 2006, Gates anunció que haría la transición a un puesto de medio tiempo en Microsoft y trabajaría a tiempo completo en la Fundación Bill y Melinda Gates, la fundación caritativa privada que él y su esposa, Melinda Gates, establecieron en 2000. [ 9] Poco a poco transfirió sus deberes a Ray Ozzie y Craig Mundie. Renunció como presidente de Microsoft en febrero de 2014 y asumió un nuevo cargo como asesor tecnológico para apoyar al recién nombrado CEO Satya Nadella.""",
"""La Mona Lisa es una pintura al óleo del siglo XVI creada por Leonardo. Se celebra en el Louvre de París."""
]
###Output
_____no_output_____
###Markdown
4. Define Spark NLP pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
lang = 'xx'
if model_dict[MODEL_NAME] == 'glove_100d':
lang='en'
embeddings = WordEmbeddingsModel.pretrained(model_dict[MODEL_NAME], lang=lang).\
setInputCols(["document", 'token']).\
setOutputCol("embeddings")
public_ner = NerDLModel.pretrained(MODEL_NAME, 'pt') \
.setInputCols(["document", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["document", "token", "ner"]) \
.setOutputCol("ner_chunk")
nlpPipeline = Pipeline(stages=[ documentAssembler,
tokenizer,
embeddings,
public_ner,
ner_converter
])
###Output
_____no_output_____
###Markdown
5. Run the pipeline
###Code
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({"text":text_list}))
result = pipelineModel.transform(df)
###Output
_____no_output_____
###Markdown
6. Visualize Results
###Code
result.select(F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata')).alias("cols")) \
.select(F.expr("cols['0']").alias("chunk"),
F.expr("cols['1']['entity']").alias("ner_label")).show(truncate=False)
result = result.toPandas()
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/NER_PT.ipynb) **Detect entities in Portuguese text** 1. Colab Setup
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install SparkNLP
! pip install --ignore-installed spark-nlp
###Output
_____no_output_____
###Markdown
2. Start the Spark session
###Code
import os
import json
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
import pandas as pd
import numpy as np
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
3. Select the DL model
###Code
# If you change the model, re-run all the cells below.
# Applicable models: wikiner_840B_300, wikiner_6B_300, wikiner_6B_100
MODEL_NAME = "wikiner_840B_300"
###Output
_____no_output_____
###Markdown
4. Some sample examples
###Code
# Enter examples to be transformed as strings in this list
text_list = [
"""William Henry Gates III (nascido em 28 de outubro de 1955) é um magnata americano de negócios, desenvolvedor de software, investidor e filantropo. Ele é mais conhecido como co-fundador da Microsoft Corporation. Durante sua carreira na Microsoft, Gates ocupou os cargos de presidente, diretor executivo (CEO), presidente e diretor de arquitetura de software, além de ser o maior acionista individual até maio de 2014. Ele é um dos empreendedores e pioneiros mais conhecidos da revolução dos microcomputadores nas décadas de 1970 e 1980. Nascido e criado em Seattle, Washington, Gates co-fundou a Microsoft com o amigo de infância Paul Allen em 1975, em Albuquerque, Novo México; tornou-se a maior empresa de software de computador pessoal do mundo. Gates liderou a empresa como presidente e CEO até deixar o cargo em janeiro de 2000, mas ele permaneceu como presidente e tornou-se arquiteto-chefe de software. No final dos anos 90, Gates foi criticado por suas táticas de negócios, que foram consideradas anticompetitivas. Esta opinião foi confirmada por várias decisões judiciais. Em junho de 2006, Gates anunciou que iria passar para um cargo de meio período na Microsoft e trabalhar em período integral na Fundação Bill & Melinda Gates, a fundação de caridade privada que ele e sua esposa, Melinda Gates, estabeleceram em 2000. [ 9] Ele gradualmente transferiu seus deveres para Ray Ozzie e Craig Mundie. Ele deixou o cargo de presidente da Microsoft em fevereiro de 2014 e assumiu um novo cargo como consultor de tecnologia para apoiar a recém-nomeada CEO Satya Nadella.""",
"""A Mona Lisa é uma pintura a óleo do século XVI, criada por Leonardo. É realizada no Louvre, em Paris."""
]
###Output
_____no_output_____
###Markdown
5. Define Spark NLP pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text') \
.setOutputCol('document')
tokenizer = Tokenizer() \
.setInputCols(['document']) \
.setOutputCol('token')
# The wikiner_840B_300 is trained with glove_840B_300, so the embeddings in the
# pipeline should match. Same applies for the other available models.
if MODEL_NAME == "wikiner_840B_300":
embeddings = WordEmbeddingsModel.pretrained('glove_840B_300', lang='xx') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
elif MODEL_NAME == "wikiner_6B_300":
embeddings = WordEmbeddingsModel.pretrained('glove_6B_300', lang='xx') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
elif MODEL_NAME == "wikiner_6B_100":
embeddings = WordEmbeddingsModel.pretrained('glove_100d') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
ner_model = NerDLModel.pretrained(MODEL_NAME, 'pt') \
.setInputCols(['document', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter() \
.setInputCols(['document', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
embeddings,
ner_model,
ner_converter
])
###Output
_____no_output_____
###Markdown
6. Run the pipeline
###Code
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({'text': text_list}))
result = pipeline_model.transform(df)
###Output
_____no_output_____
###Markdown
7. Visualize results
###Code
result.select(
F.explode(
F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata')
).alias("cols")
).select(
F.expr("cols['0']").alias('chunk'),
F.expr("cols['1']['entity']").alias('ner_label')
).show(truncate=False)
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/NER_PT.ipynb) **Detect entities in Portuguese text** 1. Colab Setup
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install SparkNLP
! pip install --ignore-installed spark-nlp
###Output
openjdk version "11.0.8" 2020-07-14
OpenJDK Runtime Environment (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1)
OpenJDK 64-Bit Server VM (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1, mixed mode, sharing)
[K |████████████████████████████████| 215.7MB 56kB/s
[K |████████████████████████████████| 204kB 42.5MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
Collecting spark-nlp
[?25l Downloading https://files.pythonhosted.org/packages/b5/a2/5c2e18a65784442ded6f6c58af175ca4d99649337de569fac55b04d7ed8e/spark_nlp-2.5.5-py2.py3-none-any.whl (124kB)
[K |████████████████████████████████| 133kB 2.7MB/s
[?25hInstalling collected packages: spark-nlp
Successfully installed spark-nlp-2.5.5
###Markdown
2. Start the Spark session
###Code
import os
import json
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
import pandas as pd
import numpy as np
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
3. Select the DL model
###Code
# If you change the model, re-run all the cells below.
# Applicable models: wikiner_840B_300, wikiner_6B_300, wikiner_6B_100
MODEL_NAME = "wikiner_840B_300"
###Output
_____no_output_____
###Markdown
4. Some sample examples
###Code
# Enter examples to be transformed as strings in this list
text_list = [
"""William Henry Gates III (nascido em 28 de outubro de 1955) é um magnata americano de negócios, desenvolvedor de software, investidor e filantropo. Ele é mais conhecido como co-fundador da Microsoft Corporation. Durante sua carreira na Microsoft, Gates ocupou os cargos de presidente, diretor executivo (CEO), presidente e diretor de arquitetura de software, além de ser o maior acionista individual até maio de 2014. Ele é um dos empreendedores e pioneiros mais conhecidos da revolução dos microcomputadores nas décadas de 1970 e 1980. Nascido e criado em Seattle, Washington, Gates co-fundou a Microsoft com o amigo de infância Paul Allen em 1975, em Albuquerque, Novo México; tornou-se a maior empresa de software de computador pessoal do mundo. Gates liderou a empresa como presidente e CEO até deixar o cargo em janeiro de 2000, mas ele permaneceu como presidente e tornou-se arquiteto-chefe de software. No final dos anos 90, Gates foi criticado por suas táticas de negócios, que foram consideradas anticompetitivas. Esta opinião foi confirmada por várias decisões judiciais. Em junho de 2006, Gates anunciou que iria passar para um cargo de meio período na Microsoft e trabalhar em período integral na Fundação Bill & Melinda Gates, a fundação de caridade privada que ele e sua esposa, Melinda Gates, estabeleceram em 2000. [ 9] Ele gradualmente transferiu seus deveres para Ray Ozzie e Craig Mundie. Ele deixou o cargo de presidente da Microsoft em fevereiro de 2014 e assumiu um novo cargo como consultor de tecnologia para apoiar a recém-nomeada CEO Satya Nadella.""",
"""A Mona Lisa é uma pintura a óleo do século XVI, criada por Leonardo. É realizada no Louvre, em Paris."""
]
###Output
_____no_output_____
###Markdown
5. Define Spark NLP pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text') \
.setOutputCol('document')
tokenizer = Tokenizer() \
.setInputCols(['document']) \
.setOutputCol('token')
# The wikiner_840B_300 is trained with glove_840B_300, so the embeddings in the
# pipeline should match. Same applies for the other available models.
if MODEL_NAME == "wikiner_840B_300":
embeddings = WordEmbeddingsModel.pretrained('glove_840B_300', lang='xx') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
elif MODEL_NAME == "wikiner_6B_300":
embeddings = WordEmbeddingsModel.pretrained('glove_6B_300', lang='xx') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
elif MODEL_NAME == "wikiner_6B_100":
embeddings = WordEmbeddingsModel.pretrained('glove_100d') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
ner_model = NerDLModel.pretrained(MODEL_NAME, 'pt') \
.setInputCols(['document', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter() \
.setInputCols(['document', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
embeddings,
ner_model,
ner_converter
])
###Output
glove_840B_300 download started this may take some time.
Approximate size to download 2.3 GB
[OK!]
wikiner_840B_300 download started this may take some time.
Approximate size to download 14.5 MB
[OK!]
###Markdown
6. Run the pipeline
###Code
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({'text': text_list}))
result = pipeline_model.transform(df)
###Output
_____no_output_____
###Markdown
7. Visualize results
###Code
result.select(
F.explode(
F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata')
).alias("cols")
).select(
F.expr("cols['0']").alias('chunk'),
F.expr("cols['1']['entity']").alias('ner_label')
).show(truncate=False)
###Output
+-----------------------+---------+
|chunk |ner_label|
+-----------------------+---------+
|William Henry Gates III|PER |
|Ele |PER |
|Microsoft Corporation |ORG |
|Durante |PER |
|Microsoft |ORG |
|Gates |PER |
|CEO |ORG |
|Nascido |PER |
|Seattle |LOC |
|Washington |LOC |
|Gates |PER |
|Microsoft |ORG |
|Paul Allen |PER |
|Albuquerque |LOC |
|Novo México |LOC |
|Gates |PER |
|CEO |ORG |
|Gates |PER |
|Esta opinião |MISC |
|Gates |PER |
+-----------------------+---------+
only showing top 20 rows
|
Visualization/visualizing_geometries.ipynb | ###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Create a geodesic polygon.
polygon = ee.Geometry.Polygon([
[[-5, 40], [65, 40], [65, 60], [-5, 60], [-5, 60]]
])
# Create a planar polygon.
planarPolygon = ee.Geometry(polygon, {}, False)
polygon = ee.FeatureCollection(polygon)
planarPolygon = ee.FeatureCollection(planarPolygon)
# Display the polygons by adding them to the map.
Map.centerObject(polygon)
Map.addLayer(polygon, {'color': 'FF0000'}, 'geodesic polygon')
Map.addLayer(planarPolygon, {'color': '000000'}, 'planar polygon')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Create a geodesic polygon.
polygon = ee.Geometry.Polygon([
[[-5, 40], [65, 40], [65, 60], [-5, 60], [-5, 60]]
])
# Create a planar polygon.
planarPolygon = ee.Geometry(polygon, {}, False)
polygon = ee.FeatureCollection(polygon)
planarPolygon = ee.FeatureCollection(planarPolygon)
# Display the polygons by adding them to the map.
Map.centerObject(polygon)
Map.addLayer(polygon, {'color': 'FF0000'}, 'geodesic polygon')
Map.addLayer(planarPolygon, {'color': '000000'}, 'planar polygon')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically install its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Create a geodesic polygon.
polygon = ee.Geometry.Polygon([
[[-5, 40], [65, 40], [65, 60], [-5, 60], [-5, 60]]
])
# Create a planar polygon.
planarPolygon = ee.Geometry(polygon, {}, False)
polygon = ee.FeatureCollection(polygon)
planarPolygon = ee.FeatureCollection(planarPolygon)
# Display the polygons by adding them to the map.
Map.centerObject(polygon)
Map.addLayer(polygon, {'color': 'FF0000'}, 'geodesic polygon')
Map.addLayer(planarPolygon, {'color': '000000'}, 'planar polygon')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Create a geodesic polygon.
polygon = ee.Geometry.Polygon([
[[-5, 40], [65, 40], [65, 60], [-5, 60], [-5, 60]]
])
# Create a planar polygon.
planarPolygon = ee.Geometry(polygon, {}, False)
polygon = ee.FeatureCollection(polygon)
planarPolygon = ee.FeatureCollection(planarPolygon)
# Display the polygons by adding them to the map.
Map.centerObject(polygon)
Map.addLayer(polygon, {'color': 'FF0000'}, 'geodesic polygon')
Map.addLayer(planarPolygon, {'color': '000000'}, 'planar polygon')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Create a geodesic polygon.
polygon = ee.Geometry.Polygon([
[[-5, 40], [65, 40], [65, 60], [-5, 60], [-5, 60]]
])
# Create a planar polygon.
planarPolygon = ee.Geometry(polygon, {}, False)
polygon = ee.FeatureCollection(polygon)
planarPolygon = ee.FeatureCollection(planarPolygon)
# Display the polygons by adding them to the map.
Map.centerObject(polygon)
Map.addLayer(polygon, {'color': 'FF0000'}, 'geodesic polygon')
Map.addLayer(planarPolygon, {'color': '000000'}, 'planar polygon')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Create a geodesic polygon.
polygon = ee.Geometry.Polygon([
[[-5, 40], [65, 40], [65, 60], [-5, 60], [-5, 60]]
])
# Create a planar polygon.
planarPolygon = ee.Geometry(polygon, {}, False)
polygon = ee.FeatureCollection(polygon)
planarPolygon = ee.FeatureCollection(planarPolygon)
# Display the polygons by adding them to the map.
Map.centerObject(polygon)
Map.addLayer(polygon, {'color': 'FF0000'}, 'geodesic polygon')
Map.addLayer(planarPolygon, {'color': '000000'}, 'planar polygon')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Create a geodesic polygon.
polygon = ee.Geometry.Polygon([
[[-5, 40], [65, 40], [65, 60], [-5, 60], [-5, 60]]
])
# Create a planar polygon.
planarPolygon = ee.Geometry(polygon, {}, False)
polygon = ee.FeatureCollection(polygon)
planarPolygon = ee.FeatureCollection(planarPolygon)
# Display the polygons by adding them to the map.
Map.centerObject(polygon)
Map.addLayer(polygon, {'color': 'FF0000'}, 'geodesic polygon')
Map.addLayer(planarPolygon, {'color': '000000'}, 'planar polygon')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
Pydeck Earth Engine IntroductionThis is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:
```
conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -y
source activate pydeck-ee
jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck
jupyter nbextension enable --sys-prefix --py pydeck
```
then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create MapNext it's time to create a map. Here we create the Earth Engine geometry objects (a geodesic `ee.Geometry.Polygon` and its planar variant) that will be displayed as layers
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
# Create a geodesic polygon.
polygon = ee.Geometry.Polygon([
[[-5, 40], [65, 40], [65, 60], [-5, 60], [-5, 60]]
])
# Create a planar polygon.
planarPolygon = ee.Geometry(polygon, {}, False)
polygon = ee.FeatureCollection(polygon)
planarPolygon = ee.FeatureCollection(planarPolygon)
# Display the polygons by adding them to the map.
ee_layers.append(EarthEngineLayer(ee_object=polygon, vis_params={'color':'FF0000'}))
ee_layers.append(EarthEngineLayer(ee_object=planarPolygon, vis_params={'color':'000000'}))
###Output
_____no_output_____
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____ |
ipython/Tools/Determine harmonic frequencies scaling factor.ipynb | ###Markdown
ARC Tools Determine harmonic frequency scaling factors for levels of theory Based on DOI: 10.1016/j.cpc.2016.09.004
###Code
from arc.utils.scale import determine_scaling_factors
levels_of_theory = ['b3lyp/6-31g*', 'wb97xd/6-311++g(d,p)', 'ccsd(t)/cc-pvtz']
harmonic_freq_scaling_factors = determine_scaling_factors(levels_of_theory)
###Output
_____no_output_____
###Markdown
ARC Tools Determine harmonic frequency scaling factors for levels of theory Based on DOI: 10.1016/j.cpc.2016.09.004 input parameters:
###Code
levels_of_theory = ['b3lyp/6-31g*', 'wb97xd/6-311++g(d,p)', 'ccsd(t)/cc-pvtz']
from arc.utils.scale import determine_scaling_factors
harmonic_freq_scaling_factors = determine_scaling_factors(levels_of_theory)
###Output
_____no_output_____
###Markdown
ARC Tools Determine harmonic frequency scaling factors for levels of theory Based on DOI: 10.1016/j.cpc.2016.09.004 input parameters:
###Code
levels_of_theory = ['b3lyp/6-31g*', 'wb97xd/6-311++g(d,p)', 'ccsd(t)/cc-pvtz']
from arc.utils.scale import determine_scaling_factors
harmonic_freq_scaling_factors = determine_scaling_factors(levels_of_theory)
###Output
_____no_output_____ |
Python_Revision.ipynb | ###Markdown
Python Variables and Types of Data Variables Create a variable with name "x" and a value of 10. Execute.
###Code
x=10
###Output
_____no_output_____
###Markdown
Tell the computer to show you the value of that variable.
###Code
###Output
_____no_output_____
###Markdown
Can you think of a second way to obtain the same result?
###Code
###Output
_____no_output_____
###Markdown
On the same line, create four new variables: a,b,c, and d, that are equal to 10, 20, 30, and 40, respectively.
###Code
###Output
_____no_output_____
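###Markdown
One possible solution sketch (multiple assignment on a single line; other valid forms exist):
###Code
a, b, c, d = 10, 20, 30, 40
###Output
_____no_output_____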
###Markdown
Tell the computer to show you the value corresponding to the variable "b".
###Code
###Output
_____no_output_____
###Markdown
Do the same for "d".
###Code
###Output
_____no_output_____
###Markdown
Numbers and Boolean Values Create a variable equal to "True".
###Code
###Output
_____no_output_____
###Markdown
Check its type.
###Code
###Output
_____no_output_____
###Markdown
Create a variable equal to 99.
###Code
###Output
_____no_output_____
###Markdown
Check its type.
###Code
###Output
_____no_output_____
###Markdown
Check the type of the value 0.99.
###Code
###Output
_____no_output_____
###Markdown
Turn 99 into a *float*.
###Code
###Output
_____no_output_____
###Markdown
Turn 0.99 into an integer. What value did you get?
###Code
###Output
_____no_output_____
###Markdown
Strings Assign the value of 100 to the variable "m".
###Code
###Output
_____no_output_____
###Markdown
With the help of the variable "m", write one line of code where the output after execution would be *100 days*.*Hint:* *You could provide four answers to this question!*
###Code
###Output
_____no_output_____
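###Markdown
A possible solution, assuming m = 100 was assigned in the previous exercise (the hint suggests several equally valid variants, e.g. concatenation with str(m)):
###Code
print(m, 'days')
# str(m) + ' days'   # another possible variant
###Output
_____no_output_____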
###Markdown
Produce an output equal to *It's cool, isn't it?*
###Code
###Output
_____no_output_____
###Markdown
Fix the string below.
###Code
'Don't be shy
###Output
_____no_output_____
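###Markdown
A possible fix: either switch to double quotes or escape the apostrophe:
###Code
"Don't be shy"
# or: 'Don\'t be shy'
###Output
_____no_output_____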
###Markdown
Produce an output equal to *Click "OK"*.
###Code
###Output
_____no_output_____
###Markdown
Include a plus sign in your line of code to produce *'Big Houses'*.
###Code
###Output
_____no_output_____
###Markdown
Include a trailing comma in your line of code to produce *Big Houses*.
###Code
###Output
_____no_output_____
###Markdown
Basic Python Syntax Arithmetic operators Combine 15 and 23.
###Code
###Output
_____no_output_____
###Markdown
Subtract 50 from 26.
###Code
###Output
_____no_output_____
###Markdown
Divide 20 by 4.
###Code
###Output
_____no_output_____
###Markdown
Divide 22 by 4.
###Code
###Output
_____no_output_____
###Markdown
Obtain the remainder of the division of 22 by 4.
###Code
###Output
_____no_output_____
###Markdown
Divide the float 22 by 4.
###Code
###Output
_____no_output_____
###Markdown
Multiply 6 by 8.
###Code
###Output
_____no_output_____
###Markdown
Raise 15 to the power of 2.
###Code
###Output
_____no_output_____
###Markdown
The double-equality sign Demonstrate that 100 is not equal to 98.
###Code
###Output
_____no_output_____
###Markdown
Reassign Values Assign the value of 14 to a variable p.
###Code
###Output
_____no_output_____
###Markdown
Calculate p + 10.
###Code
###Output
_____no_output_____
###Markdown
Now, assign 30 to the variable p.
###Code
###Output
_____no_output_____
###Markdown
Calculate p + 10.
###Code
###Output
_____no_output_____
###Markdown
Observe how the value of p is always the last one you have assigned. Line Continuation Add a backslash in the code below, so it is a one-line code. Observe the change in the result.
###Code
15 + 31
- 26
###Output
_____no_output_____
###Markdown
Indexing Elements Extract the letter 'B' from "Bingo!".
###Code
###Output
_____no_output_____
###Markdown
Extract the letter "u" from "Constitution".
###Code
###Output
_____no_output_____
###Markdown
Structure Your Code with Indentation Use indentation properly to print the result of the function with an argument of 3.
###Code
def ten(x):
x = 10
return x
print (ten(3))
###Output
_____no_output_____
###Markdown
Operators Comparison Operators Verify that 25 is smaller than 30.
###Code
###Output
_____no_output_____
###Markdown
Verify that 5 multiplied by 3 is less than or equal to 5 to the power of 3.
###Code
###Output
_____no_output_____
###Markdown
Verify that 100 is equal to 10 square.
###Code
###Output
_____no_output_____
###Markdown
Verify that 53 is not equal to 46.
###Code
###Output
_____no_output_____
###Markdown
Logical and Identity Operators *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).* Check whether the following code is True or False. False or not True and not False
###Code
###Output
_____no_output_____
###Markdown
True and not False and True or not False
###Code
###Output
_____no_output_____
###Markdown
True or False and False
###Code
###Output
_____no_output_____
###Markdown
False and True or False
###Code
###Output
_____no_output_____
###Markdown
Using an identity operator, verify that 10 is not the same as 12.
###Code
###Output
_____no_output_____
###Markdown
Using an identity operator, verify that 50 is the same as 50.
###Code
###Output
_____no_output_____
###Markdown
Conditional Statements Introduction to the IF statement Create a two-line code that prints "The condition has been satisfied" if 5 is greater than 2.
###Code
###Output
_____no_output_____
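###Markdown
One possible two-line solution:
###Code
if 5 > 2:
    print("The condition has been satisfied")
###Output
_____no_output_____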
###Markdown
Assign 10 to the variable x and 25 to the variable y. In the same cell, create 2 conditional statements. Let the first one print "Both conditions are correct" if x is greater than 3 and y is greater than 13. Let the second one print "At least one of the conditions is false" if x is less than or equal to 3 or y is less than or equal to 13. Change the values assigned to x and y and re-run the cell to verify your code still works.
###Code
###Output
_____no_output_____
###Markdown
Add an ELSE Statement Let x represent the number of orders received during a certain day. Assign 102 to x. Create a program that prints "A busy day" if x is greater than 100, and "A calm day" otherwise. Change x to 97 to verify your code works properly.
###Code
###Output
_____no_output_____
###Markdown
Else if, for Brief - ELIF Assign 200 to x.Create the following piece of code:If x > 200, print out "Big"; If x > 100 and x <= 200, print out "Average"; and If x <= 100, print out "Small".Use the If, Elif, and Else keywords in your code.
###Code
###Output
_____no_output_____
###Markdown
Change the initial value of x to see how your output will vary. ****** Keep the first two conditions of the previous code. Add a new ELIF statement, so that, eventually, the program prints "Small" if x >= 0 and x <= 100, and "Negative" if x < 0. Let x carry the value of 50 and then of -50 to check if your code is correct.
###Code
###Output
_____no_output_____
###Markdown
Functions Creating a Function with a Parameter Define a function that returns a value equal to its argument multiplied by 2.
###Code
###Output
_____no_output_____
###Markdown
Define a function that returns a float value equal to its argument divided by 2.
###Code
###Output
_____no_output_____
###Markdown
Another Way to Define a Function Define a function that states the value of the argument accompanied by the phrase "Raised to the power of 2:" and returns a value equal to its argument raised to the power of 2. This time, use a new variable, called "result", in the body of the Function. Call the function with some argument to verify it works properly.*Hint: Your knowledge about stating multiple elements on a line can be of great help in solving this exercise!*
###Code
###Output
_____no_output_____
###Markdown
Using a Function in Another Function Define a function that adds 5 to the parameter. Then, define another function that will multiply the newly obtained number by 3.Verify your code was correct by calling the second function with an argument of 5. Was your output equal to 30?
###Code
###Output
_____no_output_____
###Markdown
Combining Conditional Statements and Functions Define a function, called **compare_the_two()**, with two arguments. If the first one is greater than the second one, let it print "Greater". If the second one is greater, it should print "Less". Let it print "Equal" if the two values are the same number.
###Code
###Output
_____no_output_____
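###Markdown
A possible solution sketch:
###Code
def compare_the_two(a, b):
    if a > b:
        print("Greater")
    elif a < b:
        print("Less")
    else:
        print("Equal")
compare_the_two(10, 5)
###Output
_____no_output_____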
###Markdown
Notable Built-In Functions in Python Obtain the maximum number among the values 25, 65, 890, and 15.
###Code
###Output
_____no_output_____
###Markdown
Obtain the minimum number among the values 25, 65, 890, and 15.
###Code
###Output
_____no_output_____
###Markdown
Find the absolute value of -100
###Code
###Output
_____no_output_____
###Markdown
Round the value of 55.5. Did you obtain 56?
###Code
###Output
_____no_output_____
###Markdown
Round 35.56789 to the third digit.
###Code
###Output
_____no_output_____
###Markdown
Find the sum of all elements in the provided list, called "Numbers".
###Code
Numbers = [1, 5, 64, 24.5]
###Output
_____no_output_____
###Markdown
Use a built-in function to raise 10 to the power of 3.
###Code
###Output
_____no_output_____
###Markdown
How many characters are there in the word "Elephant"?
###Code
###Output
_____no_output_____
###Markdown
Create a function, called "distance_from_zero", that returns the absolute value of a provided single argument and prints a statement "Not Possible" if the argument provided is not a number.Call the funtion with the values of -10 and "cat" to verify it works correctly.
###Code
###Output
_____no_output_____
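###Markdown
A possible solution sketch; checking the type against int and float is one way to interpret "not a number":
###Code
def distance_from_zero(value):
    if type(value) == int or type(value) == float:
        return abs(value)
    else:
        print("Not Possible")
distance_from_zero(-10)    # expected: 10
distance_from_zero("cat")  # expected: prints "Not Possible"
###Output
_____no_output_____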
###Markdown
Sequences Lists Create a list, called "Numbers". Let it contain the numbers 10, 25, 40, and 50.
###Code
###Output
_____no_output_____
###Markdown
Print the element at index 2 from the list.
###Code
###Output
_____no_output_____
###Markdown
Print the 0th element.
###Code
###Output
_____no_output_____
###Markdown
Print the third-to-last element using a minus sign in the brackets.
###Code
###Output
_____no_output_____
###Markdown
Substitute the number 10 with the number 15.
###Code
###Output
_____no_output_____
###Markdown
Delete the number 25 from the Numbers list.
###Code
###Output
_____no_output_____
###Markdown
Help Yourself with Methods Append the number 100 to the Numbers list.
###Code
Numbers = [15, 40, 50]
###Output
_____no_output_____
###Markdown
With the help of the "extend method", add the numbers 115 an 140 to the list.
###Code
###Output
_____no_output_____
###Markdown
Print a statement, saying "The fourth element of the Numbers list is:" and then designate the value of the fourth element. Use a trailing comma.
###Code
###Output
_____no_output_____
###Markdown
How many elements are there in the Numbers list?
###Code
###Output
_____no_output_____
###Markdown
List Slicing
###Code
Numbers = [15, 40, 50, 100, 115, 140]
###Output
_____no_output_____
###Markdown
Using list slicing, obtain the numbers 100 and 115.
###Code
###Output
_____no_output_____
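###Markdown
One possible slice (indices 3 and 4; the end index of a slice is exclusive):
###Code
Numbers[3:5]
###Output
_____no_output_____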
###Markdown
Using slicing, extract the first four elements from the list.
###Code
###Output
_____no_output_____
###Markdown
Using slicing, extract all the elements from the list from the 3rd position onwards.
###Code
###Output
_____no_output_____
###Markdown
Using slicing, extract the last 4 elements from the list.
###Code
###Output
_____no_output_____
###Markdown
Which is the position of the value 15?
###Code
###Output
_____no_output_____
###Markdown
Create a list, called "Two_Numbers". Let its elements be the values 1 and 2. Then, create a new one, named "All_Numbers", that will containt both the "Numbers" and the "Two_Numbers" lists.
###Code
###Output
_____no_output_____
###Markdown
Sort all the numbers in the "Numbers" list from the largest to the smallest.
###Code
###Output
_____no_output_____
###Markdown
Tuples Create a tuple, called "Cars", with elements "BMW", "Dodge", and "Ford".
###Code
###Output
_____no_output_____
###Markdown
Access the second element of this tuple.
###Code
###Output
_____no_output_____
###Markdown
Call a method that would allow you to extract the provided name and age separately. Then print the "name" and "age" values to see if you worked correctly.
###Code
name, age = 'Peter,24'
###Output
_____no_output_____
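###Markdown
A possible solution: the split() string method returns a list that can be unpacked into the two variables:
###Code
name, age = 'Peter,24'.split(',')
print(name)
print(age)
###Output
_____no_output_____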
###Markdown
Create a function that takes as arguments the two values of a rectangle and then returns the Area and the Perimeter of the rectangle.Call the function with arguments 2 and 10 to verify it worked correctly.
###Code
###Output
_____no_output_____
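###Markdown
A possible solution returning both values as a tuple (the function name is only illustrative):
###Code
def area_and_perimeter(side_a, side_b):
    area = side_a * side_b
    perimeter = 2 * (side_a + side_b)
    return area, perimeter
area_and_perimeter(2, 10)   # expected: (20, 24)
###Output
_____no_output_____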
###Markdown
Dictionaries *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).* This is the menu of a close-by restaurant:
###Code
Menu = {'meal_1':'Spaghetti', 'meal_2':'Fries', 'meal_3':'Hamburger', 'meal_4':'Lasagna'}
###Output
_____no_output_____
###Markdown
What is the second meal in the list?
###Code
###Output
_____no_output_____
###Markdown
Add a new meal - "Soup".
###Code
###Output
_____no_output_____
###Markdown
Replace the Hamburger with a Cheeseburger.
###Code
###Output
_____no_output_____
###Markdown
Attach the Desserts list in the form of a sixth meal.
###Code
Dessert = ['Pancakes', 'Ice-cream', 'Tiramisu']
###Output
_____no_output_____
###Markdown
Create a new dictionary that contains the first five meals as keys and assign the following five values as prices (in dollars):10, 5, 8, 12, 5. Start by *Price_list = {}*.
###Code
###Output
_____no_output_____
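###Markdown
A possible solution, assuming the Menu was updated in the previous exercises (Hamburger replaced by Cheeseburger, Soup added as the fifth meal):
###Code
Price_list = {'Spaghetti': 10, 'Fries': 5, 'Cheeseburger': 8, 'Lasagna': 12, 'Soup': 5}
Price_list
###Output
_____no_output_____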
###Markdown
Use the *.get()* method to check the price of the Spaghetti.
###Code
###Output
_____no_output_____
###Markdown
Iteration For Loops Create a For loop that prints every digit on a new line.
###Code
digits = [0,1,2,3,4,5,6,7,8,9]
###Output
_____no_output_____
###Markdown
Adjust the code, so the digits are all printed on the same line.
###Code
###Output
_____no_output_____
###Markdown
While Loops and Incrementing *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).* Create a while loop that will print all odd numbers from 0 to 30 on the same row. *Hint: There are two ways in which you can create the odd values!*
###Code
###Output
_____no_output_____
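###Markdown
One possible while-loop solution, printing on a single row with the end parameter of print():
###Code
x = 1
while x <= 30:
    print(x, end=" ")
    x += 2
###Output
_____no_output_____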
###Markdown
Create Lists with the range() Function Use the range() function to create a list with all numbers from 1 to 10.
###Code
###Output
_____no_output_____
###Markdown
Use the range() function to create a list with all numbers from 0 to 19.
###Code
###Output
_____no_output_____
###Markdown
Use the range function to create a list with all even numbers from 0 to 30 included.
###Code
###Output
_____no_output_____
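###Markdown
A possible solution using the step argument of range() (31 is used so that 30 is included):
###Code
list(range(0, 31, 2))
###Output
_____no_output_____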
###Markdown
Use Conditional Statements and Loops Together Create a For loop that will print all the variables from a given list multiplied by 2. Let the list contain all numbers from 1 to 10. Create it with the help of the range() function.
###Code
###Output
_____no_output_____
###Markdown
Create a little program that runs a loop over all values from 1 to 30. Let it print all Odd numbers, and in the place of the even numbers, it should print "Even".Help yourself with the range() function to solve this exercise.
###Code
###Output
_____no_output_____
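###Markdown
One possible solution combining range(), a for loop, and a conditional:
###Code
for n in range(1, 31):
    if n % 2 != 0:
        print(n)
    else:
        print("Even")
###Output
_____no_output_____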
###Markdown
You have the following list of numbers. Iterate over this list, printing out each list value multiplied by 10.Find two solutions of this problem.
###Code
n = [1,2,3,4,5,6]
###Output
_____no_output_____
###Markdown
All in - Conditional Statements, Functions, and Loops You are provided with the 'nums' list. Complete the code in the cell that follows. Use a while loop to count the number of values lower than 20. *Hint: This exercise is similar to what we did in the video lecture. You might prefer using the x[item] structure for indicating the value of an element from the list.*
###Code
nums = [1,12,24,31,51,70,100]
###Output
_____no_output_____
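###Markdown
A possible while-loop solution using the x[item] indexing structure mentioned in the hint:
###Code
count = 0
item = 0
while item < len(nums):
    if nums[item] < 20:
        count += 1
    item += 1
print(count)   # expected: 2 for the given list
###Output
_____no_output_____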
###Markdown
Iterating over Dictionaries In this exercise you will use the same dictionaries as the ones we used in the lesson - "prices" and "quantity". This time, don't just calculate all the money Jan spent. Calculate how much she spent on products with a price of 5 dollars or more.
###Code
prices = {
"box_of_spaghetti" : 4,
"lasagna" : 5,
"hamburger" : 2
}
quantity = {
"box_of_spaghetti" : 6,
"lasagna" : 10,
"hamburger" : 0
}
money_spent = 0
###Output
_____no_output_____
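###Markdown
A possible solution iterating over the dictionary keys and filtering by price:
###Code
for product in prices:
    if prices[product] >= 5:
        money_spent += prices[product] * quantity[product]
print(money_spent)   # only the lasagna qualifies here: 5 * 10 = 50
###Output
_____no_output_____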
###Markdown
And how much did Jan spend on products that cost less than 5 dollars?
###Code
prices = {
"box_of_spaghetti" : 4,
"lasagna" : 5,
"hamburger" : 2
}
quantity = {
"box_of_spaghetti" : 6,
"lasagna" : 10,
"hamburger" : 0
}
money_spent = 0
###Output
_____no_output_____
###Markdown
Python small revision for programmers[Official Python Language Reference](https://docs.python.org/3.7/)[Google Python Style Guide](https://google.github.io/styleguide/pyguide.html) Main Characteristics* No explicit declaration of types* Interpreted (bytecode-compiled; a JIT is available in implementations such as PyPy)* Functional and OOP oriented* Namespaces and Modules* Indentation used to separate code blocks Reserved KeywordsThese are the reserved Python keywords
###Code
help('keywords')
###Output
Here is a list of the Python keywords. Enter any keyword to get more help.
False def if raise
None del import return
True elif in try
and else is while
as except lambda with
assert finally nonlocal yield
break for not
class from or
continue global pass
###Markdown
Statements and Variables Variable namesPython variable names can be constructed using a combination of letters (lower and uppercase), digits and underscore (_), and must not begin with a digit.**Valid Names:**```Variable1 variable_Spacial v0_b1_```**Invalid Names:**```1Variable 0_variable``` Simple StatementsAre logical instructions that Python can execute Expressions
###Code
1 + 2
(1 + 2) * 5
2**8
eval("2+5")
###Output
_____no_output_____
###Markdown
Variable assignment
###Code
variable1 = 10
variable2 = variable1
print(variable1, variable2)
###Output
10 10
###Markdown
Multi Line StatementUse the backslash (\\) to continue a statement on another line
###Code
(10 + 15) \
* 4 \
/5
###Output
_____no_output_____
###Markdown
Data TypesPython has a number of native data types:* Numbers* Strings* Bytes* Booleans* Lists* Tuples* Sets* Dictionaries NumberThe Python numerical values can be classified as `int`, `float` and `complex`. Complex numbers can be represented adding a `j` or a `J` to the end of the number as the imaginary part or use the `complex` function.Examples:
###Code
i = 4
print(f"The type of the n variable is {type(i)} and it's value: {i}")
f = 4.5
print(f"The type of the f variable is {type(f)} and it's value: {f}")
a = i*f
print(f"The type of the a variable is {type(a)} and it's value: {a}")
c = 4+5j
print(f"The type of the c variable is {type(c)} and it's value: {c}")
d = complex(6,15)
print(f"The type of the d variable is {type(d)} and it's value: {d}")
e = c+d
print(f"The type of the e variable is {type(e)} and it's value: {e}")
###Output
The type of the n variable is <class 'int'> and it's value: 4
The type of the f variable is <class 'float'> and it's value: 4.5
The type of the a variable is <class 'float'> and it's value: 18.0
The type of the c variable is <class 'complex'> and it's value: (4+5j)
The type of the d variable is <class 'complex'> and it's value: (6+15j)
The type of the e variable is <class 'complex'> and it's value: (10+20j)
###Markdown
Some interesting facts:* **Integers** do not have a size limit in Python and are only limited by memory* **Floats** are 64-bit double-precision values. Useful functionsTo use some math functions one must first import the math library```import math``` Find **minimum** and **maximum** of a list of values
###Code
import math
min(1.0, 4, 10.5)
max(1, 10, 30)
###Output
_____no_output_____
###Markdown
Round values
###Code
# Round to the lower integer
math.floor(3.777)
# Round to the upper integer
math.ceil(3.777)
# Round to the nearest integer
round(3.777)
round(3.5)
round(3.499999)
###Output
_____no_output_____
###Markdown
StringA Python string can be enclosed in either single `'` or double `"` quotes.Examples:
###Code
str1 = 'I\'m a String using single quotes'
print(str1)
str2 = "I'm a String using double quotes"
print(str2)
###Output
I'm a String using single quotes
I'm a String using double quotes
###Markdown
Python can use multi line strings using the triple double quotes:
###Code
str = """I'm a multiline string,
and I can fit in more than one line."""
print(str)
###Output
I'm a multiline string,
and I can fit in more than one line.
###Markdown
To construct formatted strings a great option is to use the `f` prefixed string. They are constructed by placing an `f` before the first quotation mark. Inside the string one can insert any expression by surrounding it with braces `{}`. C's `sprintf`-like formatting can be done using the `format` method of the `str` class.[Format Cheat Sheet](https://craigmbooth.com/images/blog/pythonstring/PythonNumberFormatting.pdf)
###Code
str1 = f"The area of a circle o radius 4 is {math.pi*4**2}"
print(str1)
str2 = "The area of a circle o radius 4 is {:.2f}".format(math.pi*4**2)
print(str2)
###Output
The area of a circle of radius 4 is 50.26548245743669
The area of a circle of radius 4 is 50.27
###Markdown
Useful functionsThe `str` class has some useful manipulation functions **Lower** and **Upper** case a string
###Code
"I want to be upper cased".upper()
"I want to be lower cased".lower()
###Output
_____no_output_____
###Markdown
String operatorsA substring can be constructed using the slice `[]` and range slice `[:]` operators
###Code
# T h i s i s a s t r i n g
#------------------------------------------------------------
# 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
# -16 -15 -14 -13 -12 -11 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1
str = "This is a string"
str[0]
str[-16]
str[15]
str[0:4]
str[-13:-16]
str[-16:4]
str[:4]
###Output
_____no_output_____
###Markdown
String **concatenation** can be done using the `+` operator:
###Code
str1 = "One of the most used languages for data science is: "
str2 = "Python"
print(str1+str2)
###Output
One of the most used languages for data science is: Python
###Markdown
The **repetition** operator can be used to construct a new string repeating the given one n times:
###Code
str = "**NICE"
str*3+"**"
###Output
_____no_output_____
###Markdown
The **membership** operator can test if a string is part of another string:
###Code
str = "I want a apple pie for disert today"
"apple" in str
"pie disert" in str
"banana" not in str
###Output
_____no_output_____
###Markdown
The `for` statement can be used to iterate through the characters of the string:
###Code
str = "I am a string"
for var in str: print(var)
###Output
I
a
m
a
s
t
r
i
n
g
###Markdown
One can **escape characters** using the backslash `\` operator
###Code
str="This is a \"python\" string"
str
str="Another string to escaping"
str
###Output
_____no_output_____
###Markdown
[Other escape codes](https://www.w3schools.com/python/gloss_python_escape_characters.asp) BooleanThe Boolean type represents the two logical values `True` and `False`. These are language constants.
###Code
flower = True
petals = False
if flower == True and petals == True:
print('You have a rose!?')
else:
print('You have a Cyathium')
not flower
not 10
not 0
bool(10)
not not 10
###Output
_____no_output_____
###Markdown
ListLists are the most basic multi-value storage type in Python. They are similar to arrays in other languages. Each item is stored in a fixed position given by a numerical index. The first position has the index `0` and the next ones `1`, `2...` and so on. **Creating** a new list:
###Code
empty_list = []
empty_list
simple_list = [ 1, 2, 3.5, -20]
simple_list
###Output
_____no_output_____
###Markdown
**Acessing** a list
###Code
simple_list[0]
simple_list[3]
simple_list[-1]
simple_list[0:2] # Be careful, the last index of the range is not included!
simple_list[:2]
simple_list[2:]
###Output
_____no_output_____
###Markdown
Lists **are not restricted to a single type**:
###Code
hetereogeneous_list = [ 1, "orange", True, 4.0, [1, 2, 3]]
hetereogeneous_list
hetereogeneous_list[4]
hetereogeneous_list[4][1]
###Output
_____no_output_____
###Markdown
Discovering the **size** of a list
###Code
len(hetereogeneous_list)
###Output
_____no_output_____
###Markdown
Creating a list with **repeated** content
###Code
zeros_list = [0]*5
zeros_list
###Output
_____no_output_____
###Markdown
**Modifying** a value in a list
###Code
zeros_list[0] = 1
zeros_list
###Output
_____no_output_____
###Markdown
**Adding** elements to the list
###Code
new_list = []
#new_list[0] = 1 # does not work
new_list.append(0)
new_list.append("apple")
new_list
###Output
_____no_output_____
###Markdown
Creating lists using **list comprehension**.This is a method of building lists using an expression with iteration and/or filtering. The syntax is the following:`new_list = [expression(varname) for varname in oldList if filter(varname)]`
###Code
new_list = [evennumber for evennumber in [0, 1, 2, 3, 4, 5, 6] if evennumber % 2 == 0]
new_list
new_list = [evennumber*10 for evennumber in [0, 1, 2, 3, 4, 5, 6] if evennumber % 2 != 0]
new_list
new_list = [evennumber*100 for evennumber in [0, 1, 2, 3, 4, 5, 6]]
new_list
###Output
_____no_output_____
###Markdown
**Removing** elements from the list
###Code
sample_list = [ 1, 2, 1, "orange", 5, 100, 200, 300 ]
sample_list.remove(1) # Remove the first element with the given value
sample_list
sample_list.remove('orange') # Remove the element from the list
sample_list
sample_list.pop(-1) # Remove from the index position specified
sample_list
del sample_list[2] #Remove from the index position specified
sample_list
del sample_list[0:2]
sample_list
###Output
_____no_output_____
###Markdown
**Searching** for the index of an element
###Code
sample_list = [ 'apples', 'oranges', 'grapes', 'pineapples', 'bananas', 'oranges']
sample_list
'oranges' in sample_list
'fig' in sample_list
sample_list.index('oranges') # find the index
sample_list.index('bananas')
sample_list.index('oranges', 2) # specifying the start index
try:
    sample_list.index('fig') ## a ValueError exception is raised
except ValueError as err:
print('Value not found')
###Output
Value not found
###Markdown
**Combining** lists
###Code
list_1 = [ 1, 2, 3]
list_2 = [ 4, 5, 6]
list_3 = list_1 + list_2
list_3
list_1.extend(list_2) # extends the same list
list_1
###Output
_____no_output_____
###Markdown
Useful functions
###Code
sample_list = [ True, 10, "orange" ]
all(sample_list) # returns True if all elements of the list are coerced to True
sample_list.append(False)
sample_list
all(sample_list)
any(sample_list) # returns True if any element of the list is coerced to True
sample_list = [ 20, "apples", 100 ]
enum_list = enumerate(sample_list) # Creates an enumeration object from the list
for index,item in enum_list:
print(index, item)
sample_list = [ 5, 2, 6.0, 3.5, 4.5 ]
min(sample_list) # returns the minimal value
max(sample_list) # returns the maximal value
sorted(sample_list) # sort the contents of the list
sorted(sample_list, reverse=True)
sum(sample_list) # sum the contents of the list
###Output
_____no_output_____
###Markdown
SetSets in Python represent an unindexed collection of unique, immutable values. This is similar to mathematical sets. Sets can contain elements of heterogeneous types, but the types have to be hashable. **Creating** a set
###Code
set_1 = { 1, 2, 3.5 , "apple", (3, 4) }
set_1
print(hash(1))
print(hash(2))
print(hash(3.5))
print(hash("apple"))
print(hash((3, 4)))
set_2 = { 10, 20, 30 } # or set([10 , 20, 10])
set_2
###Output
_____no_output_____
###Markdown
**Adding** elements to a set
###Code
new_set = set() # Created an empty set
new_set.add(1) # adds a single element
new_set
new_set.update([ 1, 2, 3 ], { 4, 5, 5 }) # adds multiple elements
new_set
###Output
_____no_output_____
###Markdown
**Removing** elements from a set
###Code
sample_set = { 1, 2, 3 }
sample_set.discard(2)
sample_set
sample_set.discard(2)
sample_set
sample_set.remove(1) # The same as discard, but can raise an exception
sample_set
try:
sample_set.remove(1)
except KeyError as err:
    print('item does not exist in the set')
###Output
item does not exist in the set
###Markdown
Set operations[image from Learn by Example](https://www.learnbyexample.org/python-set/)Set **Union**
###Code
set_1 = { "apples", "bananas" }
set_2 = { "apples", "oranges "}
set_1 | set_2
###Output
_____no_output_____
###Markdown
Set **Intersection**
###Code
set_1 & set_2
###Output
_____no_output_____
###Markdown
Set **difference**
###Code
set_1 - set_2
###Output
_____no_output_____
###Markdown
Set **simmetric difference**
###Code
set_1 ^ set_2
###Output
_____no_output_____
###Markdown
TupleTuples are an immutable collection of heterogeneous elements. A tuple is very similar to a list, but its elements and length cannot change. It is represented in Python enclosed by parentheses `( )`
###Code
sample_tuple = (1, 2.0, "orange")
sample_tuple
sample_tuple[1]
###Output
_____no_output_____
###Markdown
Change the value is not allowed
###Code
try:
sample_tuple[1] = 3
except TypeError as err:
print(err)
sample_tuple = tuple([1, 2, 2, 3]) # another form of creating a tuple
sample_tuple
###Output
_____no_output_____
###Markdown
**Counting** the number of occurrences of an element
###Code
sample_tuple.count(2)
###Output
_____no_output_____
###Markdown
Calculating the **length** of a tuple
###Code
len(sample_tuple)
###Output
_____no_output_____
###Markdown
DictionaryA dictionary stores a collection of key/value pairs. Keys and values are separated by a colon `:` and the elements are separated by commas `,`. All the elements are enclosed by `{}`. The values can also be heterogeneous. **Creating** a dictionary
###Code
sample_dict = { "it": "Italy", "br": "Brazil", "us": "United States" }
sample_dict
###Output
_____no_output_____
###Markdown
**Accessing** a dictionary
###Code
sample_dict["it"]
try:
sample_dict[0]
except KeyError as err:
print("You can not access by index")
sample_dict = { 1: "first", 2: "second", 3.0: "third" }
sample_dict
sample_dict[2]
sample_dict[3.0]
products = { "product1": { "name": "TV LCD", "value": 300 }, \
"product2": { "name": "PS4", "value": 250 }}
products
products["product2"]["name"]
"product1" in products # test if a key exists
###Output
_____no_output_____
###Markdown
**Updating** a dictionary
###Code
fruit_colors = { "banana" : "yellow" }
fruit_colors
fruit_colors["grape"] = "purple" # can use the assign operator
fruit_colors
fruit_colors.update({ "orange": "orange", "strawberry": "red", "grape": "violet"}) # or use update method
fruit_colors
###Output
_____no_output_____
###Markdown
**Removing** elements from a dictionary
###Code
sample_dict = { "it": "Italy", "br": "Brazil", "us": "United States", "fr": "france" }
sample_dict
del sample_dict["it"] # using the del operator
sample_dict
sample_dict.pop("br") # the same result using the pop method
sample_dict
sample_dict.popitem() # removes and returns the last inserted item (an arbitrary item before Python 3.7)
sample_dict.clear() # clears all the dictionary
sample_dict
###Output
_____no_output_____
###Markdown
**Iterate** through a dictionary
###Code
fruit_colors = {'banana': 'yellow','grape': 'violet','orange': 'orange','strawberry': 'red'}
fruit_colors
for key in fruit_colors:
print(f"fruit color: {key} => {fruit_colors[key]}")
for key, value in fruit_colors.items():
print(f"fruit color: {key} => {value}")
###Output
fruit color: banana => yellow
fruit color: grape => violet
fruit color: orange => orange
fruit color: strawberry => red
###Markdown
**Dictionary** comprehension
###Code
cardinal = [ "first", "second", "third", "fourth", "fifth" ]
dict(enumerate(cardinal))
ordinal_to_cardinal = { (o+1):c for o, c in enumerate(cardinal) }
ordinal_to_cardinal
###Output
_____no_output_____
###Markdown
Operators Operators ListThe Python operators list is shown below. There is a corresponding function that can be used in place of each operator (a small sketch follows in the next cell).  [Image from Python documentation](https://docs.python.org/3.4/library/operator.html) Operators Precedence[Image from Tutorials Point](https://www.tutorialspoint.com/python/operators_precedence_example.htm) Control Statements If Else **Basic Syntax:**```if logic expression: code block```
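###Markdown
Before the if/else examples, here is a small sketch of the operator/function correspondence listed above, using the standard library `operator` module:
###Code
import operator
# Each operator has an equivalent function in the operator module
print(2 + 3, operator.add(2, 3))    # addition
print(2 * 3, operator.mul(2, 3))    # multiplication
print(2 ** 3, operator.pow(2, 3))   # exponentiation
print(2 < 3, operator.lt(2, 3))     # less-than comparison
###Output
_____no_output_____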
###Code
flag = True
if flag:
print("Flag was on")
if flag == True:
print("Flag was on")
if 10:
print("True")
if 0:
print("True")
if "string":
print("True")
if "":
print("True")
if 0 or "" or None:
print("At least one was True")
###Output
_____no_output_____
###Markdown
**If Else Syntax**```if logic expression: code blockelse: else code block```
###Code
flag = False
if flag:
print("Flag was on")
else:
print("Flag was off")
if 0 or "" or None:
print("At least one was True")
else:
print("None was True")
###Output
None was True
###Markdown
**If Elif Syntax**```if logic expression: code blockelif logic expression: elif code blockelif logic expression: elif code block . . .else: else code block```
###Code
if 4 > 5:
print("2 > 5")
elif 5 > 5:
print("5 > 5")
elif 6 > 5:
print("6 > 5")
else:
print("None of the options")
###Output
6 > 5
###Markdown
For loop```for varname in sequence: code block```
###Code
sample_list = [ 1, 2, 3, 4 ]
for i in sample_list:
print(i)
for i in range(1,5):
print(i)
for i in range(3, 0, -1):
if i == 1:
        break  # interrupting the loop
print(i)
###Output
3
2
###Markdown
While loop```while logical expression: code block```
###Code
counter = 3
while counter > 0:
print(counter)
    counter -= 1
###Output
3
2
1
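###Markdown
A common variant (supplementary sketch): an intentionally open-ended `while True` loop that exits with `break` once a condition is met:
###Code
counter = 3
while True:
    print(counter)
    counter -= 1
    if counter == 0:
        break # exit the loop explicitly
###Output
_____no_output_____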
###Markdown
Functions

Functions are blocks of statements that can be called repeatedly, passing some parameters and returning a result.

Defining a function

A function is defined using the `def` keyword.
###Code
def my_code():
print("Executing my_code()")
my_code()
my_code()
###Output
Executing my_code()
Executing my_code()
###Markdown
Using parameters:
###Code
def my_sum(a, b):
return a+b
my_sum(10, 20)
def mult_return(a, b):
return a+b, a-b
c, d = mult_return(10, 20)
print(c, d)
###Output
30 -10
###Markdown
Variable parameters:
###Code
def my_func(a, *b):
print('---')
print(f"a: {a}")
for i in b:
print("other parameter: ", i)
my_func(1)
my_func(1, 2)
my_func(1, 2, 3)
###Output
---
a: 1
---
a: 1
other parameter: 2
---
a: 1
other parameter: 2
other parameter: 3
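###Markdown
A closely related feature not shown above (supplementary sketch; the function name is illustrative): `**kwargs` collects extra keyword arguments into a dictionary, just as `*args` collects extra positional arguments into a tuple:
###Code
def my_func_kw(a, **kwargs):
    print(f"a: {a}")
    for name, value in kwargs.items():
        print(f"keyword parameter: {name} = {value}")
my_func_kw(1, color="red", size=10)
###Output
_____no_output_____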
###Markdown
Default values:
###Code
def concat_strings(strings, separator=' '):
ret = ''
for i, s in enumerate(strings):
if i == 0:
ret += s
else:
ret += separator + s
return ret
print(concat_strings(['one', 'two', 'three']))
print(concat_strings(['one', 'two', 'three'], '**'))
###Output
one two three
one**two**three
###Markdown
Calling functions with argument names:
###Code
def sample_fn(param1=1, param2=2, param3=3):
    print(f"param1: {param1}, param2: {param2}, param3: {param3}")
sample_fn(param2=20)
sample_fn(10, param3=30)
###Output
param1: 1, param2: 20, param3: 3
param1: 10, param2: 2, param3: 30
###Markdown
Returning a function:
###Code
def create_multiply_by(a):
def dummy(b):
return a*b
return dummy
mult_by_100 = create_multiply_by(100)
mult_by_100(10)
###Output
_____no_output_____
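###Markdown
For comparison (supplementary sketch; names here are illustrative): the standard library's `functools.partial` achieves a similar effect to the closure above by pre-binding arguments of an existing function:
###Code
from functools import partial

def multiply(a, b):
    return a * b

partial_mult_by_100 = partial(multiply, 100) # fixes a = 100
partial_mult_by_100(10) # 1000
###Output
_____no_output_____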
###Markdown
Variable scope

Variable scope in Python is hierarchical. Name resolution starts in the local scope of the function, moves outward through any enclosing functions to the global (module) level and, if the name is still not found, falls back to the built-in scope.

Local scope
###Code
def func():
myvar = 1
print("myvar: ", myvar)
func()
#print("myvar: ", myvar)
def func1():
myvar = 1
def func2():
print("myvar: ", myvar)
func2()
print("myvar: ", myvar)
func1()
#print("myvar: ", myvar)
myvar = 1
def func1():
def func2():
print("myvar: ", myvar)
func2()
print("myvar: ", myvar)
func1()
print("myvar: ", myvar)
###Output
myvar: 1
myvar: 1
myvar: 1
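###Markdown
The examples above only read an outer variable. To assign to one from inside a function, the `global` or `nonlocal` keywords are needed; a brief supplementary sketch:
###Code
counter = 0

def increment():
    global counter # rebinds the module-level name
    counter += 1

def outer():
    value = 10
    def inner():
        nonlocal value # rebinds the enclosing function's variable
        value += 1
    inner()
    return value

increment()
print(counter) # 1
print(outer()) # 11
###Output
_____no_output_____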
###Markdown
Classes

Python also supports Object-Oriented Programming (OOP). One can define classes and objects. A class is a group of functions and variables used to instantiate objects based on that class.

Defining a class and instantiating an object

Inside classes, the special `self` variable is used to access the instance object.
###Code
class Square:
def __init__(self, side_length): # This is the class constructor
self.side_length = side_length
def perimeter(self):
return 4*self.side_length
def area(self):
return self.side_length**2
sq1 = Square(1)
sq2 = Square(2)
print(f"sq1 area: {sq1.area()} - sq1 perimeter: {sq1.perimeter()}")
print(f"sq2 area: {sq2.area()} - sq2 perimeter: {sq2.perimeter()}")
###Output
sq1 area: 1 - sq1 perimeter: 4
sq2 area: 4 - sq2 perimeter: 8
###Markdown
A class can also have class attributes (shared by all instances):
###Code
class Rectangle:
    created_rectangles = 0
def __init__(self, side_a, side_b): # This is the class constructor
self.side_a = side_a
self.side_b = side_b
        Rectangle.created_rectangles += 1
def perimeter(self):
return 2*self.side_a + 2*self.side_b
def area(self):
        return self.side_a * self.side_b
rc1 = Rectangle(1, 2)
rc2 = Rectangle(2, 3)
print(f"rc1 area: {rc1.area()} - rc1 perimeter: {rc1.perimeter()}")
print(f"rc2 area: {rc2.area()} - rc2 perimeter: {rc2.perimeter()}")
print(f"Created rectangles: {Rectangle.craeted_retangles}")
###Output
rc1 area: 2 - rc1 perimeter: 6
rc2 area: 6 - rc2 perimeter: 10
Created rectangles: 2
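###Markdown
Classes can also inherit from one another (supplementary sketch, reusing the Square class defined earlier; the subclass name is illustrative): a subclass gets the parent's methods and can extend or override them:
###Code
class ColoredSquare(Square):
    def __init__(self, side_length, color):
        super().__init__(side_length) # reuse the parent constructor
        self.color = color
    def describe(self):
        return f"{self.color} square with area {self.area()}"

cs = ColoredSquare(3, "red")
print(cs.describe()) # red square with area 9
print(cs.perimeter()) # 12, inherited from Square
###Output
_____no_output_____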
###Markdown
Operator overloading
###Code
class Square:
def __init__(self, side_length): # This is the class constructor
self.side_length = side_length
def perimeter(self):
return 4*self.side_length
def area(self):
return self.side_length**2
def __mul__(self, factor):
self.side_length *= factor
return self
#return Square(self.side_length*factor)
sq1 = Square(1)
sq2 = Square(2)
print(f"sq1 area: {sq1.area()} - sq1 perimeter: {sq1.perimeter()}")
print(f"sq2 area: {sq2.area()} - sq2 perimeter: {sq2.perimeter()}")
sq3 = sq1 * 3
print(f"sq3 area: {sq3.area()} - sq3 perimeter: {sq3.perimeter()}")
print(f"sq1 area: {sq1.area()} - sq1 perimeter: {sq1.perimeter()}")
###Output
sq1 area: 1 - sq1 perimeter: 4
sq2 area: 4 - sq2 perimeter: 8
sq3 area: 9 - sq3 perimeter: 12
sq1 area: 9 - sq1 perimeter: 12
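###Markdown
Note that this `__mul__` mutates `self`, which is why `sq1` changes along with `sq3`; returning a new `Square`, as in the commented-out line, avoids that side effect. Another commonly overloaded special method is `__repr__`, which controls how an object is displayed; a brief supplementary sketch (the class name is illustrative):
###Code
class Circle:
    def __init__(self, radius):
        self.radius = radius
    def __repr__(self):
        return f"Circle(radius={self.radius})"

Circle(4) # displayed as Circle(radius=4) instead of a bare object address
###Output
_____no_output_____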
###Markdown
Lambdas

Python uses the `lambda` keyword to create anonymous functions. These functions simplify the syntax of several mapping and filtering operations.

**Mapping**

Mapping applies a function to each element of an iterable.
###Code
my_list = [1, 2, 3]
doubled_list = list(map(lambda v: v*2, my_list))
doubled_list
###Output
_____no_output_____
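###Markdown
The same transformation can be written as a list comprehension, which many style guides prefer over `map` with a `lambda` (supplementary note):
###Code
my_list = [1, 2, 3]
doubled_list = [v * 2 for v in my_list] # equivalent to list(map(lambda v: v*2, my_list))
doubled_list
###Output
_____no_output_____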
###Markdown
**Filtering**

Filtering applies a function to each element of an iterable; if the result is `True`, the element is kept, otherwise it is discarded.
###Code
fruits = [ "banana", "orange", "lemonn", "grape", "avocado"]
def exclude_from_list(l, exclusions):
return list(filter(lambda v: (v not in exclusions), l))
exclude_from_list(fruits, ["banana", "grape"])
###Output
_____no_output_____
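###Markdown
Lambdas are also handy as sort keys (a short supplementary sketch):
###Code
fruits = ["banana", "orange", "lemon", "grape", "avocado"]
sorted(fruits, key=lambda word: len(word)) # sort by word length
###Output
_____no_output_____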
07 Prepare Your Data For Machine Learning/0.2 Text Feature Processing.ipynb | ###Markdown
Extracting Features from Text Using Bag of Words, TF-IDF Transformation
###Code
from sklearn.feature_extraction.text import CountVectorizer
###Output
_____no_output_____
###Markdown
Define a corpus of 4 documents with some repeated values
###Code
corpus = ['This is the first document.',
'This is the second document.',
'Third document. Document number three',
'Number four. To repeat, number four']
###Output
_____no_output_____
###Markdown
Use CountVectorizer to convert a collection of text documents to a "bag of words"
###Code
vectorizer = CountVectorizer()
bag_of_words = vectorizer.fit_transform(corpus)
bag_of_words
###Output
_____no_output_____
###Markdown
View what the "bag" looks like
###Code
print(bag_of_words)
###Output
(0, 0) 1
(0, 1) 1
(0, 7) 1
(0, 3) 1
(0, 9) 1
(1, 6) 1
(1, 0) 1
(1, 7) 1
(1, 3) 1
(1, 9) 1
(2, 10) 1
(2, 4) 1
(2, 8) 1
(2, 0) 2
(3, 5) 1
(3, 11) 1
(3, 2) 2
(3, 4) 2
###Markdown
Get the value to which a word is mapped
###Code
vectorizer.vocabulary_.get('document')
vectorizer.vocabulary_
import pandas as pd
print(pd.__version__)
pd.DataFrame(bag_of_words.toarray(), columns=vectorizer.get_feature_names())
###Output
_____no_output_____
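###Markdown
CountVectorizer also accepts options that change what counts as a feature, for example n-grams and stop-word removal (a supplementary sketch; the parameter values and names here are only illustrative):
###Code
bigram_vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english")
bigram_bag = bigram_vectorizer.fit_transform(corpus)
bigram_vectorizer.get_feature_names()
###Output
_____no_output_____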
###Markdown
Extend bag of words with TF-IDF weights
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
bag_of_words = vectorizer.fit_transform(corpus)
print(bag_of_words)
vectorizer.vocabulary_.get('document')
pd.DataFrame(bag_of_words.toarray(), columns=vectorizer.get_feature_names())
###Output
_____no_output_____
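###Markdown
For reference (a supplementary note; the exact defaults may vary by scikit-learn version): with the default `smooth_idf=True`, the weight assigned to term $t$ in document $d$ is

$$\text{tf-idf}(t, d) = \text{tf}(t, d) \cdot \left( \ln\frac{1 + n}{1 + \text{df}(t)} + 1 \right)$$

where $n$ is the number of documents and $\text{df}(t)$ is the number of documents containing $t$; each document vector is then L2-normalized, which is why the values above differ from raw counts.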
###Markdown
View all the words and their corresponding values
###Code
vectorizer.vocabulary_
###Output
_____no_output_____
###Markdown
Hashing Vectorizer

* One issue with CountVectorizer and the TF-IDF vectorizer is that the number of features can get very large if the vocabulary is very large
* The whole vocabulary is stored in memory, and this may end up taking a lot of space
* With HashingVectorizer, one can limit the number of features to some number n
* Each word is hashed to one of the n values
* There will be collisions, where different words are hashed to the same value
* In many instances, performance does not really suffer in spite of the collisions
###Code
from sklearn.feature_extraction.text import HashingVectorizer
vectorizer = HashingVectorizer(n_features=8)
feature_vector = vectorizer.fit_transform(corpus)
print(feature_vector)
###Output
(0, 0) -0.894427191
(0, 5) 0.4472135955
(0, 6) 0.0
(1, 0) -0.57735026919
(1, 3) 0.57735026919
(1, 5) 0.57735026919
(1, 6) 0.0
(2, 0) -0.755928946018
(2, 3) 0.377964473009
(2, 5) 0.377964473009
(2, 7) 0.377964473009
(3, 0) 0.316227766017
(3, 3) 0.316227766017
(3, 5) 0.632455532034
(3, 7) 0.632455532034
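###Markdown
If TF-IDF weighting is still wanted on top of the hashed features, HashingVectorizer can be combined with TfidfTransformer in a pipeline (a supplementary sketch; the parameter choices are illustrative, and `alternate_sign=False` keeps the hashed counts non-negative so the TF-IDF weighting behaves like it does on plain term counts):
###Code
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer

hashing_tfidf = Pipeline([
    ("hash", HashingVectorizer(n_features=8, alternate_sign=False, norm=None)),
    ("tfidf", TfidfTransformer()),
])
print(hashing_tfidf.fit_transform(corpus))
###Output
_____no_output_____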
docs/cookbook/Notebooks on the fly.ipynb | ###Markdown
___Author: Mikael Koli___
###Code
import sys
sys.path.append("../..")
import jubox
jubox.__version__
###Output
_____no_output_____
###Markdown
Recipe: Notebooks on the fly

This is a short demo showing how to generate arbitrary notebooks "on the fly", entirely from within a running Python environment.
###Code
from jubox import JupyterNotebook, CodeCell, RawCell, MarkdownCell
###Output
_____no_output_____
###Markdown
Pipeline example
###Code
# Library we will use
import datetime
# Supplementary function that we
# will insert to the notebook
def transform_func(df):
    # This function is transferred from the parent notebook
    # to this notebook
    # start_time and category are globals given in the notebook
mask_time = df['date'] > start_time
mask_category = df['category'] == category
return df[mask_time & mask_category]
# Generating notebook using code
JupyterNotebook([
MarkdownCell("# This is a notebook example generated in Python"),
MarkdownCell("## Imports"),
CodeCell("import datetime \nimport pandas as pd \nimport numpy as np"),
MarkdownCell("## Params"),
CodeCell.from_variable_dict(
info="These values are constructed using Python dictionary/keyword arguments",
start_time=datetime.datetime(2020, 2, 3),
ategory="Blue"
),
MarkdownCell("# Functions"),
CodeCell.from_object(transform_func),
MarkdownCell("# Extract"),
CodeCell("""# This cell is formed from string
df = pd.DataFrame(
{
'date': pd.date_range('2020-02-01', periods=50, freq='D'),
'category': np.random.choice(['Blue', 'Red'], size=50)
}
)"""),
MarkdownCell("# Transform"),
CodeCell("""df = transform_func(df)"""),
MarkdownCell("# Display"),
CodeCell.from_file("python_source_files/info.py"),
])
###Output
_____no_output_____
###Markdown
Exploring Jubox Code using Jubox
###Code
JupyterNotebook([
MarkdownCell("# Jubox Cell Types"),
MarkdownCell("## Raw Cell"),
CodeCell.from_object(RawCell),
MarkdownCell("## Markdown Cell"),
CodeCell.from_object(MarkdownCell),
MarkdownCell("## Code Cell"),
CodeCell.from_object(CodeCell),
])
###Output
_____no_output_____