path: string (7–265 characters)
concatenated_notebook: string (46–17M characters)
0-newbooks/AutonomousDrivingCookbook/AirSimE2EDeepLearning/DataExplorationAndPreparation.ipynb
###Markdown Step 0 - Data Exploration and Preparation OverviewOur goal is to train a deep learning model that can make steering angle predictions based on an input that comprises of camera images and the vehicle's last known state. In this notebook, we will prepare the data for our end-to-end deep learning model. Along the way, we will also make some useful observations about the dataset that will aid us when it comes time to train the model. What is End-to-End Deep Learning?End-to-end deep learning is a modeling strategy that is a response to the success of deep neural networks. Unlike traditional methods, this strategy is not built on feature engineering. Instead, it leverages the power of deep neural networks, along with recent hardware advances (GPUs, FPGAs etc.) to harness the incredible potential of large amounts of data. It is closer to a human-like learning approach than traditional ML as it lets a neural network map raw input to direct outputs. A big downside to this approach is that it requires a very large amount of training data which makes it unsuitable for many common applications. Since simulators can (potentially) generate infinite amounts of data, they are a perfect data source for end-to-end deep learning algorithms. If you wish to learn more, [this video](https://www.coursera.org/learn/machine-learning-projects/lecture/k0Klk/what-is-end-to-end-deep-learning) by Andrew Ng provides a nice overview of the topic.Autonomous driving is a field that can highly benefit from the power of end-to-end deep learning. In order to achieve SAE Level 4 Autonomy, cars need to be trained on copious amounts of data (it is not uncommon for car manufacturers to collect hundreds of petabytes of data every week), something that is virtually impossible without a simulator. With photo-realistic simulators like [AirSim](https://github.com/Microsoft/AirSim), it is now possible to collect a large amount of data to train your autonomous driving models without having to use an actual car. These models can then be fine tuned using a comparably lesser amount of real-world data and used on actual cars. This technique is called Behavioral Cloning. In this tutorial, you will train a model to learn how to steer a car through a portion of the Landscape map in AirSim using only one of the front facing webcams as visual input. Our strategy will be to perform some basic data analysis to get a feel for the dataset, and then train an end-to-end deep learning model to predict the correct steering control signals (a.k.a. "steering angle") given a frame from the webcam, and the car's current state parameters (speed, steering angle, throttle etc.).Before you begin, please make sure you have the dataset for the tutorial downloaded. 
If you missed the instructions in the readme file, [you can download the dataset from here](https://aka.ms/AirSimTutorialDataset).Let us start by importing some standard libraries.**NOTE: If you see text within > in some of the comments in these notebooks, it means you are expected to make a change in the accompanying code.** ###Code %matplotlib inline import numpy as np import pandas as pd import h5py import matplotlib.pyplot as plt from PIL import Image, ImageDraw import os import Cooking import random # << Point this to the directory containing the raw data >> RAW_DATA_DIR = 'data_raw/' # << Point this to the desired output directory for the cooked (.h5) data >> COOKED_DATA_DIR = 'data_cooked/' # The folders to search for data under RAW_DATA_DIR # For example, the first folder searched will be RAW_DATA_DIR/normal_1 DATA_FOLDERS = ['normal_1', 'normal_2', 'normal_3', 'normal_4', 'normal_5', 'normal_6', 'swerve_1', 'swerve_2', 'swerve_3'] # The size of the figures in this notebook FIGURE_SIZE = (10,10) ###Output _____no_output_____ ###Markdown First, let's take a look at the raw data. There are two parts to the dataset - the images and the .tsv file. First, let us read one of the .tsv files. ###Code sample_tsv_path = os.path.join(RAW_DATA_DIR, 'normal_1/airsim_rec.txt') sample_tsv = pd.read_csv(sample_tsv_path, sep='\t') sample_tsv.head() ###Output _____no_output_____ ###Markdown This dataset contains our label, the steering angle. It also has the name of the image taken at the time the steering angle was recorded. Let's look at a sample image - 'img_0.png' inside the 'normal_1' folder (more on folder naming later). ###Code sample_image_path = os.path.join(RAW_DATA_DIR, 'normal_1/images/img_0.png') sample_image = Image.open(sample_image_path) plt.title('Sample Image') plt.imshow(sample_image) plt.show() ###Output _____no_output_____ ###Markdown One immediate observation that we can make about this image is that **only a small portion of the image is of interest**. For example, we should be able to determine how to steer the car by just focusing on the ROI of the image shown in red below ###Code sample_image_roi = sample_image.copy() fillcolor=(255,0,0) draw = ImageDraw.Draw(sample_image_roi) points = [(1,76), (1,135), (255,135), (255,76)] for i in range(0, len(points), 1): draw.line([points[i], points[(i+1)%len(points)]], fill=fillcolor, width=3) del draw plt.title('Image with sample ROI') plt.imshow(sample_image_roi) plt.show() ###Output _____no_output_____ ###Markdown **Extracting this ROI will both reduce the training time and the amount of data needed to train the model**. It will also prevent the model from getting confused by focusing on irrelevant features in the environment (e.g. clouds, birds, etc)Another observation we can make is that **the dataset exhibits a vertical flip tolerance**. That is, we get a valid data point if we flip the image around the Y axis if we also flip the sign of the steering angle. This is important as it effectively doubles the number of data points we have available. Additionally, **the trained model should be invariant to changes in lighting conditions**, so we can generate additional data points by globally scaling the brightness of the image.> **Thought Exercise 0.1:** Once you are finished with the tutorial, as an exercise, you should try working with the dataset provided without modifying it using one or more of the 3 changes described above, keeping everything else the same. Do you experience vastly different results? 
> **Thought Exercise 0.2:**We mentioned in the Readme that end-to-end deep learning eliminates the need for manual feature engineering before feeding the data to the learning algorithm. Would you consider making these pre-processing changes to the dataset as engineered features? Why or why not? Now, let's aggregate all the non-image data into a single dataframe to get some more insights. ###Code full_path_raw_folders = [os.path.join(RAW_DATA_DIR, f) for f in DATA_FOLDERS] dataframes = [] for folder in full_path_raw_folders: current_dataframe = pd.read_csv(os.path.join(folder, 'airsim_rec.txt'), sep='\t') current_dataframe['Folder'] = folder dataframes.append(current_dataframe) dataset = pd.concat(dataframes, axis=0) print('Number of data points: {0}'.format(dataset.shape[0])) dataset.head() ###Output Number of data points: 46738 ###Markdown Let us first address the naming of the dataset folders. You will notice that we have two types of folders in our dataset - 'normal', and 'swerve'. These names refer to two different driving strategies. Let's begin by attempting to get an understanding of the differences between these two styles of driving. First, we'll plot a portion of datapoints from each of the driving styles against each other. ###Code min_index = 100 max_index = 1100 steering_angles_normal_1 = dataset[dataset['Folder'].apply(lambda v: 'normal_1' in v)]['Steering'][min_index:max_index] steering_angles_swerve_1 = dataset[dataset['Folder'].apply(lambda v: 'swerve_1' in v)]['Steering'][min_index:max_index] plot_index = [i for i in range(min_index, max_index, 1)] fig = plt.figure(figsize=FIGURE_SIZE) ax1 = fig.add_subplot(111) ax1.scatter(plot_index, steering_angles_normal_1, c='b', marker='o', label='normal_1') ax1.scatter(plot_index, steering_angles_swerve_1, c='r', marker='o', label='swerve_1') plt.legend(loc='upper left'); plt.title('Steering Angles for normal_1 and swerve_1 runs') plt.xlabel('Time') plt.ylabel('Steering Angle') plt.show() ###Output _____no_output_____ ###Markdown We can observe a clear difference between the two driving strategies here. The blue points show the normal driving strategy, which as you would expect, keeps your steering angle close to zero more or less, which makes your car mostly go straight on the road. The swerving driving strategy has the car almost oscillating side to side across the road. This illustrates a very important thing to keep in mind while training end-to-end deep learning models. Since we are not doing any feature engineering, our model relies almost entirely on the dataset to provide it with all the necessary information it would need during recall. Hence, to account for any sharp turns the model might encounter, and to give it the ability to correct itself if it starts to go off the road, we need to provide it with enough such examples while training. Hence, we created these extra datasets to focus on those scenarios. Once you are done with the tutorial, you can try re-running everything using only the 'normal' dataset and watch your car fail to keep on the road for an extended period of time.> **Thought Exercise 0.3:**What other such data collection techniques do you think might be necessary for this steering angle prediction scenario? What about autonomous driving in general?Now, let us take a look at the number of datapoints in each category. 
###Code dataset['Is Swerve'] = dataset.apply(lambda r: 'swerve' in r['Folder'], axis=1) grouped = dataset.groupby(by=['Is Swerve']).size().reset_index() grouped.columns = ['Is Swerve', 'Count'] def make_autopct(values): def my_autopct(percent): total = sum(values) val = int(round(percent*total/100.0)) return '{0:.2f}% ({1:d})'.format(percent,val) return my_autopct pie_labels = ['Normal', 'Swerve'] fig, ax = plt.subplots(figsize=FIGURE_SIZE) ax.pie(grouped['Count'], labels=pie_labels, autopct = make_autopct(grouped['Count'])) plt.title('Number of data points per driving strategy') plt.show() ###Output _____no_output_____ ###Markdown So, roughly a quarter of the data points are collected with the swerving driving strategy, and the rest are collected with the normal strategy. We also see that we have almost 47,000 data points to work with. This is not nearly enough data, so our network cannot be too deep. > **Thought Exercise 0.4:**Like many things in the field of Machine Learning, the ideal blend of number of datapoints in each category here is something that is problem specific, and can only be optimized by trial and error. Can you find a split that works better than ours?Let's see what the distribution of labels looks like for the two strategies. ###Code bins = np.arange(-1, 1.05, 0.05) normal_labels = dataset[dataset['Is Swerve'] == False]['Steering'] swerve_labels = dataset[dataset['Is Swerve'] == True]['Steering'] def steering_histogram(hist_labels, title, color): plt.figure(figsize=FIGURE_SIZE) n, b, p = plt.hist(hist_labels.to_numpy(), bins, density=True, facecolor=color) plt.xlabel('Steering Angle') plt.ylabel('Normalized Frequency') plt.title(title) plt.show() steering_histogram(normal_labels, 'Normal label distribution', 'g') steering_histogram(swerve_labels, 'Swerve label distribution', 'r') ###Output _____no_output_____ ###Markdown There are a few observations we can make about the data from these plots:* **When driving the car normally, the steering angle is almost always zero**. There is a heavy imbalance and if this portion of the data is not downsampled, the model will always predict zero, and the car will not be able to turn.* When driving the car with the swerve strategy, we get examples of sharp turns that don't appear in the normal strategy dataset. **This validates our reasoning behind collecting this data as explained above.** At this point, we need to combine the raw data into compressed data files suitable for training. Here, we will use .h5 files, as this format is ideal for supporting large datasets without reading everything into memory all at once. It also works seamlessly with Keras.The code for cooking the dataset is straightforward, but long. When it terminates, the final dataset will have 4 parts:* image: a numpy array containing the image data* previous_state: a numpy array containing the last known state of the car. This is a (steering, throttle, brake, speed) tuple* label: a numpy array containing the steering angles that we wish to predict (normalized on the range -1..1)* metadata: a numpy array containing metadata about the files (which folder they came from, etc)The processing may take some time. We will also divide the datasets into train/test/validation datasets. ###Code train_eval_test_split = [0.7, 0.2, 0.1] full_path_raw_folders = [os.path.join(RAW_DATA_DIR, f) for f in DATA_FOLDERS] Cooking.cook(full_path_raw_folders, COOKED_DATA_DIR, train_eval_test_split) ###Output Reading data from data_raw/normal_1... Reading data from data_raw/normal_2...
Reading data from data_raw/normal_3... Reading data from data_raw/normal_4... Reading data from data_raw/normal_5... Reading data from data_raw/normal_6... Reading data from data_raw/swerve_1... Reading data from data_raw/swerve_2... Reading data from data_raw/swerve_3... Processing data_cooked/train.h5... Finished saving data_cooked/train.h5. Processing data_cooked/eval.h5... Finished saving data_cooked/eval.h5. Processing data_cooked/test.h5... Finished saving data_cooked/test.h5.
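Before moving on to training, it can be worth peeking inside one of the cooked files to confirm the four parts listed above actually made it to disk. The sketch below is illustrative only: it reuses `COOKED_DATA_DIR` from this notebook, but the dataset key names (`image`, `previous_state`, `label`) are assumptions about how Cooking.py lays out the .h5 files and should be checked against the real output.

```python
import os
import h5py

# peek inside the cooked training set (COOKED_DATA_DIR was defined above)
train_path = os.path.join(COOKED_DATA_DIR, 'train.h5')

with h5py.File(train_path, 'r') as h5_file:
    # list every dataset stored at the top level of the file
    for name, data in h5_file.items():
        print(name, data.shape, data.dtype)

    # read a single example without loading the whole file into memory;
    # the key names ('image', 'previous_state', 'label') are assumptions
    first_image = h5_file['image'][0]
    first_state = h5_file['previous_state'][0]
    first_label = h5_file['label'][0]

print('Image shape:', first_image.shape)
print('Previous state (steering, throttle, brake, speed):', first_state)
print('Steering label:', first_label)
```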
course_2/course_material/Part_8_Case_Study/S59_L458/Absenteeism Exercise - Logistic Regression_with_comments.ipynb
###Markdown Creating a logistic regression to predict absenteeism Import the relevant libraries ###Code # import the relevant libraries import pandas as pd import numpy as np ###Output _____no_output_____ ###Markdown Load the data ###Code # load the preprocessed CSV data data_preprocessed = pd.read_csv('Absenteeism_preprocessed.csv') # eyeball the data data_preprocessed.head() ###Output _____no_output_____ ###Markdown Create the targets ###Code # find the median of 'Absenteeism Time in Hours' data_preprocessed['Absenteeism Time in Hours'].median() # create targets for our logistic regression # they have to be categories and we must find a way to say if someone is 'being absent too much' or not # what we've decided to do is to take the median of the dataset as a cut-off line # in this way the dataset will be balanced (there will be roughly equal number of 0s and 1s for the logistic regression) # as balancing is a great problem for ML, this will work great for us # alternatively, if we had more data, we could have found other ways to deal with the issue # for instance, we could have assigned some arbitrary value as a cut-off line, instead of the median # note that what line does is to assign 1 to anyone who has been absent 4 hours or more (more than 3 hours) # that is the equivalent of taking half a day off # initial code from the lecture # targets = np.where(data_preprocessed['Absenteeism Time in Hours'] > 3, 1, 0) # parameterized code targets = np.where(data_preprocessed['Absenteeism Time in Hours'] > data_preprocessed['Absenteeism Time in Hours'].median(), 1, 0) # eyeball the targets targets # create a Series in the original data frame that will contain the targets for the regression data_preprocessed['Excessive Absenteeism'] = targets # check what happened # maybe manually see how the targets were created data_preprocessed.head() ###Output _____no_output_____ ###Markdown A comment on the targets ###Code # check if dataset is balanced (what % of targets are 1s) # targets.sum() will give us the number of 1s that there are # the shape[0] will give us the length of the targets array targets.sum() / targets.shape[0] # create a checkpoint by dropping the unnecessary variables # also drop the variables we 'eliminated' after exploring the weights data_with_targets = data_preprocessed.drop(['Absenteeism Time in Hours','Day of the Week', 'Daily Work Load Average','Distance to Work'],axis=1) # check if the line above is a checkpoint :) # if data_with_targets is data_preprocessed = True, then the two are pointing to the same object # if it is False, then the two variables are completely different and this is in fact a checkpoint data_with_targets is data_preprocessed # check what's inside data_with_targets.head() ###Output _____no_output_____ ###Markdown Select the inputs for the regression ###Code data_with_targets.shape # Selects all rows and all columns until 14 (excluding) data_with_targets.iloc[:,:14] # Selects all rows and all columns but the last one (basically the same operation) data_with_targets.iloc[:,:-1] # Create a variable that will contain the inputs (everything without the targets) unscaled_inputs = data_with_targets.iloc[:,:-1] ###Output _____no_output_____ ###Markdown Standardize the data ###Code # standardize the inputs # standardization is one of the most common preprocessing tools # since data of different magnitude (scale) can be biased towards high values, # we want all inputs to be of similar magnitude # this is a peculiarity of machine learning in general - most (but not 
all) algorithms do badly with unscaled data # a very useful module we can use is StandardScaler # it has much more capabilities than the straightforward 'preprocessing' method from sklearn.preprocessing import StandardScaler # we will create a variable that will contain the scaling information for this particular dataset # here's the full documentation: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html # define scaler as an object absenteeism_scaler = StandardScaler() # import the libraries needed to create the Custom Scaler # note that all of them are a part of the sklearn package # moreover, one of them is actually the StandardScaler module, # so you can imagine that the Custom Scaler is build on it from sklearn.base import BaseEstimator, TransformerMixin from sklearn.preprocessing import StandardScaler # create the Custom Scaler class class CustomScaler(BaseEstimator,TransformerMixin): # init or what information we need to declare a CustomScaler object # and what is calculated/declared as we do def __init__(self,columns,copy=True,with_mean=True,with_std=True): # scaler is nothing but a Standard Scaler object self.scaler = StandardScaler(copy,with_mean,with_std) # with some columns 'twist' self.columns = columns self.mean_ = None self.var_ = None # the fit method, which, again based on StandardScale def fit(self, X, y=None): self.scaler.fit(X[self.columns], y) self.mean_ = np.mean(X[self.columns]) self.var_ = np.var(X[self.columns]) return self # the transform method which does the actual scaling def transform(self, X, y=None, copy=None): # record the initial order of the columns init_col_order = X.columns # scale all features that you chose when creating the instance of the class X_scaled = pd.DataFrame(self.scaler.transform(X[self.columns]), columns=self.columns) # declare a variable containing all information that was not scaled X_not_scaled = X.loc[:,~X.columns.isin(self.columns)] # return a data frame which contains all scaled features and all 'not scaled' features # use the original order (that you recorded in the beginning) return pd.concat([X_not_scaled, X_scaled], axis=1)[init_col_order] # check what are all columns that we've got unscaled_inputs.columns.values # choose the columns to scale # we later augmented this code and put it in comments # columns_to_scale = ['Month Value','Day of the Week', 'Transportation Expense', 'Distance to Work', #'Age', 'Daily Work Load Average', 'Body Mass Index', 'Children', 'Pet'] # select the columns to omit columns_to_omit = ['Reason_1', 'Reason_2', 'Reason_3', 'Reason_4','Education'] # create the columns to scale, based on the columns to omit # use list comprehension to iterate over the list columns_to_scale = [x for x in unscaled_inputs.columns.values if x not in columns_to_omit] # declare a scaler object, specifying the columns you want to scale absenteeism_scaler = CustomScaler(columns_to_scale) # fit the data (calculate mean and standard deviation); they are automatically stored inside the object absenteeism_scaler.fit(unscaled_inputs) # standardizes the data, using the transform method # in the last line, we fitted the data - in other words # we found the internal parameters of a model that will be used to transform data. 
# transforming applies these parameters to our data # note that when you get new data, you can just call 'scaler' again and transform it in the same way as now scaled_inputs = absenteeism_scaler.transform(unscaled_inputs) # the scaled_inputs are now an ndarray, because sklearn works with ndarrays scaled_inputs # check the shape of the inputs scaled_inputs.shape ###Output _____no_output_____ ###Markdown Split the data into train & test and shuffle Import the relevant module ###Code # import train_test_split so we can split our data into train and test from sklearn.model_selection import train_test_split ###Output _____no_output_____ ###Markdown Split ###Code # check how this method works train_test_split(scaled_inputs, targets) # declare 4 variables for the split x_train, x_test, y_train, y_test = train_test_split(scaled_inputs, targets, #train_size = 0.8, test_size = 0.2, random_state = 20) # check the shape of the train inputs and targets print (x_train.shape, y_train.shape) # check the shape of the test inputs and targets print (x_test.shape, y_test.shape) ###Output _____no_output_____ ###Markdown Logistic regression with sklearn ###Code # import the LogReg model from sklearn from sklearn.linear_model import LogisticRegression # import the 'metrics' module, which includes important metrics we may want to use from sklearn import metrics ###Output _____no_output_____ ###Markdown Training the model ###Code # create a logistic regression object reg = LogisticRegression() # fit our train inputs # that is basically the whole training part of the machine learning reg.fit(x_train,y_train) # assess the train accuracy of the model reg.score(x_train,y_train) ###Output _____no_output_____ ###Markdown Manually check the accuracy ###Code # find the model outputs according to our model model_outputs = reg.predict(x_train) model_outputs # compare them with the targets y_train # ACTUALLY compare the two variables model_outputs == y_train # find out in how many instances we predicted correctly np.sum((model_outputs==y_train)) # get the total number of instances model_outputs.shape[0] # calculate the accuracy of the model np.sum((model_outputs==y_train)) / model_outputs.shape[0] ###Output _____no_output_____ ###Markdown Finding the intercept and coefficients ###Code # get the intercept (bias) of our model reg.intercept_ # get the coefficients (weights) of our model reg.coef_ # check what were the names of our columns unscaled_inputs.columns.values # save the names of the columns in an ad-hoc variable feature_name = unscaled_inputs.columns.values # use the coefficients from this table (they will be exported later and will be used in Tableau) # transpose the model coefficients (model.coef_) and throws them into a df (a vertical organization, so that they can be # multiplied by certain matrices later) summary_table = pd.DataFrame (columns=['Feature name'], data = feature_name) # add the coefficient values to the summary table summary_table['Coefficient'] = np.transpose(reg.coef_) # display the summary table summary_table # do a little Python trick to move the intercept to the top of the summary table # move all indices by 1 summary_table.index = summary_table.index + 1 # add the intercept at index 0 summary_table.loc[0] = ['Intercept', reg.intercept_[0]] # sort the df by index summary_table = summary_table.sort_index() summary_table ###Output _____no_output_____ ###Markdown Interpreting the coefficients ###Code # create a new Series called: 'Odds ratio' which will show the.. 
odds ratio of each feature summary_table['Odds_ratio'] = np.exp(summary_table.Coefficient) # display the df summary_table # sort the table according to odds ratio # note that by default, the sort_values method sorts values by 'ascending' summary_table.sort_values('Odds_ratio', ascending=False) ###Output _____no_output_____ ###Markdown Testing the model ###Code # assess the test accuracy of the model reg.score(x_test,y_test) # find the predicted probabilities of each class # the first column shows the probability of a particular observation to be 0, while the second one - to be 1 predicted_proba = reg.predict_proba(x_test) # let's check that out predicted_proba predicted_proba.shape # select ONLY the probabilities referring to 1s predicted_proba[:,1] ###Output _____no_output_____ ###Markdown Save the model ###Code # import the relevant module import pickle # pickle the model file with open('model', 'wb') as file: pickle.dump(reg, file) # pickle the scaler file with open('scaler','wb') as file: pickle.dump(absenteeism_scaler, file) ###Output _____no_output_____
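As a hedged follow-up, here is a minimal sketch of how the two pickled files saved above could be reloaded to score new observations. It assumes the new data arrives with exactly the same feature columns as `unscaled_inputs`, and the file name `Absenteeism_new_data.csv` is purely hypothetical; note also that unpickling the scaler only works if the `CustomScaler` class definition is available in the session.

```python
import pickle
import pandas as pd

# reload the model and the fitted scaler saved above
with open('model', 'rb') as model_file:
    reg = pickle.load(model_file)

with open('scaler', 'rb') as scaler_file:
    absenteeism_scaler = pickle.load(scaler_file)

# hypothetical new data with the same feature columns as unscaled_inputs
new_data = pd.read_csv('Absenteeism_new_data.csv')

# apply the already-fitted scaling, then predict classes and probabilities
new_data_scaled = absenteeism_scaler.transform(new_data)
predicted_class = reg.predict(new_data_scaled)
predicted_probability = reg.predict_proba(new_data_scaled)[:, 1]

print(predicted_class[:10])
print(predicted_probability[:10])
```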
notebooks/sbussmann_data-ultimate.ipynb
###Markdown SummaryDo I sleep less on nights when I play Ultimate frisbee? ###Code import pandas as pd import os import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns import nba_py sns.set_context('poster') import plotly.offline as py import plotly.graph_objs as go py.init_notebook_mode(connected=True) data_path = os.path.join(os.getcwd(), os.pardir, 'data', 'interim', 'sleep_data.csv') df_sleep = pd.read_csv(data_path, index_col='shifted_datetime', parse_dates=True) df_sleep.index += pd.Timedelta(hours=12) sleep_day = df_sleep.resample('1D').sum().fillna(0) data_path = os.path.join(os.getcwd(), os.pardir, 'data', 'interim', 'activity_data.csv') df_activity = pd.read_csv(data_path, index_col='datetime', parse_dates=True) df_activity.columns toplot = df_activity['minutesVeryActive'] data = [] data.append( go.Scatter( x=toplot.index, y=toplot.values, name='Minutes Very Active' ) ) layout = go.Layout( title="Daily Very Active Minutes", yaxis=dict( title='Minutes' ), ) fig = { 'data': data, 'layout': layout, } py.iplot(fig, filename='DailyVeryActiveMinutes') ###Output _____no_output_____ ###Markdown Lots of variation here. Can I relate this to Ultimate frisbee? Spring hat league started on April 14 and played once a week on Fridays until May 19. Meanwhile, I started playing summer club league on May 2. We play twice a week on Tuesdays and Thursdays with our last game on August 17. ###Code dayofweek = df_activity.index.dayofweek index_summerleague = df_activity.index >= '2017-05-02' df_activity_summer = df_activity[index_summerleague] summer_dayofweek = df_activity_summer.index.dayofweek df_activity_summer['dayofweek'] = summer_dayofweek df_activity_summer.groupby('dayofweek').mean() ###Output _____no_output_____ ###Markdown Monday = 0, Sunday = 6. Saturday, Sunday, and Wednesday stand out as days where I have fewer Very Active minutes, but there is no obvious evidence that Tuesday and Thursday are days where I am running around chasing plastic for 1-2 hours. I suspect that part of the challenge here is that I ride my bike to work every day. It's 15 - 20 minutes each way, so if that time on the bike goes in to the "Very Active" bin according to Fitbit, then it will be mixed in with ultimate frisbee minutes. I might be able to filter out bike rides by looking at the start time of each activity. However, I will need to go back to the Fitbit API to extract that information. ###Code df_activity_summer.groupby('dayofweek').std() ###Output _____no_output_____
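To tie this back to the question in the summary (do I sleep less on nights when I play Ultimate?), one rough way forward is sketched below. It flags club-league game nights (Tuesdays and Thursdays between 2017-05-02 and 2017-08-17, per the schedule described above) and compares sleep on those nights against the rest. The column name `minutesAsleep` is a guess at what `sleep_day` contains and should be checked against `sleep_day.columns` first.

```python
# flag club-league game nights: Tuesday (dayofweek == 1) and Thursday (== 3)
in_season = (sleep_day.index >= '2017-05-02') & (sleep_day.index <= '2017-08-17')
game_night = in_season & sleep_day.index.dayofweek.isin([1, 3])

# 'minutesAsleep' is an assumed column name -- verify with sleep_day.columns
comparison = (
    sleep_day.assign(game_night=game_night)
             .groupby('game_night')['minutesAsleep']
             .agg(['mean', 'median', 'count'])
)
comparison
```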
KonputaziorakoSarrera-MAT/Gardenkiak/Funtzioak.ipynb
###Markdown FunctionsA function is a set of statements that solves a *subalgorithm* (subproblem).Finding subproblems within a problem is equivalent to expressing the algorithm that will solve it in terms of subalgorithms. * Subproblems will **always** be **easier** to solve. * The same subproblem may appear in different problems. An example: determine whether a year *u* is a leap year * *A year is a leap year if it is a multiple of 400, or if it is a multiple of 4 and not a multiple of 100.* We can express the leap-year property in terms of the multiple property. * Suppose there exists a function `multiploa(a,b)` that returns `True` when the value `a` is a multiple of `b`, and `False` when it is not. Then, the following expression:```multiploa(u,400)==True | ( multiploa(u,4)==True & multiploa(u,100)==False )```or, written more properly:```multiploa(u,400) | ( multiploa(u,4) & (not multiploa(u,100)) )```returns `True` when the year `u` is a leap year, and `False` when it is not. Let us create two functions, `multiploa(a,b)` and `bisurtea(u)`. Both functions will be ***boolean*** (they return `True`/`False`): ###Code def multiploa(a,b) : """Returns True when a is a multiple of b, False otherwise.""" if a % b == 0 : return True else : return False def bisurtea(u) : """Returns True when u is a leap year, False otherwise.""" if multiploa(u,400) | ( multiploa(u,4) & (not multiploa(u,100)) ) : return True else : return False ###Output _____no_output_____ ###Markdown Let us check what happens: ###Code print(multiploa(27,3)) print(multiploa(28,3)) print(2016,bisurtea(2016)) print(2017,bisurtea(2017)) print(1900,bisurtea(1900)) print(2000,bisurtea(2000)) ###Output 2016 True 2017 False 1900 False 2000 True ###Markdown Definition of a function```pythondef function_name(a,b,c...): code_block``` * A function can have as many arguments as needed (0, 1, 2...) * the arguments are the function's *private* variables. * A function **ALWAYS** returns a value. * The `return` statement terminates the function. * The value of the `return` statement will be the function's result. * If there is no `return` statement &rarr; it returns `None`. Boolean functions * They return `True`/`False`. * Very useful for *encapsulating* complex conditions. * `multiploa(a,b)` &rarr; `bisurtea(u)` &rarr; `lehena(n)` Boolean functions IIThe following two functions are equivalent:```pythondef f1(): ... if condition : return True else : return Falsedef f2(): ... return condition``` Boolean functions IIIWell, the previous statement was not entirely accurate (in Python we can put anything in a condition...). The truly equivalent functions are these:```pythondef f1(): ... if something : return True else : return Falsedef f2(): ... return bool(something)``` Rewriting the previous boolean functions `multiploa(a,b)` and `bisurtea(u)`: ###Code def multiploa(a,b) : """Returns True when a is a multiple of b, False otherwise.""" return a % b == 0 def bisurtea(u) : """Returns True when u is a leap year, False otherwise.""" return multiploa(u,400) | ( multiploa(u,4) & (not multiploa(u,100)) ) print(2016,bisurtea(2016)) print(2017,bisurtea(2017)) print(1900,bisurtea(1900)) print(2000,bisurtea(2000)) ###Output 2016 True 2017 False 1900 False 2000 True ###Markdown DocumentationThe `"""Returns True..."""` comments that we placed at the start of each function become the function's documentation: ###Code help(multiploa) help(bisurtea) ###Output Help on function bisurtea in module __main__: bisurtea(u) Returns True when u is a leap year, False otherwise.
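As a closing illustration (not part of the original slides), here is one way the `lehena(n)` function mentioned in the boolean-functions list could be written in the same style, reusing `multiploa`:

```python
def lehena(n):
    """Returns True when n is prime, False otherwise."""
    if n < 2:
        return False
    # n is prime when it has no divisor between 2 and n-1
    for d in range(2, n):
        if multiploa(n, d):
            return False
    return True

print(7, lehena(7))
print(12, lehena(12))
```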
modern_5_tidy.ipynb
###Markdown Reshaping & Tidy Data> Structuring datasets to facilitate analysis [(Wickham 2014)](http://www.jstatsoft.org/v59/i10/paper)So, you've sat down to analyze a new dataset.What do you do first?In episode 11 of [Not So Standard Deviations](https://www.patreon.com/NSSDeviations?ty=h), Hilary and Roger discussed their typical approaches.I'm with Hilary on this one, you should make sure your data is tidy.Before you do any plots, filtering, transformations, summary statistics, regressions...Without a tidy dataset, you'll be fighting your tools to get the result you need.With a tidy dataset, it's relatively easy to do all of those.Hadley Wickham kindly summarized tidiness as a dataset where1. Each variable forms a column2. Each observation forms a row3. Each type of observational unit forms a tableAnd today we'll only concern ourselves with the first two.As quoted at the top, this really is about facilitating analysis: going as quickly as possible from question to answer. ###Code %matplotlib inline import os import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt if int(os.environ.get("MODERN_PANDAS_EPUB", 0)): import prep # noqa pd.options.display.max_rows = 10 sns.set(style='ticks', context='talk') ###Output _____no_output_____ ###Markdown NBA Data[This](http://stackoverflow.com/questions/22695680/python-pandas-timedelta-specific-rows) StackOverflow question asked about calculating the number of days of rest NBA teams have between games.The answer would have been difficult to compute with the raw data.After transforming the dataset to be tidy, we're able to quickly get the answer.We'll grab some NBA game data from basketball-reference.com using pandas' `read_html` function, which returns a list of DataFrames. ###Code import datetime testDateFrame = pd.DataFrame({'HomeTeam': ['HOU', 'CHI', 'DAL', 'HOU'], 'AwayTeam' : ['CHI', 'DAL', 'CHI', 'DAL'], 'HomeGameNum': [1, 2, 2, 2], 'AwayGameNum' : [1, 1, 3, 3], 'Date' : [datetime.date(2014,3,11), datetime.date(2014,3,12), datetime.date(2014,3,14), datetime.date(2014,3,15)]}) testDateFrame fp = 'data/nba.csv' if not os.path.exists(fp): tables = pd.read_html("http://www.basketball-reference.com/leagues/NBA_2016_games.html") games = tables[0] games.to_csv(fp) else: games = pd.read_csv(fp) games.head() ###Output _____no_output_____ ###Markdown Side note: pandas' `read_html` is pretty good. On simple websites it almost always works.It provides a couple parameters for controlling what gets selected from the webpage if the defaults fail.I'll always use it first, before moving on to [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) or [lxml](http://lxml.de/) if the page is more complicated.As you can see, we have a bit of general munging to do before tidying.Each month slips in an extra row of mostly NaNs, the column names aren't too useful, and we have some dtypes to fix up. ###Code column_names = {'Date': 'date', 'Start (ET)': 'start', 'Unamed: 2': 'box', 'Visitor/Neutral': 'away_team', 'PTS': 'away_points', 'Home/Neutral': 'home_team', 'PTS.1': 'home_points', 'Unamed: 7': 'n_ot'} games = (games.rename(columns=column_names) .dropna(thresh=4) [['date', 'away_team', 'away_points', 'home_team', 'home_points']] .assign(date=lambda x: pd.to_datetime(x['date'], format='%a, %b %d, %Y')) .set_index('date', append=True) .rename_axis(["game_id", "date"]) .sort_index()) games.head() ###Output _____no_output_____ ###Markdown A quick aside on that last block.- `dropna` has a `thresh` argument. 
If at least `thresh` items are missing, the row is dropped. We used it to remove the "Month headers" that slipped into the table.- `assign` can take a callable. This lets us refer to the DataFrame in the previous step of the chain. Otherwise we would have to assign `temp_df = games.dropna()...` And then do the `pd.to_datetime` on that.- `set_index` has an `append` keyword. We keep the original index around since it will be our unique identifier per game.- We use `.rename_axis` to set the index names (this behavior is new in pandas 0.18; before `.rename_axis` only took a mapping for changing labels). The Question:> **How many days of rest did each team get between each game?**Whether or not your dataset is tidy depends on your question. Given our question, what is an observation?In this case, an observation is a `(team, game)` pair, which we don't have yet. Rather, we have two observations per row, one for home and one for away. We'll fix that with `pd.melt`.`pd.melt` works by taking observations that are spread across columns (`away_team`, `home_team`), and melting them down into one column with multiple rows. However, we don't want to lose the metadata (like `game_id` and `date`) that is shared between the observations. By including those columns as `id_vars`, the values will be repeated as many times as needed to stay with their observations. ###Code tidy = pd.melt(games.reset_index(), id_vars=['game_id', 'date'], value_vars=['away_team', 'home_team'], value_name='team') tidy.head() ###Output _____no_output_____ ###Markdown The DataFrame `tidy` meets our rules for tidiness: each variable is in a column, and each observation (`team`, `date` pair) is on its own row.Now the translation from question ("How many days of rest between games") to operation ("date of today's game - date of previous game - 1") is direct: ###Code # For each team... get number of days between games tidy.groupby('team')['date'].diff().dt.days - 1 ###Output _____no_output_____ ###Markdown That's the essence of tidy data, the reason why it's worth considering what shape your data should be in.It's about setting yourself up for success so that the answers naturally flow from the data (just kidding, it's usually still difficult. But hopefully less so).Let's assign that back into our DataFrame ###Code tidy['rest'] = tidy.sort_values('date').groupby('team').date.diff().dt.days - 1 tidy.dropna().head() ###Output _____no_output_____ ###Markdown To show the inverse of `melt`, let's take `rest` values we just calculated and place them back in the original DataFrame with a `pivot_table`. ###Code by_game = (pd.pivot_table(tidy, values='rest', index=['game_id', 'date'], columns='variable') .rename(columns={'away_team': 'away_rest', 'home_team': 'home_rest'})) df = pd.concat([games, by_game], axis=1) df.dropna().head() ###Output _____no_output_____ ###Markdown One somewhat subtle point: an "observation" depends on the question being asked.So really, we have two tidy datasets, `tidy` for answering team-level questions, and `df` for answering game-level questions.One potentially interesting question is "what was each team's average days of rest, at home and on the road?" 
With a tidy dataset (the DataFrame `tidy`, since it's team-level), `seaborn` makes this easy (more on seaborn in a future post): ###Code sns.set(style='ticks', context='paper') g = sns.FacetGrid(tidy, col='team', col_wrap=6, hue='team', size=2) g.map(sns.barplot, 'variable', 'rest'); ###Output /Users/Rachel/anaconda3/lib/python3.6/site-packages/seaborn/axisgrid.py:230: UserWarning: The `size` paramter has been renamed to `height`; please update your code. warnings.warn(msg, UserWarning) /Users/Rachel/anaconda3/lib/python3.6/site-packages/seaborn/axisgrid.py:715: UserWarning: Using the barplot function without specifying `order` is likely to produce an incorrect plot. warnings.warn(warning) ###Markdown An example of a game-level statistic is the distribution of rest differences in games: ###Code df['home_win'] = df['home_points'] > df['away_points'] df['rest_spread'] = df['home_rest'] - df['away_rest'] df.dropna().head() delta = (by_game.home_rest - by_game.away_rest).dropna().astype(int) ax = (delta.value_counts() .reindex(np.arange(delta.min(), delta.max() + 1), fill_value=0) .sort_index() .plot(kind='bar', color='k', width=.9, rot=0, figsize=(12, 6)) ) sns.despine() ax.set(xlabel='Difference in Rest (Home - Away)', ylabel='Games'); ###Output _____no_output_____ ###Markdown Or the win percent by rest difference ###Code fig, ax = plt.subplots(figsize=(12, 6)) sns.barplot(x='rest_spread', y='home_win', data=df.query('-3 <= rest_spread <= 3'), color='#4c72b0', ax=ax) sns.despine() ###Output _____no_output_____ ###Markdown Stack / UnstackPandas has two useful methods for quickly converting from wide to long format (`stack`) and long to wide (`unstack`). ###Code rest = (tidy.groupby(['date', 'variable']) .rest.mean() .dropna()) rest.head() ###Output _____no_output_____ ###Markdown `rest` is in a "long" form since we have a single column of data, with multiple "columns" of metadata (in the MultiIndex). We use `.unstack` to move from long to wide. ###Code rest.unstack().head() ###Output _____no_output_____ ###Markdown `unstack` moves a level of a MultiIndex (innermost by default) up to the columns.`stack` is the inverse. ###Code rest.unstack().stack() ###Output _____no_output_____ ###Markdown With `.unstack` you can move between those APIs that expect there data in long-format and those APIs that work with wide-format data. For example, `DataFrame.plot()`, works with wide-form data, one line per column. ###Code with sns.color_palette() as pal: b, g = pal.as_hex()[:2] ax=(rest.unstack() .query('away_team < 7') .rolling(7) .mean() .plot(figsize=(16, 10), linewidth=30, legend=False)) ax.set(ylabel='Rest (7 day MA)') ax.annotate("Home", (rest.index[-1][0], 1.02), color=g, size=14) ax.annotate("Away", (rest.index[-1][0], 0.82), color=b, size=14) sns.despine() ###Output _____no_output_____ ###Markdown The most conenient form will depend on exactly what you're doing.When interacting with databases you'll often deal with long form data.Pandas' `DataFrame.plot` often expects wide-form data, while `seaborn` often expect long-form data. Regressions will expect wide-form data. Either way, it's good to be comfortable with `stack` and `unstack` (and MultiIndexes) to quickly move between the two. Mini Project: Home Court Advantage?We've gone to all that work tidying our dataset, let's put it to use.What's the effect (in terms of probability to win) of beingthe home team? 
Step 1: Create an outcome variableWe need to create an indicator for whether the home team won.Add it as a column called `home_win` in `games`. ###Code df['home_win'] = df.home_points > df.away_points df ###Output _____no_output_____ ###Markdown Step 2: Find the win percent for each teamIn the 10-minute literature review I did on the topic, it seems like people include a team-strength variable in their regressions.I suppose that makes sense; if stronger teams happened to play against weaker teams at home more often than away, it'd look like the home-effect is stronger than it actually is.We'll do a terrible job of controlling for team strength by calculating each team's win percent and using that as a predictor.It'd be better to use some kind of independent measure of team strength, but this will do for now.We'll use a similar `melt` operation as earlier, only now with the `home_win` variable we just created. ###Code wins = ( pd.melt(df.reset_index(), id_vars=['game_id', 'date', 'home_win'], value_name='team', var_name='is_home', value_vars=['home_team', 'away_team']) .assign(win=lambda x: x.home_win == (x.is_home == 'home_team')) .groupby(['team', 'is_home']) .win .agg(['sum', 'count', 'mean']) .rename(columns=dict(sum='n_wins', count='n_games', mean='win_pct')) ) wins.head() ###Output _____no_output_____ ###Markdown Pause for visualiztion, because why not ###Code # My viz sns.catplot(kind = 'bar', data = wins.reset_index(), x = 'is_home', y = 'win_pct', order = ['home_team', 'away_team']); # My viz sns.catplot(kind = 'point', data = wins.reset_index(), x = 'is_home', y = 'win_pct', col = 'team', col_wrap = 5, height = 2, order = ['home_team', 'away_team'], color = 'k'); g = sns.FacetGrid(wins.reset_index(), hue='team', size=7, aspect=.5, palette=['k']) g.map(sns.pointplot, 'is_home', 'win_pct').set(ylim=(0, 1)); ###Output /Users/Rachel/anaconda3/lib/python3.6/site-packages/seaborn/axisgrid.py:230: UserWarning: The `size` paramter has been renamed to `height`; please update your code. warnings.warn(msg, UserWarning) /Users/Rachel/anaconda3/lib/python3.6/site-packages/seaborn/axisgrid.py:715: UserWarning: Using the pointplot function without specifying `order` is likely to produce an incorrect plot. warnings.warn(warning) ###Markdown (It'd be great if there was a library built on top of matplotlib that auto-labeled each point decently well. Apparently this is a difficult problem to do in general). ###Code g = sns.FacetGrid(wins.reset_index(), col='team', hue='team', col_wrap=5, size=2) g.map(sns.pointplot, 'is_home', 'win_pct') ###Output _____no_output_____ ###Markdown Those two graphs show that most teams have a higher win-percent at home than away. So we can continue to investigate.Let's aggregate over home / away to get an overall win percent per team. 
###Code # My solution overall = ( wins.groupby('team') .agg({'n_wins': sum, 'n_games': sum}) .assign(win_pct = lambda x: x.n_wins/x.n_games) ) overall.head() win_percent = ( # Use sum(games) / sum(games) instead of mean # since I don't know if teams play the same # number of games at home as away wins.groupby(level='team', as_index=True) .apply(lambda x: x.n_wins.sum() / x.n_games.sum()) ) win_percent.head() # My solution sns.set(context = 'notebook') sns.catplot(data = overall.reset_index().sort_values('win_pct', ascending = False), x = 'win_pct', y = 'team', kind = 'bar', color = 'k'); win_percent.sort_values().plot.barh(figsize=(6, 12), width=.85, color='k') plt.tight_layout() sns.despine() plt.xlabel("Win Percent") ###Output _____no_output_____ ###Markdown Is there a relationship between overall team strength and their home-court advantage? ###Code rel = ( pd.merge(wins.loc[pd.IndexSlice[:, 'home_team'], 'win_pct'], overall['win_pct'], how = 'outer', left_index = True, right_index = True, suffixes = ('_home', '_overall')) .reset_index(level=1, drop = True) ) rel sns.lmplot(data = rel, x = 'win_pct_home', y = 'win_pct_overall') wins.win_pct.unstack().assign() plt.figure(figsize=(8, 5)) (wins.win_pct .unstack() .assign(**{'Home Win % - Away %': lambda x: x.home_team - x.away_team, 'Overall %': lambda x: (x.home_team + x.away_team) / 2}) .pipe((sns.regplot, 'data'), x='Overall %', y='Home Win % - Away %') ) sns.despine() plt.tight_layout() ###Output _____no_output_____ ###Markdown Let's get the team strength back into `df`.You could you `pd.merge`, but I prefer `.map` when joining a `Series`. ###Code df = df.assign(away_strength=df['away_team'].map(win_percent), home_strength=df['home_team'].map(win_percent), point_diff=df['home_points'] - df['away_points'], rest_diff=df['home_rest'] - df['away_rest']) df.head() import statsmodels.formula.api as sm df['home_win'] = df.home_win.astype(int) # for statsmodels mod = sm.logit('home_win ~ home_strength + away_strength + home_rest + away_rest', df) res = mod.fit() res.summary() ###Output _____no_output_____ ###Markdown The strength variables both have large coefficeints (really we should be using some independent measure of team strength here, `win_percent` is showing up on the left and right side of the equation). The rest variables don't seem to matter as much.With `.assign` we can quickly explore variations in formula. ###Code (sm.Logit.from_formula('home_win ~ strength_diff + rest_spread', df.assign(strength_diff=df.home_strength - df.away_strength)) .fit().summary()) mod = sm.Logit.from_formula('home_win ~ home_rest + away_rest', df) res = mod.fit() res.summary() ###Output _____no_output_____ ###Markdown Reshaping & Tidy Data> Structuring datasets to facilitate analysis [(Wickham 2014)](http://www.jstatsoft.org/v59/i10/paper)So, you've sat down to analyze a new dataset.What do you do first?In episode 11 of [Not So Standard Deviations](https://www.patreon.com/NSSDeviations?ty=h), Hilary and Roger discussed their typical approaches.I'm with Hilary on this one, you should make sure your data is tidy.Before you do any plots, filtering, transformations, summary statistics, regressions...Without a tidy dataset, you'll be fighting your tools to get the result you need.With a tidy dataset, it's relatively easy to do all of those.Hadley Wickham kindly summarized tidiness as a dataset where1. Each variable forms a column2. Each observation forms a row3. 
Each type of observational unit forms a tableAnd today we'll only concern ourselves with the first two.As quoted at the top, this really is about facilitating analysis: going as quickly as possible from question to answer. ###Code %matplotlib inline import os import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import lxml as lxml if int(os.environ.get("MODERN_PANDAS_EPUB", 0)): import prep # noqa pd.options.display.max_rows = 10 sns.set(style='ticks', context='talk') ###Output _____no_output_____ ###Markdown NBA Data[This](http://stackoverflow.com/questions/22695680/python-pandas-timedelta-specific-rows) StackOverflow question asked about calculating the number of days of rest NBA teams have between games.The answer would have been difficult to compute with the raw data.After transforming the dataset to be tidy, we're able to quickly get the answer.We'll grab some NBA game data from basketball-reference.com using pandas' `read_html` function, which returns a list of DataFrames. ###Code fp = 'data/nba.csv' if not os.path.exists(fp): tables = pd.read_html("http://www.basketball-reference.com/leagues/NBA_2016_games.html") games = tables[0] games.to_csv(fp) else: games = pd.read_csv(fp) games.head() ###Output _____no_output_____ ###Markdown Side note: pandas' `read_html` is pretty good. On simple websites it almost always works.It provides a couple parameters for controlling what gets selected from the webpage if the defaults fail.I'll always use it first, before moving on to [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) or [lxml](http://lxml.de/) if the page is more complicated.As you can see, we have a bit of general munging to do before tidying.Each month slips in an extra row of mostly NaNs, the column names aren't too useful, and we have some dtypes to fix up. ###Code column_names = {'Date': 'date', 'Start (ET)': 'start', 'Unamed: 2': 'box', 'Visitor/Neutral': 'away_team', 'PTS': 'away_points', 'Home/Neutral': 'home_team', 'PTS.1': 'home_points', 'Unamed: 7': 'n_ot'} games = (games.rename(columns=column_names) .dropna(thresh=4) [['date', 'away_team', 'away_points', 'home_team', 'home_points']] .assign(date=lambda x: pd.to_datetime(x['date'], format='%a, %b %d, %Y')) .set_index('date', append=True) # keep original index, which represent a game .rename_axis(["game_id", "date"]) .sort_index()) games.head() ###Output _____no_output_____ ###Markdown A quick aside on that last block.- `dropna` has a `thresh` argument. If at least `thresh` items are missing, the row is dropped. We used it to remove the "Month headers" that slipped into the table.- `assign` can take a callable. This lets us refer to the DataFrame in the previous step of the chain. Otherwise we would have to assign `temp_df = games.dropna()...` And then do the `pd.to_datetime` on that.- `set_index` has an `append` keyword. We keep the original index around since it will be our unique identifier per game.- We use `.rename_axis` to set the index names (this behavior is new in pandas 0.18; before `.rename_axis` only took a mapping for changing labels). The Question:> **How many days of rest did each team get between each game?**Whether or not your dataset is tidy depends on your question. Given our question, what is an observation?In this case, an observation is a `(team, game)` pair, which we don't have yet. Rather, we have two observations per row, one for home and one for away. 
We'll fix that with `pd.melt`.`pd.melt` works by taking observations that are spread across columns (`away_team`, `home_team`), and melting them down into one column with multiple rows. However, we don't want to lose the metadata (like `game_id` and `date`) that is shared between the observations. By including those columns as `id_vars`, the values will be repeated as many times as needed to stay with their observations. ###Code games # my take pd.melt(games.reset_index(), id_vars=['game_id', 'date'], value_vars=['away_team', 'home_team'], var_name='team') tidy = pd.melt(games.reset_index(), id_vars=['game_id', 'date'], value_vars=['away_team', 'home_team'], value_name='team') tidy.head() ###Output _____no_output_____ ###Markdown The DataFrame `tidy` meets our rules for tidiness: each variable is in a column, and each observation (`team`, `date` pair) is on its own row.Now the translation from question ("How many days of rest between games") to operation ("date of today's game - date of previous game - 1") is direct: ###Code # For each team... get number of days between games tidy.groupby('team')['date'].diff().dt.days - 1 ###Output _____no_output_____ ###Markdown That's the essence of tidy data, the reason why it's worth considering what shape your data should be in.It's about setting yourself up for success so that the answers naturally flow from the data (just kidding, it's usually still difficult. But hopefully less so).Let's assign that back into our DataFrame ###Code ?tidy.date.diff tidy['rest'] = tidy.sort_values('date').groupby('team').date.diff().dt.days - 1 tidy.dropna().head() ###Output _____no_output_____ ###Markdown To show the inverse of `melt`, let's take `rest` values we just calculated and place them back in the original DataFrame with a `pivot_table`. ###Code by_game = (pd.pivot_table(tidy, values='rest', index=['game_id', 'date'], columns='variable') .rename(columns={'away_team': 'away_rest', 'home_team': 'home_rest'})) by_game df = pd.concat([games, by_game], axis=1) df.dropna().head() ###Output _____no_output_____ ###Markdown One somewhat subtle point: an "observation" depends on the question being asked.So really, we have two tidy datasets, `tidy` for answering team-level questions, and `df` for answering game-level questions.One potentially interesting question is "what was each team's average days of rest, at home and on the road?" 
With a tidy dataset (the DataFrame `tidy`, since it's team-level), `seaborn` makes this easy (more on seaborn in a future post): ###Code sns.set(style='ticks', context='paper') g = sns.FacetGrid(tidy, col='team', col_wrap=6, hue='team', size=2) g.map(sns.barplot, 'variable', 'rest'); ###Output _____no_output_____ ###Markdown An example of a game-level statistic is the distribution of rest differences in games: ###Code df['home_win'] = df['home_points'] > df['away_points'] df['rest_spread'] = df['home_rest'] - df['away_rest'] df.dropna().head() delta.value_counts() delta.value_counts().reindex(np.arange(delta.min(), delta.max() + 1), fill_value=0) delta.value_counts().sort_index() delta = (by_game.home_rest - by_game.away_rest).dropna().astype(int) ax = (delta.value_counts() # sort by descending count value .reindex(np.arange(delta.min(), delta.max() + 1), fill_value=0) .sort_index() .plot(kind='bar', color='k', width=.9, rot=0, figsize=(12, 6)) ) sns.despine() ax.set(xlabel='Difference in Rest (Home - Away)', ylabel='Games'); ###Output _____no_output_____ ###Markdown Or the win percent by rest difference ###Code fig, ax = plt.subplots(figsize=(12, 6)) sns.barplot(x='rest_spread', y='home_win', data=df.query('-3 <= rest_spread <= 3'), color='#4c72b0', ax=ax) sns.despine() ###Output _____no_output_____ ###Markdown Stack / UnstackFor MultiIndex (low cardinality), Pandas has two useful methods for quickly converting from wide to long format (`stack`) and long to wide (`unstack`). ###Code rest = (tidy.groupby(['date', 'variable']) .rest.mean() .dropna()) rest.head() ###Output _____no_output_____ ###Markdown `rest` is in a "long" form since we have a single column of data, with multiple "columns" of metadata (in the MultiIndex). We use `.unstack` to move from long to wide. ###Code rest.unstack().head() ###Output _____no_output_____ ###Markdown `unstack` moves a level of a MultiIndex (innermost by default) up to the columns.`stack` is the inverse. ###Code rest.unstack().stack() ###Output _____no_output_____ ###Markdown With `.unstack` you can move between those APIs that expect there data in long-format and those APIs that work with wide-format data. For example, `DataFrame.plot()`, works with wide-form data, one line per column. ###Code with sns.color_palette() as pal: b, g = pal.as_hex()[:2] ax=(rest.unstack() .query('away_team < 7') .rolling(7) .mean() .plot(figsize=(12, 6), linewidth=3, legend=False)) ax.set(ylabel='Rest (7 day MA)') ax.annotate("Home", (rest.index[-1][0], 1.02), color=g, size=14) ax.annotate("Away", (rest.index[-1][0], 0.82), color=b, size=14) sns.despine() ###Output _____no_output_____ ###Markdown The most conenient form will depend on exactly what you're doing.When interacting with databases you'll often deal with long form data.Pandas' `DataFrame.plot` often expects wide-form data, while `seaborn` often expect long-form data. Regressions will expect wide-form data. Either way, it's good to be comfortable with `stack` and `unstack` (and MultiIndexes) to quickly move between the two. Mini Project: Home Court Advantage?We've gone to all that work tidying our dataset, let's put it to use.What's the effect (in terms of probability to win) of beingthe home team? Step 1: Create an outcome variableWe need to create an indicator for whether the home team won.Add it as a column called `home_win` in `games`. 
###Code df['home_win'] = df.home_points > df.away_points ###Output _____no_output_____ ###Markdown Step 2: Find the win percent for each teamIn the 10-minute literature review I did on the topic, it seems like people include a team-strength variable in their regressions.I suppose that makes sense; if stronger teams happened to play against weaker teams at home more often than away, it'd look like the home-effect is stronger than it actually is.We'll do a terrible job of controlling for team strength by calculating each team's win percent and using that as a predictor.It'd be better to use some kind of independent measure of team strength, but this will do for now.We'll use a similar `melt` operation as earlier, only now with the `home_win` variable we just created. ###Code df.head() wins = (pd.melt(df.reset_index(), id_vars=["game_id", "date", "home_win"], value_vars=['away_team', 'home_team'], value_name='team', var_name='is_home') .assign(win = lambda x: x.home_win == (x.is_home=='home_team')) .groupby(["team", "is_home"]) .agg( n_win = ("win", "sum"), n_games = ("win", "count"), win_pct = ("win", "mean") ) ) wins wins = ( pd.melt(df.reset_index(), id_vars=['game_id', 'date', 'home_win'], value_name='team', var_name='is_home', value_vars=['home_team', 'away_team']) .assign(win=lambda x: x.home_win == (x.is_home == 'home_team')) .groupby(['team', 'is_home']) .win .agg(['sum', 'count', 'mean']) .rename(columns=dict(sum='n_wins', count='n_games', mean='win_pct')) ) wins.head() ###Output _____no_output_____ ###Markdown Pause for visualiztion, because why not ###Code g = sns.FacetGrid(wins.reset_index(), hue='team', height=7, aspect=.5, palette=['k']) g.map(sns.pointplot, 'is_home', 'win_pct', order=['away_team', 'home_team']).set(ylim=(0, 1)); ###Output _____no_output_____ ###Markdown (It'd be great if there was a library built on top of matplotlib that auto-labeled each point decently well. Apparently this is a difficult problem to do in general). ###Code g = sns.FacetGrid(wins.reset_index(), col='team', hue='team', col_wrap=5, size=2) g.map(sns.pointplot, 'is_home', 'win_pct') ###Output /Applications/anaconda3/envs/jupyterlab/lib/python3.7/site-packages/seaborn/axisgrid.py:230: UserWarning: The `size` paramter has been renamed to `height`; please update your code. warnings.warn(msg, UserWarning) ###Markdown Those two graphs show that most teams have a higher win-percent at home than away. So we can continue to investigate.Let's aggregate over home / away to get an overall win percent per team. ###Code win_percent = ( # Use sum(games) / sum(games) instead of mean # since I don't know if teams play the same # number of games at home as away wins.groupby(level='team', as_index=True) .apply(lambda x: x.n_wins.sum() / x.n_games.sum()) ) win_percent.head() win_percent.sort_values().plot.barh(figsize=(6, 12), width=.85, color='k') plt.tight_layout() sns.despine() plt.xlabel("Win Percent") ###Output _____no_output_____ ###Markdown Is there a relationship between overall team strength and their home-court advantage? ###Code wins.win_pct.unstack() plt.figure(figsize=(8, 5)) (wins.win_pct .unstack() .assign(**{'Home Win % - Away %': lambda x: x.home_team - x.away_team, 'Overall %': lambda x: (x.home_team + x.away_team) / 2}) .pipe((sns.regplot, 'data'), x='Overall %', y='Home Win % - Away %') ) sns.despine() plt.tight_layout() ###Output _____no_output_____ ###Markdown Let's get the team strength back into `df`.You could you `pd.merge`, but I prefer `.map` when joining a `Series`. 
###Code win_percent df['away_team'] # if index matches existing value, replace existing value with new value df['away_team'].map(win_percent) tmp1=pd.merge(df, win_percent.rename("away_strength", axis=1).reset_index(), left_on='away_team', right_on='team') tmp2=pd.merge(tmp1, win_percent.rename("home_strength", axis=1).reset_index(), left_on='home_team', right_on='team') df = df.assign(away_strength=df['away_team'].map(win_percent), home_strength=df['home_team'].map(win_percent), point_diff=df['home_points'] - df['away_points'], rest_diff=df['home_rest'] - df['away_rest']) df.head() import statsmodels.formula.api as sm df['home_win'] = df.home_win.astype(int) # for statsmodels mod = sm.logit('home_win ~ home_strength + away_strength + home_rest + away_rest', df) res = mod.fit() res.summary() ###Output Optimization terminated successfully. Current function value: 0.552792 Iterations 6 ###Markdown The strength variables both have large coefficeints (really we should be using some independent measure of team strength here, `win_percent` is showing up on the left and right side of the equation). The rest variables don't seem to matter as much.With `.assign` we can quickly explore variations in formula. ###Code (sm.Logit.from_formula('home_win ~ strength_diff + rest_spread', df.assign(strength_diff=df.home_strength - df.away_strength)) .fit().summary()) mod = sm.Logit.from_formula('home_win ~ home_rest + away_rest', df) res = mod.fit() res.summary() ###Output Optimization terminated successfully. Current function value: 0.676549 Iterations 4 ###Markdown Reshaping & Tidy Data> Structuring datasets to facilitate analysis [(Wickham 2014)](http://www.jstatsoft.org/v59/i10/paper)So, you've sat down to analyze a new dataset.What do you do first?In episode 11 of [Not So Standard Deviations](https://www.patreon.com/NSSDeviations?ty=h), Hilary and Roger discussed their typical approaches.I'm with Hilary on this one, you should make sure your data is tidy.Before you do any plots, filtering, transformations, summary statistics, regressions...Without a tidy dataset, you'll be fighting your tools to get the result you need.With a tidy dataset, it's relatively easy to do all of those.Hadley Wickham kindly summarized tidiness as a dataset where1. Each variable forms a column2. Each observation forms a row3. Each type of observational unit forms a tableAnd today we'll only concern ourselves with the first two.As quoted at the top, this really is about facilitating analysis: going as quickly as possible from question to answer. ###Code %matplotlib inline import os import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt if int(os.environ.get("MODERN_PANDAS_EPUB", 0)): import prep # noqa pd.options.display.max_rows = 10 sns.set(style='ticks', context='talk') ###Output _____no_output_____ ###Markdown NBA Data[This](http://stackoverflow.com/questions/22695680/python-pandas-timedelta-specific-rows) StackOverflow question asked about calculating the number of days of rest NBA teams have between games.The answer would have been difficult to compute with the raw data.After transforming the dataset to be tidy, we're able to quickly get the answer.We'll grab some NBA game data from basketball-reference.com using pandas' `read_html` function, which returns a list of DataFrames. 
###Code fp = 'data/nba.csv' if not os.path.exists(fp): tables = pd.read_html("http://www.basketball-reference.com/leagues/NBA_2016_games.html") games = tables[0] games.to_csv(fp) else: games = pd.read_csv(fp) games.head() ###Output _____no_output_____ ###Markdown Side note: pandas' `read_html` is pretty good. On simple websites it almost always works.It provides a couple parameters for controlling what gets selected from the webpage if the defaults fail.I'll always use it first, before moving on to [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) or [lxml](http://lxml.de/) if the page is more complicated.As you can see, we have a bit of general munging to do before tidying.Each month slips in an extra row of mostly NaNs, the column names aren't too useful, and we have some dtypes to fix up. ###Code column_names = {'Date': 'date', 'Start (ET)': 'start', 'Unamed: 2': 'box', 'Visitor/Neutral': 'away_team', 'PTS': 'away_points', 'Home/Neutral': 'home_team', 'PTS.1': 'home_points', 'Unamed: 7': 'n_ot'} games = (games.rename(columns=column_names) .dropna(thresh=4) [['date', 'away_team', 'away_points', 'home_team', 'home_points']] .assign(date=lambda x: pd.to_datetime(x['date'], format='%a, %b %d, %Y')) .set_index('date', append=True) .rename_axis(["game_id", "date"]) .sort_index()) games.head() ###Output _____no_output_____ ###Markdown A quick aside on that last block.- `dropna` has a `thresh` argument. If at least `thresh` items are missing, the row is dropped. We used it to remove the "Month headers" that slipped into the table.- `assign` can take a callable. This lets us refer to the DataFrame in the previous step of the chain. Otherwise we would have to assign `temp_df = games.dropna()...` And then do the `pd.to_datetime` on that.- `set_index` has an `append` keyword. We keep the original index around since it will be our unique identifier per game.- We use `.rename_axis` to set the index names (this behavior is new in pandas 0.18; before `.rename_axis` only took a mapping for changing labels). The Question:> **How many days of rest did each team get between each game?**Whether or not your dataset is tidy depends on your question. Given our question, what is an observation?In this case, an observation is a `(team, game)` pair, which we don't have yet. Rather, we have two observations per row, one for home and one for away. We'll fix that with `pd.melt`.`pd.melt` works by taking observations that are spread across columns (`away_team`, `home_team`), and melting them down into one column with multiple rows. However, we don't want to lose the metadata (like `game_id` and `date`) that is shared between the observations. By including those columns as `id_vars`, the values will be repeated as many times as needed to stay with their observations. ###Code tidy = pd.melt(games.reset_index(), id_vars=['game_id', 'date'], value_vars=['away_team', 'home_team'], value_name='team') tidy.head() ###Output _____no_output_____ ###Markdown The DataFrame `tidy` meets our rules for tidiness: each variable is in a column, and each observation (`team`, `date` pair) is on its own row.Now the translation from question ("How many days of rest between games") to operation ("date of today's game - date of previous game - 1") is direct: ###Code # For each team... 
get number of days between games tidy.groupby('team')['date'].diff().dt.days - 1 ###Output _____no_output_____ ###Markdown That's the essence of tidy data, the reason why it's worth considering what shape your data should be in.It's about setting yourself up for success so that the answers naturally flow from the data (just kidding, it's usually still difficult. But hopefully less so).Let's assign that back into our DataFrame ###Code tidy['rest'] = tidy.sort_values('date').groupby('team').date.diff().dt.days - 1 tidy.dropna().head() ###Output _____no_output_____ ###Markdown To show the inverse of `melt`, let's take `rest` values we just calculated and place them back in the original DataFrame with a `pivot_table`. ###Code by_game = (pd.pivot_table(tidy, values='rest', index=['game_id', 'date'], columns='variable') .rename(columns={'away_team': 'away_rest', 'home_team': 'home_rest'})) df = pd.concat([games, by_game], axis=1) df.dropna().head() ###Output _____no_output_____ ###Markdown One somewhat subtle point: an "observation" depends on the question being asked.So really, we have two tidy datasets, `tidy` for answering team-level questions, and `df` for answering game-level questions.One potentially interesting question is "what was each team's average days of rest, at home and on the road?" With a tidy dataset (the DataFrame `tidy`, since it's team-level), `seaborn` makes this easy (more on seaborn in a future post): ###Code sns.set(style='ticks', context='paper') g = sns.FacetGrid(tidy, col='team', col_wrap=6, hue='team', size=2) g.map(sns.barplot, 'variable', 'rest'); ###Output _____no_output_____ ###Markdown An example of a game-level statistic is the distribution of rest differences in games: ###Code df['home_win'] = df['home_points'] > df['away_points'] df['rest_spread'] = df['home_rest'] - df['away_rest'] df.dropna().head() delta = (by_game.home_rest - by_game.away_rest).dropna().astype(int) ax = (delta.value_counts() .reindex(np.arange(delta.min(), delta.max() + 1), fill_value=0) .sort_index() .plot(kind='bar', color='k', width=.9, rot=0, figsize=(12, 6)) ) sns.despine() ax.set(xlabel='Difference in Rest (Home - Away)', ylabel='Games'); ###Output _____no_output_____ ###Markdown Or the win percent by rest difference ###Code fig, ax = plt.subplots(figsize=(12, 6)) sns.barplot(x='rest_spread', y='home_win', data=df.query('-3 <= rest_spread <= 3'), color='#4c72b0', ax=ax) sns.despine() ###Output _____no_output_____ ###Markdown Stack / UnstackPandas has two useful methods for quickly converting from wide to long format (`stack`) and long to wide (`unstack`). ###Code rest = (tidy.groupby(['date', 'variable']) .rest.mean() .dropna()) rest.head() ###Output _____no_output_____ ###Markdown `rest` is in a "long" form since we have a single column of data, with multiple "columns" of metadata (in the MultiIndex). We use `.unstack` to move from long to wide. ###Code rest.unstack().head() ###Output _____no_output_____ ###Markdown `unstack` moves a level of a MultiIndex (innermost by default) up to the columns.`stack` is the inverse. ###Code rest.unstack().stack() ###Output _____no_output_____ ###Markdown With `.unstack` you can move between those APIs that expect there data in long-format and those APIs that work with wide-format data. For example, `DataFrame.plot()`, works with wide-form data, one line per column. 
###Code with sns.color_palette() as pal: b, g = pal.as_hex()[:2] ax=(rest.unstack() .query('away_team < 7') .rolling(7) .mean() .plot(figsize=(12, 6), linewidth=3, legend=False)) ax.set(ylabel='Rest (7 day MA)') ax.annotate("Home", (rest.index[-1][0], 1.02), color=g, size=14) ax.annotate("Away", (rest.index[-1][0], 0.82), color=b, size=14) sns.despine() ###Output _____no_output_____ ###Markdown The most conenient form will depend on exactly what you're doing.When interacting with databases you'll often deal with long form data.Pandas' `DataFrame.plot` often expects wide-form data, while `seaborn` often expect long-form data. Regressions will expect wide-form data. Either way, it's good to be comfortable with `stack` and `unstack` (and MultiIndexes) to quickly move between the two. Mini Project: Home Court Advantage?We've gone to all that work tidying our dataset, let's put it to use.What's the effect (in terms of probability to win) of beingthe home team? Step 1: Create an outcome variableWe need to create an indicator for whether the home team won.Add it as a column called `home_win` in `games`. ###Code df['home_win'] = df.home_points > df.away_points ###Output _____no_output_____ ###Markdown Step 2: Find the win percent for each teamIn the 10-minute literature review I did on the topic, it seems like people include a team-strength variable in their regressions.I suppose that makes sense; if stronger teams happened to play against weaker teams at home more often than away, it'd look like the home-effect is stronger than it actually is.We'll do a terrible job of controlling for team strength by calculating each team's win percent and using that as a predictor.It'd be better to use some kind of independent measure of team strength, but this will do for now.We'll use a similar `melt` operation as earlier, only now with the `home_win` variable we just created. ###Code wins = ( pd.melt(df.reset_index(), id_vars=['game_id', 'date', 'home_win'], value_name='team', var_name='is_home', value_vars=['home_team', 'away_team']) .assign(win=lambda x: x.home_win == (x.is_home == 'home_team')) .groupby(['team', 'is_home']) .win .agg(['sum', 'count', 'mean']) .rename(columns=dict(sum='n_wins', count='n_games', mean='win_pct')) ) wins.head() ###Output _____no_output_____ ###Markdown Pause for visualiztion, because why not ###Code g = sns.FacetGrid(wins.reset_index(), hue='team', size=7, aspect=.5, palette=['k']) g.map(sns.pointplot, 'is_home', 'win_pct').set(ylim=(0, 1)); ###Output _____no_output_____ ###Markdown (It'd be great if there was a library built on top of matplotlib that auto-labeled each point decently well. Apparently this is a difficult problem to do in general). ###Code g = sns.FacetGrid(wins.reset_index(), col='team', hue='team', col_wrap=5, size=2) g.map(sns.pointplot, 'is_home', 'win_pct') ###Output _____no_output_____ ###Markdown Those two graphs show that most teams have a higher win-percent at home than away. So we can continue to investigate.Let's aggregate over home / away to get an overall win percent per team. 
###Code win_percent = ( # Use sum(games) / sum(games) instead of mean # since I don't know if teams play the same # number of games at home as away wins.groupby(level='team', as_index=True) .apply(lambda x: x.n_wins.sum() / x.n_games.sum()) ) win_percent.head() win_percent.sort_values().plot.barh(figsize=(6, 12), width=.85, color='k') plt.tight_layout() sns.despine() plt.xlabel("Win Percent") ###Output _____no_output_____ ###Markdown Is there a relationship between overall team strength and their home-court advantage? ###Code plt.figure(figsize=(8, 5)) (wins.win_pct .unstack() .assign(**{'Home Win % - Away %': lambda x: x.home_team - x.away_team, 'Overall %': lambda x: (x.home_team + x.away_team) / 2}) .pipe((sns.regplot, 'data'), x='Overall %', y='Home Win % - Away %') ) sns.despine() plt.tight_layout() ###Output _____no_output_____ ###Markdown Let's get the team strength back into `df`.You could you `pd.merge`, but I prefer `.map` when joining a `Series`. ###Code df = df.assign(away_strength=df['away_team'].map(win_percent), home_strength=df['home_team'].map(win_percent), point_diff=df['home_points'] - df['away_points'], rest_diff=df['home_rest'] - df['away_rest']) df.head() import statsmodels.formula.api as sm df['home_win'] = df.home_win.astype(int) # for statsmodels mod = sm.logit('home_win ~ home_strength + away_strength + home_rest + away_rest', df) res = mod.fit() res.summary() ###Output Optimization terminated successfully. Current function value: 0.552792 Iterations 6 ###Markdown The strength variables both have large coefficeints (really we should be using some independent measure of team strength here, `win_percent` is showing up on the left and right side of the equation). The rest variables don't seem to matter as much.With `.assign` we can quickly explore variations in formula. ###Code (sm.Logit.from_formula('home_win ~ strength_diff + rest_spread', df.assign(strength_diff=df.home_strength - df.away_strength)) .fit().summary()) mod = sm.Logit.from_formula('home_win ~ home_rest + away_rest', df) res = mod.fit() res.summary() ###Output Optimization terminated successfully. Current function value: 0.676549 Iterations 4 ###Markdown Reshaping & Tidy Data> Structuring datasets to facilitate analysis [(Wickham 2014)](http://www.jstatsoft.org/v59/i10/paper)So, you've sat down to analyze a new dataset.What do you do first?In episode 11 of [Not So Standard Deviations](https://www.patreon.com/NSSDeviations?ty=h), Hilary and Roger discussed their typical approaches.I'm with Hilary on this one, you should make sure your data is tidy.Before you do any plots, filtering, transformations, summary statistics, regressions...Without a tidy dataset, you'll be fighting your tools to get the result you need.With a tidy dataset, it's relatively easy to do all of those.Hadley Wickham kindly summarized tidiness as a dataset where1. Each variable forms a column2. Each observation forms a row3. Each type of observational unit forms a tableAnd today we'll only concern ourselves with the first two.As quoted at the top, this really is about facilitating analysis: going as quickly as possible from question to answer. 
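Before loading any real data, here is a minimal toy sketch of that idea (the tiny `scores` table below is invented purely for illustration and is not part of the NBA dataset used later): ###Code import pandas as pd

# a small "wide" table: one row per student, one column per subject
scores = pd.DataFrame({'student': ['ann', 'bob'],
                       'math': [90, 60],
                       'physics': [80, 70]})

# the tidy version: one row per (student, subject) observation
pd.melt(scores, id_vars='student', var_name='subject', value_name='score') ###Output _____no_output_____ ###Markdown Each variable (`student`, `subject`, `score`) now has its own column and each observation its own row, which is the shape the rest of this post works toward.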
###Code %matplotlib inline import os import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns import time pd.options.display.max_rows = 10 sns.set(style='ticks', context='talk') ###Output _____no_output_____ ###Markdown NBA Data[This](http://stackoverflow.com/questions/22695680/python-pandas-timedelta-specific-rows) StackOverflow question asked about calculating the number of days of rest NBA teams have between games.The answer would have been difficult to compute with the raw data.After transforming the dataset to be tidy, we're able to quickly get the answer.We'll grab some NBA game data from basketball-reference.com using pandas' `read_html` function, which returns a list of DataFrames. ###Code def download_data(month): tables = pd.read_html(f"https://www.basketball-reference.com/leagues/NBA_2016_games-{month}.html") table = tables[0] return table[table['Date'] != 'Playoffs'] fp = 'data/nba.csv' months = ["november", "december", "january", "february", "march", "april"] if not os.path.exists(fp): dfs = [download_data(month) for month in months] games = pd.concat(dfs, axis=0) games.to_csv(fp) else: games = pd.read_csv(fp, index_col=0) games.head() ###Output _____no_output_____ ###Markdown Side note: pandas' `read_html` is pretty good. On simple websites it almost always works.It provides a couple parameters for controlling what gets selected from the webpage if the defaults fail.I'll always use it first, before moving on to [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) or [lxml](http://lxml.de/) if the page is more complicated.As you can see, we have a bit of general munging to do before tidying.Each month slips in an extra row of mostly NaNs, the column names aren't too useful, and we have some dtypes to fix up. ###Code column_names = {'Date': 'date', 'Start (ET)': 'start', 'Unamed: 2': 'box', 'Visitor/Neutral': 'away_team', 'PTS': 'away_points', 'Home/Neutral': 'home_team', 'PTS.1': 'home_points', 'Unamed: 7': 'n_ot'} games = (games.rename(columns=column_names) .dropna(thresh=4) [['date', 'away_team', 'away_points', 'home_team', 'home_points']] .assign(away_points=lambda x: pd.to_numeric(x["away_points"])) .assign(home_points=lambda x: pd.to_numeric(x["home_points"])) .assign(date=lambda x: pd.to_datetime(x['date'], format='%a, %b %d, %Y')) .set_index('date', append=True) .rename_axis(["game_id", "date"]) .sort_index()) games.head() ###Output _____no_output_____ ###Markdown A quick aside on that last block.- `dropna` has a `thresh` argument. If at least `thresh` items are missing, the row is dropped. We used it to remove the "Month headers" that slipped into the table.- `assign` can take a callable. This lets us refer to the DataFrame in the previous step of the chain. Otherwise we would have to assign `temp_df = games.dropna()...` And then do the `pd.to_datetime` on that.- `set_index` has an `append` keyword. We keep the original index around since it will be our unique identifier per game.- We use `.rename_axis` to set the index names (this behavior is new in pandas 0.18; before `.rename_axis` only took a mapping for changing labels). The Question:> **How many days of rest did each team get between each game?**Whether or not your dataset is tidy depends on your question. Given our question, what is an observation?In this case, an observation is a `(team, game)` pair, which we don't have yet. Rather, we have two observations per row, one for home and one for away. 
We'll fix that with `pd.melt`.`pd.melt` works by taking observations that are spread across columns (`away_team`, `home_team`), and melting them down into one column with multiple rows. However, we don't want to lose the metadata (like `game_id` and `date`) that is shared between the observations. By including those columns as `id_vars`, the values will be repeated as many times as needed to stay with their observations. ###Code tidy = pd.melt(games.reset_index(), id_vars=['game_id', 'date'], value_vars=['away_team', 'home_team'], value_name='team') tidy.head() ###Output _____no_output_____ ###Markdown The DataFrame `tidy` meets our rules for tidiness: each variable is in a column, and each observation (`team`, `date` pair) is on its own row.Now the translation from question ("How many days of rest between games") to operation ("date of today's game - date of previous game - 1") is direct: ###Code # For each team... get number of days between games tidy.groupby('team')['date'].diff().dt.days - 1 ###Output _____no_output_____ ###Markdown That's the essence of tidy data, the reason why it's worth considering what shape your data should be in.It's about setting yourself up for success so that the answers naturally flow from the data (just kidding, it's usually still difficult. But hopefully less so).Let's assign that back into our DataFrame ###Code tidy['rest'] = tidy.sort_values('date').groupby('team').date.diff().dt.days - 1 tidy.dropna().head() ###Output _____no_output_____ ###Markdown To show the inverse of `melt`, let's take `rest` values we just calculated and place them back in the original DataFrame with a `pivot_table`. ###Code by_game = (pd.pivot_table(tidy, values='rest', index=['game_id', 'date'], columns='variable') .rename(columns={'away_team': 'away_rest', 'home_team': 'home_rest'})) df = pd.concat([games, by_game], axis=1) df.dropna().head() by_game ###Output _____no_output_____ ###Markdown One somewhat subtle point: an "observation" depends on the question being asked.So really, we have two tidy datasets, `tidy` for answering team-level questions, and `df` for answering game-level questions.One potentially interesting question is "what was each team's average days of rest, at home and on the road?" With a tidy dataset (the DataFrame `tidy`, since it's team-level), `seaborn` makes this easy (more on seaborn in a future post): ###Code sns.set(style='ticks', context='paper') g = sns.FacetGrid(tidy, col='team', col_wrap=6, hue='team', height=2) g.map(sns.barplot, 'variable', 'rest') ###Output _____no_output_____ ###Markdown An example of a game-level statistic is the distribution of rest differences in games: ###Code df['home_win'] = df['home_points'] > df['away_points'] df['rest_spread'] = df['home_rest'] - df['away_rest'] df.dropna().head() delta = (by_game.home_rest - by_game.away_rest).dropna().astype(int) ax = (delta.value_counts() .reindex(np.arange(delta.min(), delta.max() + 1), fill_value=0) .sort_index() .plot(kind='bar', color='k', width=.9, rot=0, figsize=(12, 6)) ) sns.despine() ax.set(xlabel='Difference in Rest (Home - Away)', ylabel='Games'); ###Output _____no_output_____ ###Markdown Or the win percent by rest difference ###Code fig, ax = plt.subplots(figsize=(12, 6)) sns.barplot(x='rest_spread', y='home_win', data=df.query('-3 <= rest_spread <= 3'), color='#4c72b0', ax=ax) sns.despine() ###Output _____no_output_____ ###Markdown Stack / UnstackPandas has two useful methods for quickly converting from wide to long format (`stack`) and long to wide (`unstack`). 
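Before applying them to the rest data below, here is a tiny illustrative sketch (the two-row `wide` frame is made up purely for this example and is not derived from the NBA data): ###Code import pandas as pd

# a tiny wide frame: one column per team side
wide = pd.DataFrame({'home_team': [1.0, 2.0], 'away_team': [3.0, 4.0]},
                    index=pd.Index(['2015-10-27', '2015-10-28'], name='date'))

# stack: the columns become the innermost index level (wide -> long)
long_form = wide.stack()

# unstack: the innermost index level moves back up to the columns (long -> wide)
long_form.unstack() ###Output _____no_output_____ ###Markdown `long_form` is a Series with a two-level index (`date`, team side), and `unstack` round-trips it back to `wide`.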
###Code rest = (tidy.groupby(['date', 'variable']) .rest.mean() .dropna()) rest.head() ###Output _____no_output_____ ###Markdown `rest` is in a "long" form since we have a single column of data, with multiple "columns" of metadata (in the MultiIndex). We use `.unstack` to move from long to wide. ###Code rest.unstack() ###Output _____no_output_____ ###Markdown `unstack` moves a level of a MultiIndex (innermost by default) up to the columns.`stack` is the inverse. ###Code rest.unstack().stack() ###Output _____no_output_____ ###Markdown With `.unstack` you can move between those APIs that expect there data in long-format and those APIs that work with wide-format data. For example, `DataFrame.plot()`, works with wide-form data, one line per column. ###Code with sns.color_palette() as pal: b, g = pal.as_hex()[:2] ax=(rest.unstack() .query('away_team < 7') .rolling(7) .mean() .plot(figsize=(12, 6), linewidth=3, legend=False)) ax.set(ylabel='Rest (7 day MA)') ax.annotate("Home", (rest.index[-1][0], 1.02), color=g, size=14) ax.annotate("Away", (rest.index[-1][0], 0.82), color=b, size=14) sns.despine() ###Output _____no_output_____ ###Markdown The most conenient form will depend on exactly what you're doing.When interacting with databases you'll often deal with long form data.Pandas' `DataFrame.plot` often expects wide-form data, while `seaborn` often expect long-form data. Regressions will expect wide-form data. Either way, it's good to be comfortable with `stack` and `unstack` (and MultiIndexes) to quickly move between the two. Mini Project: Home Court Advantage?We've gone to all that work tidying our dataset, let's put it to use.What's the effect (in terms of probability to win) of beingthe home team? Step 1: Create an outcome variableWe need to create an indicator for whether the home team won.Add it as a column called `home_win` in `games`. ###Code df['home_win'] = df.home_points > df.away_points ###Output _____no_output_____ ###Markdown Step 2: Find the win percent for each teamIn the 10-minute literature review I did on the topic, it seems like people include a team-strength variable in their regressions.I suppose that makes sense; if stronger teams happened to play against weaker teams at home more often than away, it'd look like the home-effect is stronger than it actually is.We'll do a terrible job of controlling for team strength by calculating each team's win percent and using that as a predictor.It'd be better to use some kind of independent measure of team strength, but this will do for now.We'll use a similar `melt` operation as earlier, only now with the `home_win` variable we just created. ###Code df.reset_index().head(2) wins = ( pd.melt(df.reset_index(), id_vars=['game_id', 'date', 'home_win'], value_name='team', var_name='is_home', value_vars=['home_team', 'away_team']) .assign(win=lambda x: x.home_win == (x.is_home == 'home_team')) .groupby(['team', 'is_home']) .win .agg(['sum', 'count', 'mean']) .rename(columns=dict(sum='n_wins', count='n_games', mean='win_pct')) ) wins.head() ###Output _____no_output_____ ###Markdown Pause for visualiztion, because why not ###Code g = sns.FacetGrid(wins.reset_index(), hue='team', height=7, aspect=.5, palette=['k']) g.map(sns.pointplot, 'is_home', 'win_pct').set(ylim=(0, 1)) ###Output _____no_output_____ ###Markdown (It'd be great if there was a library built on top of matplotlib that auto-labeled each point decently well. Apparently this is a difficult problem to do in general). 
###Code g = sns.FacetGrid(wins.reset_index(), col='team', hue='team', col_wrap=5, height=2) g.map(sns.pointplot, 'is_home', 'win_pct') ###Output _____no_output_____ ###Markdown Those two graphs show that most teams have a higher win-percent at home than away. So we can continue to investigate.Let's aggregate over home / away to get an overall win percent per team. ###Code win_percent = ( # Use sum(games) / sum(games) instead of mean # since I don't know if teams play the same # number of games at home as away wins.groupby(level='team', as_index=True) .apply(lambda x: x.n_wins.sum() / x.n_games.sum()) ) win_percent.head() win_percent.sort_values().plot.barh(figsize=(6, 12), width=.85, color='k') plt.tight_layout() sns.despine() plt.xlabel("Win Percent") ###Output _____no_output_____ ###Markdown Is there a relationship between overall team strength and their home-court advantage? ###Code plt.figure(figsize=(8, 5)) (wins.win_pct .unstack() .assign(**{'Home Win % - Away %': lambda x: x.home_team - x.away_team, 'Overall %': lambda x: (x.home_team + x.away_team) / 2}) .pipe((sns.regplot, 'data'), x='Overall %', y='Home Win % - Away %') ) sns.despine() plt.tight_layout() ###Output _____no_output_____ ###Markdown Let's get the team strength back into `df`.You could you `pd.merge`, but I prefer `.map` when joining a `Series`. ###Code df = df.assign(away_strength=df['away_team'].map(win_percent), home_strength=df['home_team'].map(win_percent), point_diff=df['home_points'] - df['away_points'], rest_diff=df['home_rest'] - df['away_rest']) df.head() import statsmodels.formula.api as sm df['home_win'] = df.home_win.astype(int) # for statsmodels mod = sm.logit('home_win ~ home_strength + away_strength + home_rest + away_rest', df) res = mod.fit() res.summary() ###Output _____no_output_____ ###Markdown The strength variables both have large coefficeints (really we should be using some independent measure of team strength here, `win_percent` is showing up on the left and right side of the equation). The rest variables don't seem to matter as much.With `.assign` we can quickly explore variations in formula. ###Code (sm.logit('home_win ~ strength_diff + rest_spread', df.assign(strength_diff=df.home_strength - df.away_strength)) .fit().summary()) mod = sm.logit('home_win ~ home_rest + away_rest', df) res = mod.fit() res.summary() ###Output _____no_output_____ ###Markdown Reshaping & Tidy Data> Structuring datasets to facilitate analysis [(Wickham 2014)](http://www.jstatsoft.org/v59/i10/paper)So, you've sat down to analyze a new dataset.What do you do first?In episode 11 of [Not So Standard Deviations](https://www.patreon.com/NSSDeviations?ty=h), Hilary and Roger discussed their typical approaches.I'm with Hilary on this one, you should make sure your data is tidy.Before you do any plots, filtering, transformations, summary statistics, regressions...Without a tidy dataset, you'll be fighting your tools to get the result you need.With a tidy dataset, it's relatively easy to do all of those.Hadley Wickham kindly summarized tidiness as a dataset where1. Each variable forms a column2. Each observation forms a row3. Each type of observational unit forms a tableAnd today we'll only concern ourselves with the first two.As quoted at the top, this really is about facilitating analysis: going as quickly as possible from question to answer. 
###Code %matplotlib inline import os import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt if int(os.environ.get("MODERN_PANDAS_EPUB", 0)): import prep # noqa pd.options.display.max_rows = 10 sns.set(style='ticks', context='talk') ###Output _____no_output_____ ###Markdown NBA Data[This](http://stackoverflow.com/questions/22695680/python-pandas-timedelta-specific-rows) StackOverflow question asked about calculating the number of days of rest NBA teams have between games.The answer would have been difficult to compute with the raw data.After transforming the dataset to be tidy, we're able to quickly get the answer.We'll grab some NBA game data from basketball-reference.com using pandas' `read_html` function, which returns a list of DataFrames. ###Code fp = 'data/nba.csv' if not os.path.exists(fp): tables = pd.read_html("http://www.basketball-reference.com/leagues/NBA_2016_games.html") games = tables[0] games.to_csv(fp) else: games = pd.read_csv(fp, index_col=0) games.head() ###Output _____no_output_____ ###Markdown Side note: pandas' `read_html` is pretty good. On simple websites it almost always works.It provides a couple parameters for controlling what gets selected from the webpage if the defaults fail.I'll always use it first, before moving on to [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) or [lxml](http://lxml.de/) if the page is more complicated.As you can see, we have a bit of general munging to do before tidying.Each month slips in an extra row of mostly NaNs, the column names aren't too useful, and we have some dtypes to fix up. ###Code column_names = {'Date': 'date', 'Start (ET)': 'start', 'Unamed: 2': 'box', 'Visitor/Neutral': 'away_team', 'PTS': 'away_points', 'Home/Neutral': 'home_team', 'PTS.1': 'home_points', 'Unamed: 7': 'n_ot'} games = (games.rename(columns=column_names) .dropna(thresh=4) [['date', 'away_team', 'away_points', 'home_team', 'home_points']] .assign(date=lambda x: pd.to_datetime(x['date'], format='%a, %b %d, %Y')) .set_index('date', append=True) .rename_axis(["game_id", "date"]) .sort_index()) games.head() ###Output _____no_output_____ ###Markdown A quick aside on that last block.- `dropna` has a `thresh` argument. If at least `thresh` items are missing, the row is dropped. We used it to remove the "Month headers" that slipped into the table.- `assign` can take a callable. This lets us refer to the DataFrame in the previous step of the chain. Otherwise we would have to assign `temp_df = games.dropna()...` And then do the `pd.to_datetime` on that.- `set_index` has an `append` keyword. We keep the original index around since it will be our unique identifier per game.- We use `.rename_axis` to set the index names (this behavior is new in pandas 0.18; before `.rename_axis` only took a mapping for changing labels). The Question:> **How many days of rest did each team get between each game?**Whether or not your dataset is tidy depends on your question. Given our question, what is an observation?In this case, an observation is a `(team, game)` pair, which we don't have yet. Rather, we have two observations per row, one for home and one for away. We'll fix that with `pd.melt`.`pd.melt` works by taking observations that are spread across columns (`away_team`, `home_team`), and melting them down into one column with multiple rows. However, we don't want to lose the metadata (like `game_id` and `date`) that is shared between the observations. 
By including those columns as `id_vars`, the values will be repeated as many times as needed to stay with their observations. ###Code tidy = pd.melt(games.reset_index(), id_vars=['game_id', 'date'], value_vars=['away_team', 'home_team'], value_name='team') tidy.head() ###Output _____no_output_____ ###Markdown The DataFrame `tidy` meets our rules for tidiness: each variable is in a column, and each observation (`team`, `date` pair) is on its own row.Now the translation from question ("How many days of rest between games") to operation ("date of today's game - date of previous game - 1") is direct: ###Code # For each team... get number of days between games tidy.groupby('team')['date'].diff().dt.days - 1 ###Output _____no_output_____ ###Markdown That's the essence of tidy data, the reason why it's worth considering what shape your data should be in.It's about setting yourself up for success so that the answers naturally flow from the data (just kidding, it's usually still difficult. But hopefully less so).Let's assign that back into our DataFrame ###Code tidy['rest'] = tidy.sort_values('date').groupby('team').date.diff().dt.days - 1 tidy.dropna().head() ###Output _____no_output_____ ###Markdown To show the inverse of `melt`, let's take `rest` values we just calculated and place them back in the original DataFrame with a `pivot_table`. ###Code by_game = (pd.pivot_table(tidy, values='rest', index=['game_id', 'date'], columns='variable') .rename(columns={'away_team': 'away_rest', 'home_team': 'home_rest'})) df = pd.concat([games, by_game], axis=1) df.dropna().head() ###Output _____no_output_____ ###Markdown One somewhat subtle point: an "observation" depends on the question being asked.So really, we have two tidy datasets, `tidy` for answering team-level questions, and `df` for answering game-level questions.One potentially interesting question is "what was each team's average days of rest, at home and on the road?" With a tidy dataset (the DataFrame `tidy`, since it's team-level), `seaborn` makes this easy (more on seaborn in a future post): ###Code sns.set(style='ticks', context='paper') g = sns.FacetGrid(tidy, col='team', col_wrap=6, hue='team', size=2) g.map(sns.barplot, 'variable', 'rest'); ###Output _____no_output_____ ###Markdown An example of a game-level statistic is the distribution of rest differences in games: ###Code df['home_win'] = df['home_points'] > df['away_points'] df['rest_spread'] = df['home_rest'] - df['away_rest'] df.dropna().head() delta = (by_game.home_rest - by_game.away_rest).dropna().astype(int) ax = (delta.value_counts() .reindex(np.arange(delta.min(), delta.max() + 1), fill_value=0) .sort_index() .plot(kind='bar', color='k', width=.9, rot=0, figsize=(12, 6)) ) sns.despine() ax.set(xlabel='Difference in Rest (Home - Away)', ylabel='Games'); ###Output _____no_output_____ ###Markdown Or the win percent by rest difference ###Code fig, ax = plt.subplots(figsize=(12, 6)) sns.barplot(x='rest_spread', y='home_win', data=df.query('-3 <= rest_spread <= 3'), color='#4c72b0', ax=ax) sns.despine() ###Output _____no_output_____ ###Markdown Stack / UnstackPandas has two useful methods for quickly converting from wide to long format (`stack`) and long to wide (`unstack`). ###Code rest = (tidy.groupby(['date', 'variable']) .rest.mean() .dropna()) rest.head() ###Output _____no_output_____ ###Markdown `rest` is in a "long" form since we have a single column of data, with multiple "columns" of metadata (in the MultiIndex). We use `.unstack` to move from long to wide. 
###Code rest.unstack().head() ###Output _____no_output_____ ###Markdown `unstack` moves a level of a MultiIndex (innermost by default) up to the columns.`stack` is the inverse. ###Code rest.unstack().stack() ###Output _____no_output_____ ###Markdown With `.unstack` you can move between those APIs that expect there data in long-format and those APIs that work with wide-format data. For example, `DataFrame.plot()`, works with wide-form data, one line per column. ###Code with sns.color_palette() as pal: b, g = pal.as_hex()[:2] ax=(rest.unstack() .query('away_team < 7') .rolling(7) .mean() .plot(figsize=(12, 6), linewidth=3, legend=False)) ax.set(ylabel='Rest (7 day MA)') ax.annotate("Home", (rest.index[-1][0], 1.02), color=g, size=14) ax.annotate("Away", (rest.index[-1][0], 0.82), color=b, size=14) sns.despine() ###Output _____no_output_____ ###Markdown The most conenient form will depend on exactly what you're doing.When interacting with databases you'll often deal with long form data.Pandas' `DataFrame.plot` often expects wide-form data, while `seaborn` often expect long-form data. Regressions will expect wide-form data. Either way, it's good to be comfortable with `stack` and `unstack` (and MultiIndexes) to quickly move between the two. Mini Project: Home Court Advantage?We've gone to all that work tidying our dataset, let's put it to use.What's the effect (in terms of probability to win) of beingthe home team? Step 1: Create an outcome variableWe need to create an indicator for whether the home team won.Add it as a column called `home_win` in `games`. ###Code df['home_win'] = df.home_points > df.away_points ###Output _____no_output_____ ###Markdown Step 2: Find the win percent for each teamIn the 10-minute literature review I did on the topic, it seems like people include a team-strength variable in their regressions.I suppose that makes sense; if stronger teams happened to play against weaker teams at home more often than away, it'd look like the home-effect is stronger than it actually is.We'll do a terrible job of controlling for team strength by calculating each team's win percent and using that as a predictor.It'd be better to use some kind of independent measure of team strength, but this will do for now.We'll use a similar `melt` operation as earlier, only now with the `home_win` variable we just created. ###Code wins = ( pd.melt(df.reset_index(), id_vars=['game_id', 'date', 'home_win'], value_name='team', var_name='is_home', value_vars=['home_team', 'away_team']) .assign(win=lambda x: x.home_win == (x.is_home == 'home_team')) .groupby(['team', 'is_home']) .win .agg({'n_wins': 'sum', 'n_games': 'count', 'win_pct': 'mean'}) ) wins.head() ###Output _____no_output_____ ###Markdown Pause for visualiztion, because why not ###Code g = sns.FacetGrid(wins.reset_index(), hue='team', size=7, aspect=.5, palette=['k']) g.map(sns.pointplot, 'is_home', 'win_pct').set(ylim=(0, 1)); ###Output _____no_output_____ ###Markdown (It'd be great if there was a library built on top of matplotlib that auto-labeled each point decently well. Apparently this is a difficult problem to do in general). ###Code g = sns.FacetGrid(wins.reset_index(), col='team', hue='team', col_wrap=5, size=2) g.map(sns.pointplot, 'is_home', 'win_pct') ###Output _____no_output_____ ###Markdown Those two graphs show that most teams have a higher win-percent at home than away. So we can continue to investigate.Let's aggregate over home / away to get an overall win percent per team. 
###Code win_percent = ( # Use sum(games) / sum(games) instead of mean # since I don't know if teams play the same # number of games at home as away wins.groupby(level='team', as_index=True) .apply(lambda x: x.n_wins.sum() / x.n_games.sum()) ) win_percent.head() win_percent.sort_values().plot.barh(figsize=(6, 12), width=.85, color='k') plt.tight_layout() sns.despine() plt.xlabel("Win Percent") ###Output _____no_output_____ ###Markdown Is there a relationship between overall team strength and their home-court advantage? ###Code plt.figure(figsize=(8, 5)) (wins.win_pct .unstack() .assign(**{'Home Win % - Away %': lambda x: x.home_team - x.away_team, 'Overall %': lambda x: (x.home_team + x.away_team) / 2}) .pipe((sns.regplot, 'data'), x='Overall %', y='Home Win % - Away %') ) sns.despine() plt.tight_layout() ###Output _____no_output_____ ###Markdown Let's get the team strength back into `df`.You could you `pd.merge`, but I prefer `.map` when joining a `Series`. ###Code df = df.assign(away_strength=df['away_team'].map(win_percent), home_strength=df['home_team'].map(win_percent), point_diff=df['home_points'] - df['away_points'], rest_diff=df['home_rest'] - df['away_rest']) df.head() import statsmodels.formula.api as sm df['home_win'] = df.home_win.astype(int) # for statsmodels mod = sm.logit('home_win ~ home_strength + away_strength + home_rest + away_rest', df) res = mod.fit() res.summary() ###Output Optimization terminated successfully. Current function value: 0.552792 Iterations 6 ###Markdown The strength variables both have large coefficeints (really we should be using some independent measure of team strength here, `win_percent` is showing up on the left and right side of the equation). The rest variables don't seem to matter as much.With `.assign` we can quickly explore variations in formula. ###Code (sm.Logit.from_formula('home_win ~ strength_diff + rest_spread', df.assign(strength_diff=df.home_strength - df.away_strength)) .fit().summary()) mod = sm.Logit.from_formula('home_win ~ home_rest + away_rest', df) res = mod.fit() res.summary() ###Output Optimization terminated successfully. Current function value: 0.676549 Iterations 4
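###Markdown As an optional side note (not part of the original write-up), one way to make these logit coefficients easier to read is to exponentiate them into odds ratios: ###Code import numpy as np

# exp(coef) is the multiplicative change in the odds of a home win per one-unit
# increase in that predictor, holding the other terms fixed
np.exp(res.params) ###Output _____no_output_____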
1. Data Preparation.ipynb
###Markdown Correlation function of DR72 SDSS VAGC Catalog First import all the modules such as healpy and astropy needed for analyzing the structure ###Code import healpix_util as hu import astropy as ap import numpy as np from astropy.io import fits from astropy.table import Table import astropy.io.ascii as ascii from astropy.io import fits from astropy.constants import c import matplotlib.pyplot as plt import math as m from math import pi import scipy.special as sp from scipy import integrate import warnings from sklearn.neighbors import BallTree import pickle import pymangle from scipy.optimize import curve_fit %matplotlib inline dr7full=ascii.read("./input/DR7-Full.ascii") dr7full z=dr7full['col3'] rad=dr7full['col1'] decd=dr7full['col2'] #Ez = lambda x: 1.0/m.sqrt(0.3*(1+x)**3+0.7) Om=0.3 Ol=0.7 Ok=0.0 def Ez(zv): return 1.0/m.sqrt(Om*(1.0+zv)**3+Ok*(1.0+zv)**2+Ol) np.vectorize(Ez) #Calculate comoving distance of a data point using the Redshift - This definition is based on the cosmology model we take. Here the distance for E-dS universe is considered. Also note that c/H0 ratio is cancelled in the equations and hence not taken. def DC_LCDM(z): return integrate.quad(Ez, 0, z)[0] DC_LCDM=np.vectorize(DC_LCDM) DC_LCDM(2.0) DC=DC_LCDM(z) DC dr7f = open("./output/DR72srarf.dat",'w') dr7f.write("z\t ra\t dec\t s\t rar\t decr \n") for i in range(0,len(dr7full)): dr7f.write("%f\t " %z[i]) dr7f.write("%f\t %f\t " %(rad[i],decd[i])) dr7f.write("%f\t " %DC[i]) dr7f.write("%f\t %f\n " %(rad[i]*pi/180.0,decd[i]*pi/180.0)) dr7f.close() data=ascii.read("./output/DR72srarf.dat") data['z'] data['s'] data['rar'] data['decr'] NSIDE=512 dr72hpix=hu.HealPix("ring",NSIDE) pixdata = open("./output/pixdatadr72VAGCfull.dat",'w') pixdata.write("z\t pix \n") for i in range(0,len(data)): pixdata.write("%f\t" %data['z'][i]) pixdata.write("%d\n" %dr72hpix.eq2pix(data['ra'][i],data['dec'][i])) pixdata.close() pixdata = ascii.read("./output/pixdatadr72VAGCfull.dat") hpixdata=np.array(np.zeros(hu.nside2npix(NSIDE))) for j in range(len(pixdata)): hpixdata[pixdata[j]['pix']]+=1 hpixdata hu.mollview(hpixdata,rot=180) mangle=pymangle.Mangle("./masks/window.dr72safe0.ply") ###Output _____no_output_____ ###Markdown Ref: https://pypi.python.org/pypi/pymangle/ ###Code %%time rar,decr=mangle.genrand(2*len(data)) rar decr zr=np.array([data['z'],data['z']]) zr zr=zr.flatten() zr print len(zr) print len(dec) datR=ascii.read("./output/rand200kdr72.dat") ra=datR['ra'] dec=datR['dec'] DCr=DC_LCDM(zr) rdr7f = open("./output/rDR72srarf.dat",'w') rdr7f.write("z\t ra\t dec\t s\t rar\t decr \n") for i in range(0,len(zr)-1): rdr7f.write("%f\t " %zr[i]) rdr7f.write("%f\t %f\t " %(ra[i],dec[i])) rdr7f.write("%f\t " %DCr[i]) rdr7f.write("%f\t %f\n " %(ra[i]*pi/180.0,dec[i]*pi/180.0)) rdr7f.close() dataR=ascii.read("./output/rDR72srarf.dat") dataR['z'] NSIDE=512 rdr72hpix=hu.HealPix("ring",NSIDE) pixdata = open("./output/pixrand200kdr72.dat",'w') pixdata.write("z\t pix \n") for i in range(0,len(ra)): pixdata.write("%f\t" %zr[i]) pixdata.write("%d\n" %rdr72hpix.eq2pix(ra[i],dec[i])) pixdata.close() pixdata = ascii.read("./output/pixrand200kdr72.dat") hpixdata=np.array(np.zeros(hu.nside2npix(NSIDE))) for j in range(len(pixdata)): hpixdata[pixdata[j]['pix']]+=1 hpixdata hu.mollview(hpixdata,rot=180) plt.savefig("./plots/rand200kmnew.pdf") from scipy.stats import norm from sklearn.neighbors import KernelDensity from lcdmmetric import * z=np.array(data['z']) zkde=z.reshape(1,-1) kde = KernelDensity(kernel='gaussian', bandwidth=0.75).fit(zkde) 
kde X_plot = np.arange(z.min(), z.max(), z.size())[:, np.newaxis] log_dens = kde.score_samples(zkde) log_dens d=ascii.read("./output/DR72LCsrarf.dat") d dataR=ascii.read("./output/rand200kdr72.dat") dataR['z'] dataR['ra'] dataR['dec'] DCLCR=DC_LC(dataR['z']) rdr7f = open("./output/rDR7200kLCsrarf.dat",'w') rdr7f.write("z\t ra\t dec\t s\t rar\t decr \n") for i in range(0,len(dataR)): rdr7f.write("%f\t " %dataR['z'][i]) rdr7f.write("%f\t %f\t " %(dataR['ra'][i],dataR['dec'][i])) rdr7f.write("%f\t " %DCLCR[i]) rdr7f.write("%f\t %f\n " %(dataR['ra'][i]*pi/180.0,dataR['dec'][i]*pi/180.0)) rdr7f.close() r=ascii.read("./output/rDR7200kLCsrarf.dat") r dr7fdat=ascii.read("./output/DR7srarf.dat") dr7fdat['s'][1:300] #fdata=fits.open("/Users/rohin/Downloads/DR7-Full.fits") #fdata.writeto("./output/DR7fulltrim.fits") fdata=fits.open("./output/DR7fulltrim.fits") cols=fdata[1].columns cols.del_col('ZTYPE') cols.del_col('SECTOR') cols.del_col('FGOTMAIN') cols.del_col('QUALITY') cols.del_col('ISBAD') cols.del_col('M') cols.del_col('MMAX') cols.del_col('ILSS') cols.del_col('ICOMB') cols.del_col('VAGC_SELECT') cols.del_col('LSS_INDEX') cols.del_col('FIBERWEIGHT') cols.del_col('PRIMTARGET') cols.del_col('MG') cols.del_col('SECTOR_COMPLETENESS') cols.del_col('COMOV_DENSITY') cols.del_col('RADIAL_WEIGHT') fdata[1].columns fdata.writeto("./output/DR7fullzradec.fits") fdat=fits.open("./output/DR7fullzradec.fits") fdat[1].columns fdat[1].data['Z'] fdat[1].data['RA'] comovlcdm=DC_LCDM(fdat[1].data['Z']) fdat[1].data['Z'] comovlcdm comovlcdm.dtype #cols=fdat[1].columns nc=fits.Column(name='COMOV',format='D',array=comovlcdm) nc1=fits.Column(name='COMOV',format='D') fdata[1].data['Z'] fdata[1].data['RA'] nc nc.dtype #cols.add_col(nc) fdat[1].columns fdat[1].columns.info() fdat[1].columns.add_col(nc1) fdat[1].data['COMOV']=comovlcdm comovlcdm fdat[1].data['Z'] fdat[1].data['COMOV'] fdat[1].data['RA'] fdat[1].data['RA']=fdat[1].data['RA']*pi/180.0 comovlcdm=DC_LCDM(fdat[1].data['Z']) comovlcdm ###Output _____no_output_____ ###Markdown Random catalog created based on the survey limitations also taken from http://cosmo.nyu.edu/~eak306/SDSS-LRG.html ###Code dataR=fits.open("/Users/rohin/Downloads/random-DR7-Full.fits") dataR dataR=dataR[1].data len(dataR) NSIDE=512 dr72hpix=hu.HealPix("ring",NSIDE) pixdata = open("./output/pixdatadr72VAGCfullrand.dat",'w') pixdata.write("z\t pix \n") for i in range(0,len(data)-1): pixdata.write("%f\t" %data['z'][i]) pixdata.write("%d\n" %dr72hpix.eq2pix(dataR['ra'][i],dataR['dec'][i])) pixdata.close() pixdata = ascii.read("./output/pixdatadr72VAGCfullrand.dat") hpixdata=np.array(np.zeros(hu.nside2npix(NSIDE))) for j in range(len(pixdata)): hpixdata[pixdata[j]['pix']]+=1 hpixdata hu.mollview(hpixdata,rot=180) hu.orthview(hpixdata) ###Output _____no_output_____ ###Markdown I. Data Preparation (equal bin)The goal of this dataset is to clean the data set we'll use for data visualizations and training the model.We'll proceed to:- Data Exploration and data split- Feature Engineering (categorical encoding using equal bin discretiser and top/bottom encoding)- Save data for feature predictions Part I : Data Exploratoin 1. 
Importing libraries and data ###Code # to handle datasets import pandas as pd import numpy as np # for plotting import matplotlib.pyplot as plt %matplotlib inline # to divide train and test set from sklearn.model_selection import train_test_split # feature scaling from sklearn.preprocessing import MinMaxScaler # for tree binarisation from sklearn.tree import DecisionTreeRegressor from sklearn.model_selection import cross_val_score # to build the models from sklearn.linear_model import LinearRegression, Lasso from sklearn.ensemble import RandomForestRegressor from sklearn.svm import SVR import xgboost as xgb # to evaluate the models from sklearn.metrics import mean_squared_error from math import sqrt pd.pandas.set_option('display.max_columns', None) import warnings warnings.filterwarnings('ignore') #Scaling from sklearn.preprocessing import RobustScaler from sklearn.preprocessing import MinMaxScaler from sklearn.preprocessing import StandardScaler ###Output _____no_output_____ ###Markdown 2. Import and merge datasets ###Code df=pd.read_csv('Credit-Scoring-Clean.csv') df.shape #column list df.columns ###Output _____no_output_____ ###Markdown 3. Drop duplicate rows and columns ###Code #drop duplicate rows (using subset, I drop raws where values of columns mentioned match) df.drop_duplicates(subset=['CheckingAcctStat', 'Duration', 'CreditHistory', 'Purpose', 'CreditAmount', 'Savings', 'Employment', 'InstallmentRatePecnt', 'SexAndStatus', 'OtherDetorsGuarantors', 'PresentResidenceTime', 'Property', 'Age', 'OtherInstalments', 'Housing', 'ExistingCreditsAtBank', 'Job', 'NumberDependents', 'Telephone', 'ForeignWorker', 'CreditStatus'], inplace = True) #see shape after duplicate row removal df.shape ###Output _____no_output_____ ###Markdown There were no duplicate rows ###Code # remove duplicate columns _, i = np.unique(df.columns, return_index=True) df=df.iloc[:, i] df.shape df.head() ###Output _____no_output_____ ###Markdown There were no duplicate columns 4. Types of Variables ###Code # let's inspect the type of variables in pandas df.dtypes ###Output _____no_output_____ ###Markdown There are a mixture of numerical and categorical variables. Normally object type determines categorical. 
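As an optional aside (a sketch, not part of the original notebook), `select_dtypes` gives a more compact way to split the columns by dtype; the cells below build the lists explicitly instead, so that temporal and discrete variables can also be separated out: ###Code # optional aside: dtype-based column split in one call each
categorical_by_dtype = df.select_dtypes(include='object').columns.tolist()
numerical_by_dtype = df.select_dtypes(include='number').columns.tolist()
print(len(categorical_by_dtype), 'categorical |', len(numerical_by_dtype), 'numerical') ###Output _____no_output_____ ###Markdown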
4.1 Find categorical variables ###Code # find categorical variables
categorical = [var for var in df.columns if df[var].dtype=='O']
print('There are {} categorical variables'.format(len(categorical)))
categorical ###Output There are 13 categorical variables ###Markdown 4.2 Find temporal variables ###Code # make a list of the numerical variables first
numerical = [var for var in df.columns if df[var].dtype!='O']

# list of variables that contain year/date information
year_vars = [var for var in numerical if 'Yr' in var or 'Year' in var or 'Day' in var or 'Month' in var or 'Time' in var]
print('There are {} temporal variables'.format(len(year_vars)))
year_vars ###Output There are 1 temporal variables ###Markdown The only match is `PresentResidenceTime`, which is a length of residence rather than a date, so there are no true temporal variables to engineer. 4.3 Find discrete variables ###Code # let's visualise the values of the discrete variables
discrete = []
for var in numerical:
    if len(df[var].unique()) < 20 and var not in year_vars:
        print(var, ' values: ', df[var].unique())
        discrete.append(var)
        print()

print('There are {} discrete variables'.format(len(discrete)))
discrete

### Find Continuous
# there is no Id column or SalePrice target in this dataset, so the
# ['Id', 'SalePrice'] filter below is a harmless leftover from the template
numerical = [var for var in numerical if var not in discrete
             and var not in ['Id', 'SalePrice'] and var not in year_vars]
print('There are {} numerical and continuous variables'.format(len(numerical)))
numerical ###Output There are 4 numerical and continuous variables ###Markdown So there are:
- 13 categorical variables
- 1 temporal variable
- 3 discrete variables
- 4 continuous variables

5. Types of problems within variables

5.1 Missing values

Let's check whether any variable contains missing values. ###Code # let's now determine how many variables we have with missing information
vars_with_na = [var for var in df.columns if df[var].isnull().sum()>0]
print('Total variables that contain missing information: ', len(vars_with_na)) ###Output Total variables that contain missing information: 0 ###Markdown There are no variables with missing information!
###Code # let's inspect the type of those variables with a lot of missing information for var in df.columns: if df[var].isnull().mean()>0.80: print(var, df[var].unique()) # let's visualise the percentage of missing values for each variable for var in df.columns: if df[var].isnull().sum()>0: print(var, df[var].isnull().mean()) ###Output _____no_output_____ ###Markdown No variables with any above 80% of values missing information 5.2 Outliers 5.2.1 Start with numerical ###Code # let's look at the continuous variables numerical # let's make boxplots to visualise outliers in the continuous variables # and histograms to get an idea of the distribution for var in numerical: plt.figure(figsize=(15,6)) plt.subplot(1, 2, 1) fig = df.boxplot(column=var) fig.set_title('') fig.set_ylabel(var) plt.subplot(1, 2, 2) fig = df[var].hist(bins=20) fig.set_ylabel('number') fig.set_xlabel(var) plt.show() ###Output _____no_output_____ ###Markdown We can see the following variables present outliers:- Duration (top)- CreditAmount (top)- Age (top) 5.2.2 Continue with discrete variables We'll consider outliers those labels in discrete variables that are pressent in less than 1% of observations ###Code # outlies in discrete variables for var in discrete: (df.groupby(var)[var].count() / np.float(len(df))).plot.bar() plt.ylabel('Percentage of observations per label') plt.title(var) plt.show() #print(data[var].value_counts() / np.float(len(data))) print() ###Output _____no_output_____ ###Markdown All dicrete variables, except for CreditStatus (which is the TARGET) present outliers 5.2.3 Number of labels: Cardinality ###Code no_labels_ls = [] for var in categorical: no_labels_ls.append(len(df[var].unique())) tmp = pd.Series(no_labels_ls) tmp.index = pd.Series(categorical) tmp.plot.bar(figsize=(12,8)) plt.title('Number of categories in categorical variables') plt.xlabel('Categorical variables') plt.ylabel('Number of different categories') ###Output _____no_output_____ ###Markdown Most of them have a few categories, except for **purpose** that has 10. Yet ten categories is a reasonable number. 6. Split into train and test sets ###Code # Let's separate into train and test set X_train, X_test, y_train, y_test = train_test_split(df, df.CreditStatus, test_size=0.15, random_state=0) X_train.shape, X_test.shape X_train.columns y_train.head() ###Output _____no_output_____ ###Markdown For some reason, CreditStatus shows up in X_train. Fine, but before I chose the columsn to go on Dataframe, I will to remove from X_train/X_test Part II: Feature engineering 7. 
Engineering missing values**7.1 Continuous variables** ###Code # print variables with missing data # keep in mind that now that we created those new temporal variables, we # are going to treat them as numerical and continuous as well: # examine percentage of missing values for col in numerical+year_vars: if X_train[col].isnull().mean()>0: print(col, X_train[col].isnull().mean()) ###Output _____no_output_____ ###Markdown No missing values in continuous variables **7.2 Discrete variables**I first redifine discrete variables (removing the TARGET) ###Code discrete = ['ExistingCreditsAtBank', 'NumberDependents'] # print variables with missing data for col in discrete: if X_train[col].isnull().mean()>0: print(col, X_train[col].isnull().mean()) ###Output _____no_output_____ ###Markdown There are no discrete variables variables with missing data 7.3 Engineering Missing data in Categorical Variables ###Code # print variables with missing data for col in categorical: if X_train[col].isnull().mean()>0: print(col, X_train[col].isnull().mean()) ###Output _____no_output_____ ###Markdown No missing values in Categorical variables**Sanity check** to make sure I have no more nulls ###Code # check absence of null values for var in X_train.columns: if X_train[var].isnull().sum()>0: print(var, X_train[var].isnull().sum()) # check absence of null values for var in X_test.columns: if X_test[var].isnull().sum()>0: print(var, X_test[var].isnull().sum()) ###Output _____no_output_____ ###Markdown Well done! I have no more NA in the variables. 8. Outlier engineering 8.1 Outlier identification and strategy setting ###Code numerical for var in numerical: plt.figure(figsize=(15,6)) plt.subplot(1, 2, 1) fig = df.boxplot(column=var) fig.set_title('') fig.set_ylabel(var) ###Output _____no_output_____ ###Markdown As we saw, the following variables present outliers:- Duration (top)- CreditAmount (top)- Age (top)To decide how to handle these outliers, I shall check for their distribution skewness. ###Code # let's find the skewness of above variables for var in ['Duration','CreditAmount','Age']: print(var, 'skewness is', df[var].skew() ) ###Output Duration skewness is 0.9874464796330478 CreditAmount skewness is 1.7628726662547298 Age skewness is 0.968198695171307 ###Markdown If we consider an absolute value of 1 to be normal skewness. I shall deal with outliers as follows: - Equal binning for variables with a skewness above 1: - CreditAmount - Bottom/top capping for: - Age - Duration 8.2 Equal binning ###Code # and now, I will divide into 10 quantiles for the rest of the exercise. 
I will leave the quantile # boundary and generate labels as well for comparison # create 10 labels, one for each quantile labels = ['Q'+str(i+1) for i in range(0,10)] print(labels) # bins with labels X_train['CreditAmount_label'], bins = pd.qcut(x=X_train.CreditAmount, q=10, labels=labels, retbins=True, precision=3, duplicates='raise') # bins with boundaries X_train['CreditAmount_disc'], bins = pd.qcut(x=X_train.CreditAmount, q=10, retbins=True, precision=3, duplicates='raise') X_train.head(10) # create 10 labels, one for each quantile labels = ['Q'+str(i+1) for i in range(0,10)] print(labels) # bins with labels X_train['CreditAmount_label'], bins = pd.qcut(x=X_train.CreditAmount, q=10, labels=labels, retbins=True, precision=3, duplicates='raise') # bins with boundaries X_train['CreditAmount_disc'], bins = pd.qcut(x=X_train.CreditAmount, q=10, retbins=True, precision=3, duplicates='raise') # we use pandas cut method and pass the quantile edges calculated in the training set X_test['CreditAmount_disc'] = pd.cut(x = X_test.CreditAmount, bins=bins, labels=labels) X_test['CreditAmount_label'] = pd.cut(x = X_test.CreditAmount, bins=bins, labels=labels) X_train.head() ###Output _____no_output_____ ###Markdown Check for NA before I continue ###Code X_train.isnull().sum() X_test.isnull().sum() # Replace missing values with most common label # with this command we capture the most frequent label (check output with plot above) X_train.groupby(['CreditAmount_label'])['CreditAmount_label'].count().sort_values(ascending=False).index[0] # Replace missing values with most common label # with this command we capture the most frequent label (check output with plot above) X_train.groupby(['CreditAmount_disc'])['CreditAmount_disc'].count().sort_values(ascending=False).index[0] # let's create a variable to replace NA with the most frequent label # both in train and test set def impute_na(df_train, df_test, variable): most_frequent_category = df_train.groupby([variable])[variable].count().sort_values(ascending=False).index[0] df_train[variable].fillna(most_frequent_category, inplace=True) df_test[variable].fillna(most_frequent_category, inplace=True) # and let's replace the NA for variable in ['CreditAmount_label']: impute_na(X_train, X_test, variable) ###Output _____no_output_____ ###Markdown Now I will drop the var_disc, as I will use the varibles_label. ###Code X_train = X_train.drop('CreditAmount_disc', axis = 1) X_test = X_test.drop('CreditAmount_disc', axis = 1) X_train.isnull().sum() X_test.isnull().sum() ###Output _____no_output_____ ###Markdown 8.2.1 Combine discretisation with label ordering according to the targetI shall proceed to encode using mean or risk enconding. 
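Before applying it to the real data, here is a tiny standalone illustration of what mean/risk encoding does (the values below are made up and not part of the pipeline): each quantile label is replaced by the mean of the target within that label, so the encoded values order the labels by observed risk. The actual encoding on X_train follows in the next cell.
###Code # toy illustration of mean/risk encoding on hypothetical values
toy = pd.DataFrame({'CreditAmount_label': ['Q1', 'Q1', 'Q2', 'Q2', 'Q3'],
                    'CreditStatus':       [0,    1,    0,    0,    1]})
risk_map = toy.groupby('CreditAmount_label')['CreditStatus'].mean().to_dict()
print(risk_map)  # {'Q1': 0.5, 'Q2': 0.0, 'Q3': 1.0}
toy['CreditAmount_ordered'] = toy['CreditAmount_label'].map(risk_map)
###Output _____no_output_____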
###Code ordered_labels = X_train.groupby(['CreditAmount_label'])['CreditStatus'].mean().to_dict()
ordered_labels
# replace the labels with the 'risk' (target frequency)
# note that we calculated the frequencies based on the training set only
X_train['CreditAmount_ordered'] = X_train.CreditAmount_label.map(ordered_labels)
X_test['CreditAmount_ordered'] = X_test.CreditAmount_label.map(ordered_labels)
###Output _____no_output_____
###Markdown To prevent overfitting, I can now delete the columns used to create the var_label
###Code X_train = X_train.drop('CreditAmount_label', axis = 1)
X_test = X_test.drop('CreditAmount_label', axis = 1)
# also delete the original CreditAmount column
X_train = X_train.drop('CreditAmount', axis = 1)
X_test = X_test.drop('CreditAmount', axis = 1)
X_train.isnull().sum()
X_test.isnull().sum()
###Output _____no_output_____
###Markdown **8.3 Outlier handling with top/bottom capping**For variables with skew values less than absolute 1 - Duration (top)- Age (top)
###Code outliertopbot = ['Duration', 'Age']

def top_code(df, variable, top):
    return np.where(df[variable]>=top, top, df[variable])

# cap each variable at its upper fence; note that we use a separate loop variable
# so that we do not overwrite the original df
for var in outliertopbot:
    IQR = df[var].quantile(0.75) - df[var].quantile(0.25)
    Lower_fence = df[var].quantile(0.25) - (IQR * 3)
    Upper_fence = df[var].quantile(0.75) + (IQR * 3)
    print([var], Lower_fence, '&', Upper_fence)
    for dataset in [X_train, X_test]:
        dataset[var] = top_code(dataset, var, Upper_fence)

X_train['Age'].describe()
###Output _____no_output_____
###Markdown 9. Engineer rare labels in categorical/discrete variables Remember the discrete variables ['CreditStatus', 'ExistingCreditsAtBank', 'NumberDependents'], but 'CreditStatus' is to be ignored, since it's the label
###Code # the following vars in the data set are encoded the wrong way:
X_train['ExistingCreditsAtBank'] = X_train['ExistingCreditsAtBank'].astype('category')
X_test['ExistingCreditsAtBank'] = X_test['ExistingCreditsAtBank'].astype('category')
X_train['NumberDependents'] = X_train['NumberDependents'].astype('category')
X_test['NumberDependents'] = X_test['NumberDependents'].astype('category')
discrete = ['ExistingCreditsAtBank','NumberDependents']
# We do rare imputation for discrete variables
def rare_imputation(variable):
    # find frequent labels / discrete numbers
    temp = X_train.groupby([variable])[variable].count()/np.float(len(X_train))
    frequent_cat = [x for x in temp.loc[temp>0.03].index.values]
    X_train[variable] = np.where(X_train[variable].isin(frequent_cat), X_train[variable], 'Rare')
    X_test[variable] = np.where(X_test[variable].isin(frequent_cat), X_test[variable], 'Rare')
categorical
# find infrequent labels in categorical variables and replace by Rare
for var in categorical:
    rare_imputation(var)
# find infrequent labels in discrete variables and replace by Rare
# remember that we are treating discrete variables as if they were categorical
for var in discrete:
    rare_imputation(var)
# let's check that it worked
for var in categorical:
    (X_train.groupby(var)[var].count() / np.float(len(X_train))).plot.bar()
    plt.ylabel('Percentage of observations per label')
    plt.title(var)
    plt.show()
# let's check that it worked
for var in discrete:
    (X_train.groupby(var)[var].count() / np.float(len(X_train))).plot.bar()
    plt.ylabel('Percentage of observations per label')
    plt.title(var)
    plt.show()
###Output _____no_output_____
###Markdown Yes, it worked as intended. 10. Encoding Categorical VariablesI will proceed with Risk Encoding. 
###Code def encode_categorical_variables(var, target): # make label to price dictionary ordered_labels = X_train.groupby([var])[target].mean().to_dict() # encode variables X_train[var] = X_train[var].map(ordered_labels) X_test[var] = X_test[var].map(ordered_labels) # encode labels in categorical vars for var in categorical: encode_categorical_variables(var, 'CreditStatus') # encode labels in discrete vars for var in discrete: encode_categorical_variables(var, 'CreditStatus') #let's inspect the dataset X_train.head() ###Output _____no_output_____ ###Markdown 11.Feature Scaling ###Code # Check before it is all numerical X_train.dtypes # Check before it is all numerical X_test.dtypes #Find column names X_train.columns ###Output _____no_output_____ ###Markdown **REMOVE now TARGET from Training_vars (do not include in next cell)** ###Code # let's create a list of the training variables training_vars = ['Age', 'CheckingAcctStat', 'CreditHistory', 'Duration', 'Employment', 'ExistingCreditsAtBank', 'ForeignWorker', 'Housing', 'InstallmentRatePecnt', 'Job', 'NumberDependents', 'OtherDetorsGuarantors', 'OtherInstalments', 'PresentResidenceTime', 'Property', 'Purpose', 'Savings', 'SexAndStatus', 'Telephone', 'CreditAmount_ordered'] print('total number of variables to use for training: ', len(training_vars)) # let's find the skewness of above variables for var in training_vars: print(var, 'skewness is', df[var].skew() ) ###Output Age skewness is 0.9199389285834775 CheckingAcctStat skewness is -0.40010344349500326 CreditHistory skewness is 0.2972930381671155 Duration skewness is 0.8997971880707145 Employment skewness is 0.24357676172471576 ExistingCreditsAtBank skewness is 1.466519308409714 ForeignWorker skewness is -5.786146840774254 Housing skewness is 1.8779002533367668 InstallmentRatePecnt skewness is -0.7882275154092137 Job skewness is -0.37346316500487303 NumberDependents skewness is 4.6000535635460915 OtherDetorsGuarantors skewness is 1.9845228806663033 OtherInstalments skewness is 1.700950808313577 PresentResidenceTime skewness is -0.4352425662787747 Property skewness is 0.5072604725345503 Purpose skewness is 0.02994508437068712 Savings skewness is -1.248643987333662 SexAndStatus skewness is -0.013750884707033329 Telephone skewness is 0.44884989951233023 CreditAmount_ordered skewness is 0.3514365876338875 ###Markdown OK, I shall do : - Robust Scaling for variables with an absolute skew above 2 - MinMax Scaling for variables with an absoulute skew between 1 and 2 - Standard for the rest ###Code for var in training_vars: max_abs_value = 2 if abs(df[var].skew()) >= max_abs_value: print (var) # I Will start with Robust Scaling scalerR = RobustScaler() # call the object X_train_scaledR = scalerR.fit_transform(X_train[['ForeignWorker','NumberDependents']]) # fit the scaler to the train set, and then scale it X_test_scaledR= scalerR.transform(X_test[['ForeignWorker','NumberDependents']]) # scale the test set for var in training_vars: max_abs_value = 2 if (1 <= abs(df[var].skew()) <=2): print (var) #Continue with MinMax Scaling scalerMM = MinMaxScaler() # create an instance X_train_scaledMM= scalerMM.fit_transform(X_train[['ExistingCreditsAtBank','Housing','OtherDetorsGuarantors','OtherInstalments', 'Savings']]) # fit the scaler to the train set and then transform it X_test_scaledMM= scalerMM.transform(X_test[['ExistingCreditsAtBank','Housing','OtherDetorsGuarantors','OtherInstalments', 'Savings']]) # transform (scale) the test set for var in training_vars: max_abs_value = 2 if ((df[var].skew()) <=1): 
print (var) # Normal Standarization scalerN = StandardScaler() # create an object X_train_scaledN = scalerN.fit_transform(X_train[['Age', 'CheckingAcctStat', 'CreditHistory', 'Duration','Employment', 'ForeignWorker', 'InstallmentRatePecnt', 'Job','PresentResidenceTime', 'Property','Purpose', 'Savings', 'SexAndStatus', 'Telephone', 'CreditAmount_ordered']]) # fit the scaler to the train set, and then transform it X_test_scaledN = scalerN.transform(X_test[['Age', 'CheckingAcctStat', 'CreditHistory', 'Duration','Employment', 'ForeignWorker', 'InstallmentRatePecnt', 'Job','PresentResidenceTime', 'Property','Purpose', 'Savings', 'SexAndStatus', 'Telephone', 'CreditAmount_ordered']]) # transform the test set ###Output _____no_output_____ ###Markdown 11. Sanity check (nulls) ###Code X_train = pd.DataFrame(X_train) X_test = pd.DataFrame(X_test) y_train = pd.DataFrame(y_train) y_test = pd.DataFrame(y_test) X_train.isnull().sum() X_test.isnull().sum() y_train.isnull().sum() y_test.isnull().sum() ###Output _____no_output_____ ###Markdown 12. Save selected variables for features predictions ###Code X_train_scaledR = pd.DataFrame(X_train_scaledR, columns = ['ForeignWorker','NumberDependents']) X_train_scaledMM = pd.DataFrame(X_train_scaledMM, columns = ['ExistingCreditsAtBank','Housing','OtherDetorsGuarantors','OtherInstalments', 'Savings']) X_train_scaledN = pd.DataFrame(X_train_scaledN, columns = ['Age', 'CheckingAcctStat', 'CreditHistory', 'Duration','Employment', 'ForeignWorker', 'InstallmentRatePecnt', 'Job','PresentResidenceTime', 'Property','Purpose', 'Savings', 'SexAndStatus', 'Telephone', 'CreditAmount_ordered']) ###### X_test_scaledR = pd.DataFrame(X_test_scaledR, columns = ['ForeignWorker','NumberDependents']) X_test_scaledMM = pd.DataFrame(X_test_scaledMM, columns = ['ExistingCreditsAtBank','Housing','OtherDetorsGuarantors','OtherInstalments', 'Savings']) X_test_scaledN = pd.DataFrame(X_test_scaledN, columns = ['Age', 'CheckingAcctStat', 'CreditHistory', 'Duration','Employment', 'ForeignWorker', 'InstallmentRatePecnt', 'Job','PresentResidenceTime', 'Property','Purpose', 'Savings', 'SexAndStatus', 'Telephone', 'CreditAmount_ordered']) X_train = pd.concat([X_train_scaledR, X_train_scaledMM, X_train_scaledN], axis=1) X_test = pd.concat([X_test_scaledR, X_test_scaledMM, X_test_scaledN], axis=1) X_train.head() X_train = pd.DataFrame(X_train) X_train.to_csv('X_train.csv', index=False) X_test = pd.DataFrame(X_test) X_test.to_csv('X_test.csv', index=False) X_test.head() y_train = pd.DataFrame(y_train) y_train.to_csv('y_train.csv', index=False) y_train.head() y_test = pd.DataFrame(y_test) y_test.to_csv('y_test.csv', index=False) ###Output _____no_output_____ ###Markdown MP3 Preparation and ProcessingThis notebook describes the process taken to prepare MP3 files for the DJ Transition Models. We will use the madmom package to label beats and downbeats, and explain how transition points were tagged for model training, before demonstrating how this labelling approach can be used to generate transitions programmatically. We will then use the librosa package to extract MP3 spectrograms and chromagrams to be used as model inputs. ###Code import madmom import eyed3 import os import pickle import pandas as pd import numpy as np import pydub from pydub import AudioSegment ###Output _____no_output_____ ###Markdown Beat ExtractionThe [madmom package](https://madmom.readthedocs.io/en/v0.16/) was used to label timestamps of beats and downbeats. 
In particular, we used madmom's [RNNBeatProcessor class](https://madmom.readthedocs.io/en/v0.16/modules/features/beats.html), which uses multiple trained RNNs to produce a beat activation function sampled at 100 frames per second. The output of this function was then passed through madmom's Dynamic Bayesian Network (DBN) beat tracking processor to produce beat timestamps. To identify downbeat location, the beat timestamps were passed through madmom's [RNNBarProcessor class](https://madmom.readthedocs.io/en/v0.16/modules/features/downbeats.html) and DBN bar tracking processor, with the assumption that all songs in the dataset have four beats per bar. The function below implements this process, taking an MP3 file's location as input and returning an array with two columns, the beat timestamps and the downbeat counter between 1 and 4. ###Code def get_bars_beats(filename): """ Function for using the madmom package to extract beat and downbeat locations from an MP3 file. Returns None if madmom throws an error while processing the file. Args: filename: directory of an MP3 file Returns: bar_beats: array of two columns, beat timestamps and downbeat index """ try: beats = madmom.features.beats.RNNBeatProcessor( online=True)(filename) when_beats = madmom.features.beats.DBNBeatTrackingProcessor( fps=100)(beats) downbeat_prob = madmom.features.downbeats.RNNBarProcessor()( (filename,when_beats)) bar_beats = madmom.features.downbeats.DBNBarTrackingProcessor( beats_per_bar = [4,4])(downbeat_prob) return bar_beats except: return None ###Output _____no_output_____ ###Markdown An example output of this function can be seen below: ###Code import warnings warnings.filterwarnings('ignore') bars_beats = get_bars_beats('chris_lake_-_lose_my_mind.mp3') pd.DataFrame(bars_beats,columns = ['Beat Timestamp','Downbeat']).iloc[:20] ###Output _____no_output_____ ###Markdown Data LabellingWe will now describe in more detail the nature of the data used to train the model and the process taken to label it appropriately. ObjectiveThere are many elements to a DJ's successful blended transition between two songs. A key aspect of making blended transitions smooth is phrase-matching, whereby musical phrases in the incoming and outgoing songs are aligned with each other to ensure that the musical structure of the mix is maintained. EDM, and house music in particular, is almost entirely made up of 8 bar phrases, with each bar containing 4 beats, making a phrase 32 beats long in total. To phrase-match, the incoming and outgoing songs must be being played at matching tempo, and the first downbeat of a phrase in the incoming song must be matched with the first downbeat of a phrase in the outgoing song. As the transition is performed between the two songs, either by a simple crossfade, by manually adjusting volume and EQ, or a more complex transition involving FX, the two songs will then remain on-phrase until the transition is complete and only the incoming song can be heard. The aim of the DJ Transition Model is to allow these smooth phrase-matched transitions between songs to be created in an automated manner. In order to accomplish this, it is necessary to determine:1. The beat timestamp where each phrase in the song starts2. When the song is incoming, the appropriate phrases at the start of the song at which to end a transition to avoid issues like vocal clash or dead air. 
For example, house songs can often contain long introductions with just a beat and minimal vocals; typically we'd like the transition to end just as this introduction ends, rather than in the middle of it or a minute too late in the middle of the buildup or drop.3. When the song is outgoing, the appropriate phrases during the outro of the song at which to start a transition. We don't want to create a jarring transition by fading out during a high energy drop or vocal section if it can be avoided, and when creating a blended transition, we normally don't want to let the outgoing song finish entirely before starting to bring in the new song.There are of course exceptions to the process described in (2) and (3) above, and dynamic and engaging transitions can be created by breaking these rules. However, these provide a good starting point for creating stable, repeatable blended transitions. We also make the assumption that transitions only happen at the intro or outro of songs, rather than in the middle, e.g. after a drop or switching immediately before a drop. Extending this process to these and other more complex transitions is an avenue for future work. Labelling processIn order to train a model to identify the three data points above for a song, we manually labelled the appropriate transition points in the intros and outros of our training set of songs using the bar/beat outputs above. Beats which mark the start of a phrase where a transition should start are labelled 'Start', and beats which mark the start of a phrase by which a transition should end are labelled 'End'. Where it could be appropriate for a transition to either start or end at a phrase, it is labelled 'Start/End'. Labels in the intro or outro are placed a phrase (i.e. 32 beats) apart, and once the first 'Start' is labelled, every subsequent phrase is labelled until the final 'End' is reached. In exceptional circumstances, only an 'End' label will be present in the intro (and analogously only a 'Start' in the outro) if the transition should end before a single phrase has elapsed in the song.The resulting labelled dataset is saved as a .csv. We have included below the labelled version of the bar/beats output above. Here, the first phrase starts at the first beat (this is not always the case, e.g. if there is a vocal intro or silence at the beginning of the MP3). We print below the first 10 beats, along with all labelled beats in the intro and outro. Note that the madmom downbeat tracker has classified the downbeats incorrectly for this song. ###Code bar_beat_labelled_example = pd.read_csv('Chris Lake - Lose My Mind.csv',index_col = 0) bar_beat_labelled_example.columns = ['Beat Timestamp','Downbeat','Intro Label','Outro Label'] bar_beat_labelled_example.head(10) bar_beat_labelled_example.dropna(thresh = 1,subset=['Intro Label','Outro Label']) ###Output _____no_output_____ ###Markdown We can plot the waveforms of the intros and outros of this song to illustrate the phrase locations. We can see by the waveforms that the structure changes at around 1:00, where our 'End' label is in the intro, and the outro starts just past 4:20, where our labels allow the transition to start either one phrase before or at the start of the change in waveform. 
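(The waveform plots themselves are produced by the next cell.) As a side note, and purely as a sketch rather than part of the labelling pipeline, the 32-beat phrase grid described above can be read directly off the madmom beat array computed earlier: choose the beat index of the first 'Start' label and step forward 32 beats at a time. The starting index used below is illustrative.
###Code # sketch: candidate phrase boundaries from the beat grid (uses bars_beats from get_bars_beats above)
first_phrase_start = 0   # illustrative; in practice this index comes from the manual 'Start' label
beats_per_phrase = 32    # 8 bars x 4 beats
phrase_start_times = bars_beats[first_phrase_start::beats_per_phrase, 0]
print(phrase_start_times[:5])  # timestamps (in seconds) of the first few phrase boundaries
###Output _____no_output_____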
###Code import librosa,librosa.display import matplotlib.pyplot as plt x, sr = librosa.load('chris_lake_-_lose_my_mind.mp3') plt.figure(figsize=(14, 5)) librosa.display.waveplot(x[:90*sr], sr=sr) plt.show() plt.figure(figsize=(14, 5)) librosa.display.waveplot(x[210*sr:], sr=sr,offset = 210) plt.show() ###Output _____no_output_____ ###Markdown Generating transitionsWe will now briefly illustrate how these labels can be used to create smooth transitions. We will use the pydub package for basic audio manipulation. These transitions are relatively simple, using only tempo changes and crossfades. However, the underlying timing information can be used to create more sophisticated transitions which use EQ manipulation or effects - the fundamental aim of this work is to identify the correct timing. Example mixes containing only transitions created using these labels and the method below can be found on [Soundcloud](https://soundcloud.com/transition-models). [One mix](https://soundcloud.com/transition-models/transitionmodel-example-mix-1) was created using a manually selected tracklist to illustrate tempo changes using well-known songs. The [others](https://soundcloud.com/transition-models/sets/tracklist-generator-examples) were created by using the [Tracklist Generator](https://github.com/gmeehan96/tracklist-generator) to generate a variety of house music tracklists, before mixing these using the method below. ###Code def match_target_amplitude(sound, target_dBFS): """Utility function which normalizes the average gain of an input audio segment in pydub""" change_in_dBFS = target_dBFS - sound.dBFS return sound.apply_gain(change_in_dBFS) def speed_change(sound,speed=1.0): """Utility function for changing the tempo of the song by multiplying by 'speed' Args: sound: pydub sound segment speed: ratio by which to change speed (e.g. 
0.9 will slow down to 90% of original speed) Returns: sound segment with changed speed """ sound_with_altered_frame_rate = sound._spawn(sound.raw_data, overrides={ "frame_rate": int(sound.frame_rate * speed) }) return sound_with_altered_frame_rate.set_frame_rate(sound.frame_rate) def get_starts_ends(song,labels_dict): """Pulls out the start and end timings for the intro and outro of a song Args: song: Song title labels_dict: Dictionary indexed by song title which contains labelled transition points Returns: Lists containing start timestamps, end timestamps, and possible transition lengths for both incoming and outgoing transitions involving the song """ df = labels_dict[song] incoming = [str(x) for x in df.values[:,2].tolist()] incoming_starts = [i for i in range(len(incoming)) if 'Start' in incoming[i]] incoming_ends = [i for i in range(len(incoming)) if 'End' in incoming[i]] incoming_length_combos = [y-x for y in incoming_ends for x in incoming_starts] incoming_possible_lengths = list(set([n for n in incoming_length_combos if n > 0])) incoming_output = [incoming_starts,incoming_ends,incoming_possible_lengths] outgoing = [str(x) for x in df.values[:,3].tolist()] outgoing_starts = [i for i in range(len(outgoing)) if 'Start' in outgoing[i]] outgoing_ends = [i for i in range(len(outgoing)) if 'End' in outgoing[i]] outgoing_length_combos = [y-x for y in outgoing_ends for x in outgoing_starts] outgoing_possible_lengths = list(set([n for n in outgoing_length_combos if n > 0])) outgoing_output = [outgoing_starts,outgoing_ends,outgoing_possible_lengths] return incoming_output, outgoing_output def get_transition_info(song_1,song_2): """Produces the configuration of a transition between two songs. Args: song_1: Title of outgoing song song_2: Title of incoming song Returns: song_1_start: Beat index of outgoing transition start in song_1. This is the earliest possible transition point with the relevant num_phrases. song_2_start: Beat index of incoming transition start in song_2. This is the earliest possible transition point with the relevant num_phrases. num_phrases: Length of the transition in phrases. This is the maximum compatible transition length """ _,song_1_outgoing = get_starts_ends(song_1) song_2_incoming,_ = get_starts_ends(song_2) poss_lengths = [x for x in range(1,4) if \ 32*x in song_1_outgoing[-1] and 32*x in song_2_incoming[-1]] num_phrases = np.max(poss_lengths) num_beats = 32*num_phrases song_1_start = np.min([x for x in song_1_outgoing[0] if x+num_beats in song_1_outgoing[1]]) song_2_start = np.min([x for x in song_2_incoming[0] if x+num_beats in song_2_incoming[1]]) return song_1_start,song_2_start,num_phrases def get_transition(song_1,song_2, labels_dict, name_filenames_map, prologue_length = 128, offsets = [8,16,32,32]): """Main function for generating a transition between two songs. It produces a pydub audio segment containing the transition itself as well as a prelude of the outgoing song and a prologue of the incoming song to give additional context. The full transition is divided into three stages: - In the prelude stage, the two phrases immediately preceding the start of the transition in the incoming song are played. During this time, the tempo of the outgoing song is changed to match the tempo of the incoming song if necessary. - In the transition stage, the outgoing song is faded out and the incoming song is faded in. 
There is an offset so that the outgoing song begins its fade out a bit later than the incoming song starts to come in, and the incoming song is up to full volume before the outgoing song is fully faded out. This keeps the gain level more consistent. - In the prologue stage, a few more phrases of the incoming song are played to give more context to the transition. Args: song_1: Title of outgoing song song_2: Title of incoming song labels_dict: Dictionary indexed by song title which contains labelled transition points name_filenames_map: Dictionary indexed by song title which contains the location of the MP3 file prologue_length: The number of beats of the incoming song to play after the transition is complete offsets: The number of beats to offset the fade in of the incoming song by depending on the length of the transition in phrases Returns: transition: pydub audio segment containing the full transition with prelude and prologue. This can easily be exported as a .wav file. """ song_1_start,song_2_start,num_phrases = get_transition_info(song_1,song_2) song_1_bpm = get_bpm(song_1) song_2_bpm = get_bpm(song_2) song_1_beats = labels_dict[song_1].values[:,0] song_2_beats = labels_dict[song_2].values[:,0] song_1_seg = match_target_amplitude( AudioSegment.from_mp3(name_filenames_map[song_1]), -10.0) song_2_seg = match_target_amplitude( AudioSegment.from_mp3(name_filenames_map[song_2]), -10.0) from_bpm = np.round(song_1_bpm,1) to_bpm = np.round(song_2_bpm,1) avg_bpm = (from_bpm+to_bpm)/2 #Beat times are multiplied by 1000 because pydub audio segments are indexed in milliseconds song_1_rest = song_1_seg[song_1_beats[song_1_start-64]*1000:song_1_beats[song_1_start-32]*1000] song_1_prelude = speed_change( song_1_seg[song_1_beats[song_1_start-32]*1000:song_1_beats[song_1_start-16]*1000], avg_bpm/from_bpm) song_1_prelude_2 = speed_change( song_1_seg[song_1_beats[song_1_start-16]*1000:song_1_beats[song_1_start]*1000], to_bpm/from_bpm) song_1_start_time = int(song_1_beats[song_1_start]*1000) song_1_offset = int(song_1_beats[song_1_start+offsets[num_phrases-1]]*1000) song_1_end_time = int(song_1_beats[song_1_start+32*num_phrases]*1000) song_1_transition = speed_change(song_1_seg[song_1_start_time:song_1_offset],to_bpm/from_bpm) song_1_transition_rest = speed_change(song_1_seg[song_1_offset:song_1_end_time],to_bpm/from_bpm) song_1_transition += song_1_transition_rest.fade_out(len(song_1_transition_rest)) song_2_start_time = int(song_2_beats[song_2_start]*1000) song_2_end_time = int(song_2_beats[song_2_start+32*num_phrases]*1000) song_2_transition = song_2_seg[song_2_start_time:song_2_end_time].fade_in( 2*(song_1_offset-song_1_start_time)) transition_finish_time = song_2_beats[song_2_start+32*num_phrases+prologue_length]*1000 song_2_prologue = song_2_seg[song_2_end_time:transition_finish_time] transition = pydub.effects.normalize(song_1_rest+song_1_prelude + song_1_prelude_2 ) transition += pydub.effects.normalize(song_2_transition.overlay(song_1_transition)) transition += pydub.effects.normalize(song_2_prologue) return pydub.effects.normalize(transition) def get_transition_tl(tl,labels_dict,name_filenames_map): """Wrapper function which creates a full mix given a tracklist of song titles Args: tl: List of song titles labels_dict: Dictionary indexed by song title which contains labelled transition points name_filenames_map: Dictionary indexed by song title which contains the location of the MP3 file Returns: mix: pydub audio segment containing the full mix. This can easily be exported as a .wav file. 
""" transition_infos = [] for i in range(len(tl)-1): transition_infos.append(get_transition_info(tl[i],tl[i+1])) #We initialise the mix with the start of the first song first_song_seg = match_target_amplitude( AudioSegment.from_mp3(name_filenames_map[tl[0]]), -10.0) mix = first_song_seg[:labels_dict[tl[0]].values[transition_infos[0][0]-64,0]*1000] for i in range(len(tl)-1): try: next_ind = transition_infos[i+1][0]-64 - transition_infos[i][1] except IndexError: #We need to add on the remainder of the final song next_ind = labels_dict[tl[-1]].shape[0] - transition_infos[-1][-1]*32\ - transition_infos[-1][1]- 1 mix += get_transition(tl[i],tl[i+1],transition_infos[i],next_ind) return mix ###Output _____no_output_____ ###Markdown We can see below the separate song_1 and song_2 waveforms of the transition section for an example transition, to illustrate the fade in and fade out points. ###Code s1, sr = librosa.load('song_1_transition.wav') plt.figure(figsize=(14, 5)) librosa.display.waveplot(s1, sr=sr) plt.show() s2, sr = librosa.load('song_2_transition.wav') plt.figure(figsize=(14, 5)) librosa.display.waveplot(s2, sr=sr) plt.show() ###Output _____no_output_____ ###Markdown Audio FeaturesOnce the songs have been labelled, we need to extract from each MP3 the audio features to be used as input into the DJ Transition Model. We have chosen to take a deep learning approach directly on the [Mel Spectrogram](https://librosa.org/doc/main/generated/librosa.feature.melspectrogram.html) and [Chromagram](https://librosa.org/doc/main/generated/librosa.feature.chroma_cqt.html) of the song. We will extract these using the librosa package. ###Code def get_grams(filename): """Extracts chromagram and mel spectrogram from MP3 file using the librosa package. Args: filename: directory of an MP3 file Returns: chromagram: chromagram numpy array of shape (t,12) spectrogram: Mel spectrogram numpy array of shape (t,128) """ y, sr = librosa.load(filename) chromagram = librosa.feature.chroma_cqt(y=y, sr=sr,hop_length = 256) spectrogram = librosa.feature.melspectrogram(y=y, sr=sr,hop_length = 256) return chromagram.T, spectrogram.T chromagram,spectrogram = get_grams('chris_lake_-_lose_my_mind.mp3') print('Chromagram shape: ',chromagram.shape) print('Mel spectrogram shape: ',spectrogram.shape) ###Output Chromagram shape: (25142, 12) Mel spectrogram shape: (25142, 128) ###Markdown We can plot the chromagram and spectrogram for this song: ###Code fig, ax = plt.subplots() img = librosa.display.specshow(chromagram.T, y_axis='chroma', ax=ax) ax.set(title='Chromagram example') fig.colorbar(img, ax=ax) plt.show() fig, ax = plt.subplots() spectrogram_db = librosa.power_to_db(spectrogram.T, ref=np.max) img = librosa.display.specshow(spectrogram_db, y_axis='mel', ax=ax) ax.set(title='Mel spectrogram example') plt.show() ###Output _____no_output_____
Fundamentals/elementary_notions/amdahls_law.ipynb
###Markdown Amdahl's lawIn this notebook, we will explore and discuss Amdahl's law. Speed up from parallelismAssume that you have a program that requires 100 seconds to complete. If you can fully parallelize the program how long does it take to run it on two cores? ###Code # execution time with 2 processors 100/2 ###Output _____no_output_____ ###Markdown So it will take 50 seconds to run the program on two cores. Let's define speed up (S) as: $S = \frac{OldExecutionTime}{NewExecutionTime}$This result in a speed up of S = (old execution time)/(new execution time) = 100/50 = 2. Similarly, if we have 100 cores available, we can run the program in just 1 second (100 seconds/100 cores), resulting in a speed up of 100 (100 second/1 second). Speed up from non-fully paralellizable programsIf 10% of the program cannot be parallized and have to run sequentially, how long does it take to run it on 2 cores? ###Code # execution time for 2 cores when 10% of the program is serial 0.10 * 100 + (1-0.10)*100/2 ###Output _____no_output_____ ###Markdown So, when 90% of the program can be parallized, the speed up is now only, S = 100/55 = 1.8. When you have 100 processors, what will the speed up be? ###Code # speed up with 2 cores S2 = 100/55 # execution time from 100 cores, when 10% of the program is serial E100 = 0.10 * 100 + (1-0.10)*100/100 # speed up with 100 cores S100 = 100/(0.10 * 100 + (1-0.10)*100/100) print("S2=", S2, "\nE100=", E100, "\nS100=", S100) ###Output _____no_output_____ ###Markdown As you can see from the calculation above, the speed up is not anywhere near 100x anymore, but it is closer to only 9x. Amdahl's LawThis was observed by Gene Amdahl in the 1960s. He stated that the speed up of a program whose a portion, _p_, of its execution time can be parallized is limited to:$S = \frac{1}{1-p}$In our case, _p_ was 0.9 (90%), so our speed up is limited to:$S = \frac{1}{1-0.9} = \frac{1}{0.1}=10$ Let's do some experiments to see how much speed up we can get with a thousand cores, 1 million cores, and 1 billion cores. ###Code # execution time with 1 core E1 = 100 # a thousand cores E1000 = 0.10 * E1 + (1-0.10)*E1/1000 S1000 = E1/E1000 # a million cores Emillion = 0.10 * E1 + (1-0.10)*E1/1000000.0 Smillion = E1/Emillion # a billion cores Ebillion = 0.10 * E1 + (1-0.10)*E1/1000000000.0 Sbillion = E1/Ebillion print("S1000=",S1000) print("Smillion=",Smillion) print("Sbillion=",Sbillion) ###Output _____no_output_____ ###Markdown So, we're getting closer to 10 but not quite. That will only get reached when we have infinite number of processors. $E_{\infty} = 0.10 * E1 + \frac{(1-0.10)*E1}{\infty}$Since anything divided by infinity is zero, we have $E_{\infty} = 10$, resulting in a speed up $S = 10$. ###Code # infinite cores Ebillion = 0.10 * E1 + (1-0.10)*E1/1000000000.0 Sbillion = E1/Ebillion ###Output _____no_output_____ ###Markdown If we graph the speed up $S$ versus the number of processors for different number of processors, you'd get the following graphs, similar to the one in the [Wikipedia](https://en.wikipedia.org/wiki/Amdahl%27s_law). 
###Code from matplotlib import pyplot as plt
plt.rcParams["figure.figsize"] = [7.00, 4.00]
plt.rcParams["figure.autolayout"] = True
fig, ax = plt.subplots()
# x-axis values: number of processors
n = [1, 2, 4, 8, 16, 64, 128, 256, 512, 1024]
# y-axis values are the speed up, calculated using S = 1 / ((1-p) + p/n),
# where p (the legend label) is the fraction of the program that can be parallelized
serialPortion = [.5, .75, .9, .95, .99]
for p in serialPortion:
    y = []
    for x in n:
        y.append(1.0/((1.0-p)+p/x))
    ax.plot(n, y, label=str(p))
leg = plt.legend(loc='upper right')
leg.get_frame().set_alpha(0.6)
# function to show the plot
plt.show()
###Output _____no_output_____
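All of the curves above are instances of the general form of Amdahl's law for a finite number of processors, $S(n) = \frac{1}{(1-p) + p/n}$, where p is the parallelizable fraction. A small sketch that reproduces the numbers computed earlier in this notebook:
###Code # sketch: general Amdahl speed up for parallelizable fraction p and n processors
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

print(speedup(0.9, 2))      # ~1.82, matching 100/55 above
print(speedup(0.9, 100))    # ~9.17, matching S100 above
print(speedup(0.9, 10**9))  # approaches the limit 1/(1-0.9) = 10
###Output _____no_output_____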
dayoung_trial1/12_05_sigungoo(no_groupby) .ipynb
###Markdown 1. K-fold cross-validation to check whether overfitting occurred ###Code # OLS
import statsmodels.api as sm

X = result4
y = np.log(df2['sales'])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

df_train = pd.concat([y_train, X_train], axis=1)
model = sm.OLS.from_formula("sales ~ " + " + ".join(df_train.columns[1:]), data=df_train)
result = model.fit()
print(result.summary())

from sklearn.model_selection import KFold

df_kfold = pd.concat([y, X], axis=1)

train_r2 = []
test_r2 = []
scores = np.zeros(5)
cv = KFold(5, shuffle=True)
for i, (idx_train, idx_test) in enumerate(cv.split(df_kfold)):
    df_fold_train = df_kfold.iloc[idx_train]
    df_fold_test = df_kfold.iloc[idx_test]

    # fit on the training folds only, then evaluate on the held-out fold
    model = sm.OLS.from_formula("sales ~ " + " + ".join(df_kfold.columns[1:]), data=df_fold_train)
    result = model.fit()

    pred = result.predict(df_fold_test)
    rss = ((df_fold_test.sales - pred) ** 2).sum()
    tss = ((df_fold_test.sales - df_fold_test.sales.mean()) ** 2).sum()
    rsquared = 1 - rss / tss
    scores[i] = rsquared

    print("train R2 = {:.8f}, validation R2 = {:.8f}".format(result.rsquared, rsquared))

    train_r2.append(result.rsquared)
    test_r2.append(rsquared)

plt.plot(test_r2, 'ro', label="test R2")
plt.hlines(train_r2, 0, 4, label="train R2")
plt.legend()
plt.xlabel("fold")
plt.ylabel("R-square")
plt.ylim(0.5, 1.2)
###Output _____no_output_____
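As a complementary check, scikit-learn's cross_val_score gives the same kind of fold-by-fold validation R-squared with less bookkeeping. This is only a sketch: it reuses the X and y defined above and plain linear regression rather than the statsmodels formula interface.
###Code # sketch: the same 5-fold validation with scikit-learn
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score, KFold

cv5 = KFold(n_splits=5, shuffle=True, random_state=0)
cv_scores = cross_val_score(LinearRegression(), X, y, cv=cv5, scoring='r2')
print(cv_scores)
print("mean validation R2 = {:.4f}".format(cv_scores.mean()))
###Output _____no_output_____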
MLCourse.ipynb
###Markdown ###Code salary=1000 bonus=6000 tax=150 netSalary=salary+bonus-tax print(netSalary) firstName="Ashok" lastName="Movva" print(firstName+" "+lastName) firstName="ashok" salary=5000 print("My name is {} and my salary is {}".format(firstName,salary)) ###Output My name is ashok and my salary is 5000
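On Python 3.6 and newer, the same formatting can also be written more compactly with an f-string:
###Code firstName="ashok"
salary=5000
print(f"My name is {firstName} and my salary is {salary}")
###Output My name is ashok and my salary is 5000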
demos/use-cases/hateful-twitters.ipynb
###Markdown Detection of Twitter users who use hateful lexicon with graph machine learning Run the latest release of this notebook: We consider the use-case of identifying hateful users on Twitter motivated by the work in [1] and using the dataset also published in [1]. Classification is based on a graph based on users' retweets and attributes as related to their account activity, and the content of tweets.We pose identifying hateful users as a binary classification problem. We demonstrate the advantage of connected vs unconnected data in a semi-supervised setting with few training examples.For connected data, we use Graph Neural Network methods, GCN [2], GAT [3], and GraphSAGE [4] as implemented in the `stellargraph` library. We pose the problem of identifying hateful tweeter users as node attribute inference in graphs.**References**1. "Like Sheep Among Wolves": Characterizing Hateful Users on Twitter. M. H. Ribeiro, P. H. Calais, Y. A. Santos, V. A. F. Almeida, and W. Meira Jr. arXiv preprint arXiv:1801.00317 (2017).2. Semi-Supervised Classification with Graph Convolutional Networks. T. Kipf, M. Welling. ICLR 2017. arXiv:1609.02907 3. Graph Attention Networks. P. Veličković et al. ICLR 20184. Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec arXiv:1706.02216 [cs.SI], 2017. ###Code # install StellarGraph if running on Google Colab import sys if 'google.colab' in sys.modules: %pip install -q stellargraph[demos]==1.2.1 # verify that we're using the correct version of StellarGraph for this notebook import stellargraph as sg try: sg.utils.validate_notebook_version("1.2.1") except AttributeError: raise ValueError( f"This notebook requires StellarGraph version 1.2.1, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>." ) from None import networkx as nx import pandas as pd import numpy as np import seaborn as sns import itertools import os from sklearn.decomposition import PCA from sklearn.manifold import TSNE from sklearn.linear_model import LogisticRegressionCV import stellargraph as sg from stellargraph.mapper import GraphSAGENodeGenerator, FullBatchNodeGenerator from stellargraph.layer import GraphSAGE, GCN, GAT from stellargraph import globalvar from tensorflow.keras import layers, optimizers, losses, metrics, Model, models from sklearn import preprocessing, feature_extraction from sklearn.model_selection import train_test_split from sklearn import metrics import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline ###Output _____no_output_____ ###Markdown Loading the data **Downloading the dataset:**The dataset for this demo was published in [1] and it is freely available to download from Kaggle [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter/home).The following is the description of the datasets:>This dataset contains a network of 100k users, out of which ~5k were annotated as hateful or>not. For each user, several content-related, network-related and activity related features>were provided. Additional files of hateful lexicon can be found [here]( https://github.com/manoelhortaribeiro/HatefulUsersTwitter/tree/master/data/extra)Download the dataset and then set the `data_dir` variable to point to the download location. 
###Code data_dir = os.path.expanduser("~/data/hateful-twitter-users") ###Output _____no_output_____ ###Markdown First load and prepare the node featuresEach node in the graph is associated with a large number of features (also referred to as attributes). The list of features is given [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter). We repeated here for convenience.hate :("hateful"|"normal"|"other") if user was annotated as hateful, normal, or not annotated. (is_50|is_50_2) :bool whether user was deleted up to 12/12/17 or 14/01/18. (is_63|is_63_2) :bool whether user was suspended up to 12/12/17 or 14/01/18. (hate|normal)_neigh :bool is the user on the neighborhood of a (hateful|normal) user? [c_] (statuses|follower|followees|favorites)_count :int number of (tweets|follower|followees|favorites) a user has. [c_] listed_count:int number of lists a user is in. [c_] (betweenness|eigenvector|in_degree|outdegree) :float centrality measurements for each user in the retweet graph. [c_] *_empath :float occurrences of empath categories in the users latest 200 tweets. [c_] *_glove :float glove vector calculated for users latest 200 tweets. [c_] (sentiment|subjectivity) :float average sentiment and subjectivity of users tweets. [c_] (time_diff|time_diff_median) :float average and median time difference between tweets. [c_] (tweet|retweet|quote) number :float percentage of direct tweets, retweets and quotes of an user. [c_] (number urls|number hashtags|baddies|mentions) :float number of bad words|mentions|urls|hashtags per tweet in average. [c_] status length :float average status length. hashtags :string all hashtags employed by the user separated by spaces. **Notice** that c_ are attributes calculated for the 1-neighborhood of a user in the retweet network (averaged out). First, we are going to load the user features and prepare them for machine learning. ###Code users_feat = pd.read_csv(os.path.join(data_dir, "users_neighborhood_anon.csv")) users_feat.head() ###Output _____no_output_____ ###Markdown Let's have a look at the distribution of hateful, normal (not hateful), and other (unknown) users in the dataset ###Code print("Initial hateful/normal users distribution") print(users_feat.shape) print(users_feat.hate.value_counts()) ###Output Initial hateful/normal users distribution (100386, 1039) other 95415 normal 4427 hateful 544 Name: hate, dtype: int64 ###Markdown There is a clear imbalance on the number of users tagged as hateful vs normal and unknown. Data cleaning and preprocessing The dataset as given includes a large number of graph related features that are manually extracted. Since we are going to employ modern graph neural networks methods for classification, we are going to drop these manually engineered features. The power of Graph Neural Networks stems from their ability to learn useful graph-related features eliminating the need for manual feature engineering. 
###Code def data_cleaning(feat): feat = feat.drop(columns=["hate_neigh", "normal_neigh"]) # Convert target values in hate column from strings to integers (0,1,2) feat["hate"] = np.where( feat["hate"] == "hateful", 1, np.where(feat["hate"] == "normal", 0, 2) ) # missing information number_of_missing = feat.isnull().sum() number_of_missing[number_of_missing != 0] # Replace NA with 0 feat.fillna(0, inplace=True) # droping info about suspension and deletion as it is should not be use din the predictive model feat.drop(feat.columns[feat.columns.str.contains("is_")], axis=1, inplace=True) # drop glove features feat.drop(feat.columns[feat.columns.str.contains("_glove")], axis=1, inplace=True) # drop c_ features feat.drop(feat.columns[feat.columns.str.contains("c_")], axis=1, inplace=True) # drop sentiment features for now feat.drop(feat.columns[feat.columns.str.contains("sentiment")], axis=1, inplace=True) # drop hashtag feature feat.drop(["hashtags"], axis=1, inplace=True) # Drop centrality based measures feat.drop( columns=["betweenness", "eigenvector", "in_degree", "out_degree"], inplace=True ) feat.drop(columns=["created_at"], inplace=True) return feat node_data = data_cleaning(users_feat) ###Output _____no_output_____ ###Markdown Of the original **1037** node features, we are keeping only **204** that are based on a user's attributes and tweet lexicon. We have removed any manually engineered graph features since the graph neural network algorithms we are going to use will automatically determine the best features to use during training. ###Code node_data.shape node_data.head() ###Output _____no_output_____ ###Markdown The continous features in our dataset have distributions with very long tails. We apply normalization to correct for this. ###Code # Ignore the first two columns because those are user_id and hate (the target variable) df_values = node_data.iloc[:, 2:].values pt = preprocessing.PowerTransformer(method="yeo-johnson", standardize=True) df_values_log = pt.fit_transform(df_values) ###Output _____no_output_____ ###Markdown Let's have a look at one of the normalized features before and after the power transform was applied.The feature we are going to look at is a user's number of followers. ###Code sns_rc = {"lines.linewidth": 3, "figure.figsize": (12, 6)} sns.set_context("paper", rc=sns_rc) sns.set_style("whitegrid", {"axes.grid": False}) sns.kdeplot(df_values[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers, before Power Transform", fontsize=18) sns.kdeplot(df_values_log[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers after Power Transform", fontsize=18) ###Output _____no_output_____ ###Markdown Feature normalization looks like it is doing the right thing as the raw features have long tails that are eliminated after applying the power transform. So let us use the normalized features from now on. ###Code node_data.iloc[:, 2:] = df_values_log # Set the dataframe index to be the same as the user_id and drop the user_id columns node_data.index = node_data.index.map(str) node_data.drop(columns=["user_id"], inplace=True) ###Output _____no_output_____ ###Markdown Node features are now ready for machine learning. ###Code node_data.head() ###Output _____no_output_____ ###Markdown Next load the graphNow that we have the node features prepared for machine learning, let us load the retweet graph. 
###Code g_nx = nx.read_edgelist(path=os.path.expanduser(os.path.join(data_dir, "users.edges"))) g_nx.number_of_nodes(), g_nx.number_of_edges() ###Output _____no_output_____ ###Markdown The graph has just over 100k nodes and approximately 2.2m edges.We aim to train a graph neural network model that will predict the "hate"attribute on the nodes.For computation convenience, we have mapped the target labels **normal**, **hateful**, and **other** to the numeric values **0**, **1**, and **2** respectively. ###Code print(set(node_data["hate"])) ###Output {0, 1, 2} ###Markdown Splitting the data For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to split our data into training and test sets.The total number of annotated nodes is very small when compared to the total number of nodes in the graph. We are only going to use 15% of the annotated nodes for training and the remaining 85% of nodes for testing.First, we are going to select the subset of nodes that are annotated as hateful or normal. These will be the nodes that have 'hate' values that are either 0 or 1. ###Code # choose the nodes annotated with normal or hateful classes annotated_users = node_data[node_data["hate"] != 2] annotated_users.head() annotated_users.shape annotated_user_features = annotated_users.drop(columns=["hate"]) annotated_user_targets = annotated_users[["hate"]] ###Output _____no_output_____ ###Markdown There are 4971 annoted nodes out of a possible, approximately, 100k nodes. ###Code print(annotated_user_targets.hate.value_counts()) # split the data train_data, test_data, train_targets, test_targets = train_test_split( annotated_user_features, annotated_user_targets, test_size=0.85, random_state=101 ) train_targets = train_targets.values test_targets = test_targets.values print("Sizes and class distributions for train/test data") print("Shape train_data {}".format(train_data.shape)) print("Shape test_data {}".format(test_data.shape)) print( "Train data number of 0s {} and 1s {}".format( np.sum(train_targets == 0), np.sum(train_targets == 1) ) ) print( "Test data number of 0s {} and 1s {}".format( np.sum(test_targets == 0), np.sum(test_targets == 1) ) ) train_targets.shape, test_targets.shape train_data.shape, test_data.shape ###Output _____no_output_____ ###Markdown We are going to use 745 nodes for training and 4226 nodes for testing. ###Code # choosing features to assign to a graph, excluding target variable node_features = node_data.drop(columns=["hate"]) ###Output _____no_output_____ ###Markdown Dealing with imbalanced dataBecause the training data exhibit high imbalance, we introduce class weights. ###Code from sklearn.utils.class_weight import compute_class_weight class_weights = compute_class_weight( "balanced", np.unique(train_targets), train_targets[:, 0] ) train_class_weights = dict(zip(np.unique(train_targets), class_weights)) train_class_weights ###Output _____no_output_____ ###Markdown Our data is now ready for machine learning.Node features are stored in the Pandas DataFrame `node_features`.The graph in networkx format is stored in the variable `g_nx`. Specify global parametersHere we specify some parameters that control the type of model we are going to use. For example, we specify the base model type, e.g., GCN, GraphSAGE, etc, as well as model-specific parameters. 
###Code model_type = "graphsage" # Can be either gcn, gat, or graphsage if model_type == "graphsage": # For GraphSAGE model batch_size = 50 num_samples = [20, 10] epochs = 30 # The number of training epochs elif model_type == "gcn": # For GCN model epochs = 20 # The number of training epochs elif model_type == "gat": # For GAT model layer_sizes = [8, 1] attention_heads = 8 epochs = 20 # The number of training epochs ###Output _____no_output_____ ###Markdown Creating the base graph machine learning model in Keras Now create a `StellarGraph` object from the `NetworkX` graph and the node features and targets. It is `StellarGraph` objects that we use in this library to perform machine learning tasks on. ###Code G = sg.StellarGraph.from_networkx(g_nx, node_features=node_features) ###Output _____no_output_____ ###Markdown To feed data from the graph to the Keras model we need a generator. The generators are specialized to the model and the learning task. For training we map only the training nodes returned from our splitter and the target values. ###Code if model_type == "graphsage": generator = GraphSAGENodeGenerator(G, batch_size, num_samples) train_gen = generator.flow(train_data.index, train_targets, shuffle=True) elif model_type == "gcn": generator = FullBatchNodeGenerator(G, method="gcn", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) elif model_type == "gat": generator = FullBatchNodeGenerator(G, method="gat", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) ###Output _____no_output_____ ###Markdown Next we create the GNN model. We need to specify model-specific parameters based on whether we want to use GCN, GAT, or GraphSAGE. ###Code if model_type == "graphsage": base_model = GraphSAGE( layer_sizes=[32, 32], generator=generator, bias=True, dropout=0.5, ) x_inp, x_out = base_model.in_out_tensors() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gcn": base_model = GCN( layer_sizes=[32, 16], generator=generator, bias=True, dropout=0.5, activations=["elu", "elu"], ) x_inp, x_out = base_model.in_out_tensors() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gat": base_model = GAT( layer_sizes=layer_sizes, attn_heads=attention_heads, generator=generator, bias=True, in_dropout=0.5, attn_dropout=0.5, activations=["elu", "sigmoid"], normalize=None, ) x_inp, prediction = base_model.in_out_tensors() ###Output _____no_output_____ ###Markdown Create a Keras model Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `base_model` and outputs being the predictions from the softmax layer. ###Code model = Model(inputs=x_inp, outputs=prediction) ###Output _____no_output_____ ###Markdown We compile our Keras model to use the `Adam` optimiser and the binary cross entropy loss. ###Code model.compile( optimizer=optimizers.Adam(lr=0.005), loss=losses.binary_crossentropy, metrics=["acc"], ) model ###Output _____no_output_____ ###Markdown Train the model, keeping track of its loss and accuracy on the training set, and its performance on the test set during the training. We don't use the test set during training but only for measuring the trained model's generalization performance. ###Code test_gen = generator.flow(test_data.index, test_targets) ###Output _____no_output_____ ###Markdown Now we can train the model by calling the `fit` method. 
###Code class_weight = None if model_type == "graphsage": class_weight = train_class_weights history = model.fit( train_gen, epochs=epochs, validation_data=test_gen, verbose=0, shuffle=False, class_weight=class_weight, ) sg.utils.plot_history(history) ###Output _____no_output_____ ###Markdown Model Evaluation Now we have trained the model, let's evaluate it on the test set.We are going to consider 4 evaluation metrics calculated on the test set: Accuracy, Area Under the ROC curve (AU-ROC), the ROC curve, and the confusion table. Accuracy ###Code test_metrics = model.evaluate(test_gen) print("\nTest Set Metrics:") for name, val in zip(model.metrics_names, test_metrics): print("\t{}: {:0.4f}".format(name, val)) ###Output Test Set Metrics: loss: 0.3424 acc: 0.8874 ###Markdown AU-ROCLet's use the trained GNN model to make a prediction for each node in the graph.Then, select only the predictions for the nodes in the test set and calculate the AU-ROC as another performance metric in addition to the accuracy shown above. ###Code all_nodes = node_data.index all_gen = generator.flow(all_nodes) all_predictions = model.predict(all_gen).squeeze()[..., np.newaxis] all_predictions.shape all_predictions_df = pd.DataFrame(all_predictions, index=node_data.index) ###Output _____no_output_____ ###Markdown Let's extract the predictions for the test data only. ###Code test_preds = all_predictions_df.loc[test_data.index, :] test_preds.shape ###Output _____no_output_____ ###Markdown The predictions are the probability of the true class that in this case is the probability of a user being hateful. ###Code test_preds.head() test_predictions = test_preds.values test_predictions_class = ((test_predictions > 0.5) * 1).flatten() test_df = pd.DataFrame( { "Predicted_score": test_predictions.flatten(), "Predicted_class": test_predictions_class, "True": test_targets[:, 0], } ) roc_auc = metrics.roc_auc_score(test_df["True"].values, test_df["Predicted_score"].values) print("The AUC on test set:\n") print(roc_auc) ###Output The AUC on test set: 0.8845494008100929 ###Markdown Confusion table ###Code pd.crosstab(test_df["True"], test_df["Predicted_class"]) ###Output _____no_output_____ ###Markdown ROC curve ###Code fpr, tpr, thresholds = metrics.roc_curve( test_df["True"], test_df["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Visualisation of node embeddingsEvaluate node embeddings as activations of the output of one of the graph convolutional or aggregation layers in the Keras model, and visualise them, coloring nodes by their subject label.You can find the index of the layer of interest by calling `model.layers`. 
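For example, a minimal sketch (an illustrative helper, not part of the original notebook) that pairs each layer with its positional index can make the right index easier to spot:
###Code
# List each layer of the trained Keras model together with its positional index,
# so we can pick the layer whose activations will serve as node embeddings.
for index, layer in enumerate(model.layers):
    print(index, layer.name, type(layer).__name__)
###Output
_____no_output_____
###Markdown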
First, create a Keras model for calculating the embeddings
###Code
model.layers
if model_type == "graphsage":
    # For GraphSAGE, we are going to use the output activations
    # of the second GraphSAGE layer as the node embeddings
    # x_inp, prediction
    emb_model = Model(inputs=x_inp, outputs=model.layers[-4].output)
    emb = emb_model.predict(all_gen)
elif model_type == "gcn":
    # For GCN, we are going to use the output activations of
    # the second GCN layer as the node embeddings
    emb_model = Model(inputs=x_inp, outputs=model.layers[6].output)
    emb = emb_model.predict(all_gen)
elif model_type == "gat":
    # For GAT, we are going to use the output activations of the
    # first Graph Attention layer as the node embeddings
    emb_model = Model(inputs=x_inp, outputs=model.layers[6].output)
    emb = emb_model.predict(all_gen)
emb.shape
emb = emb.squeeze()
if model_type == "graphsage":
    emb_all_df = pd.DataFrame(emb, index=node_data.index)
elif model_type == "gcn" or model_type == "gat":
    emb_all_df = pd.DataFrame(emb, index=G.nodes())
###Output
_____no_output_____
###Markdown
Select the embeddings for the test set. We are only going to visualise the test set embeddings.
###Code
emb_test = emb_all_df.loc[test_data.index, :]
###Output
_____no_output_____
###Markdown
Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their class label
###Code
X = emb_test
y = test_targets
X.shape
transform = TSNE  # or use PCA

trans = transform(n_components=2)
emb_transformed = pd.DataFrame(trans.fit_transform(X), index=test_data.index)
emb_transformed["label"] = y
alpha = 0.7

fig, ax = plt.subplots(figsize=(14, 8,))
ax.scatter(
    emb_transformed[0],
    emb_transformed[1],
    c=emb_transformed["label"].astype("category"),
    cmap="jet",
    alpha=alpha,
)
ax.set(xlabel="$X_1$", ylabel="$X_2$")
plt.title(
    "{} visualization of embeddings for Twitter dataset".format(transform.__name__),
    fontsize=24,
)
plt.show()
###Output
_____no_output_____
###Markdown
The node embeddings shown above indicate that the majority of hateful users tend to cluster together. However, some normal users are also in the same neighbourhood and these will be difficult to distinguish from hateful ones. Similarly, there is a small number of hateful users dispersed among normal users and these will also be difficult to classify correctly. Predictions using Logistic RegressionFinally, we train a Logistic Regression model on the same train and test data but this time ignoring the graph structure and focusing entirely on the node features. The variables `train_data`, `test_data`, `train_targets`, and `test_targets`, hold the data we need to train the Logistic Regression classifier.
###Code lr = LogisticRegressionCV( cv=5, class_weight=class_weight, max_iter=10000 ) # Let's use the default parameters lr.fit(train_data, train_targets.ravel()) ###Output _____no_output_____ ###Markdown We can now use the trained model to predict the test data ###Code test_preds_lr = lr.predict_proba(test_data) test_preds_lr.shape ###Output _____no_output_____ ###Markdown Accuracy ###Code lr.score(test_data, test_targets) ###Output _____no_output_____ ###Markdown Calculate AU-ROC metric ###Code test_predictions_class_lr = ((test_preds_lr[:, 1] > 0.5) * 1).flatten() test_df_lr = pd.DataFrame( { "Predicted_score": test_preds_lr[:, 1].flatten(), "Predicted_class": test_predictions_class_lr, "True": test_targets[:, 0], } ) roc_auc_lr = metrics.roc_auc_score( test_df_lr["True"].values, test_df_lr["Predicted_score"].values ) print("The AUC on test set:\n") print(roc_auc_lr) ###Output The AUC on test set: 0.8090002175324247 ###Markdown The confusion table ###Code pd.crosstab(test_df_lr["True"], test_df_lr["Predicted_class"]) ###Output _____no_output_____ ###Markdown The ROC curve ###Code fpr_lr, tpr_lr, thresholds_lr = metrics.roc_curve( test_df_lr["True"], test_df_lr["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr_lr, tpr_lr, color="darkorange", lw=lw, label="LR ROC curve (area = %0.2f)" % roc_auc_lr, ) plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Let's have a closer look at the True Positive Rate for the GNN and Logistic Regression models at 2% False Positive Rate. ###Code print( "At 2% FPR, GNN TPR={:.3f}, LR TPR={:.3f}".format( np.interp(0.02, fpr, tpr), np.interp(0.02, fpr_lr, tpr_lr) ) ) ###Output At 2% FPR, GNN TPR=0.378, LR TPR=0.253 ###Markdown Detection of Twitter users who use hateful lexicon using graph machine learning with StellargraphWe consider the use-case of identifying hateful users on Twitter motivated by the work in [1] and using the dataset also published in [1]. Classification is based on a graph based on users' retweets and attributes as related to their account activity, and the content of tweets.We pose identifying hateful users as a binary classification problem. We demonstrate the advantage of connected vs unconnected data in a semi-supervised setting with few training examples.For connected data, we use Graph Neural Network methods, GCN [2], GAT [3], and GraphSAGE [4] as implemented in the `stellargraph` library. We pose the problem of identifying hateful tweeter users as node attribute inference in graphs.**References**1. "Like Sheep Among Wolves": Characterizing Hateful Users on Twitter. M. H. Ribeiro, P. H. Calais, Y. A. Santos, V. A. F. Almeida, and W. Meira Jr. arXiv preprint arXiv:1801.00317 (2017).2. Semi-Supervised Classification with Graph Convolutional Networks. T. Kipf, M. Welling. ICLR 2017. arXiv:1609.02907 3. Graph Attention Networks. P. Velickovic et al. ICLR 20184. Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec arXiv:1706.02216 [cs.SI], 2017. 
###Code import networkx as nx import pandas as pd import numpy as np import seaborn as sns import itertools import os from sklearn.decomposition import PCA from sklearn.manifold import TSNE from sklearn.linear_model import LogisticRegressionCV import stellargraph as sg from stellargraph.mapper import GraphSAGENodeGenerator, FullBatchNodeGenerator from stellargraph.layer import GraphSAGE, GCN, GAT from stellargraph import globalvar from tensorflow.keras import layers, optimizers, losses, metrics, Model, models from sklearn import preprocessing, feature_extraction from sklearn.model_selection import train_test_split from sklearn import metrics import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline import matplotlib.pyplot as plt %matplotlib inline def remove_prefix(text, prefix): return text[text.startswith(prefix) and len(prefix):] def plot_history(history): metrics = sorted(set([remove_prefix(m, "val_") for m in list(history.history.keys())])) for m in metrics: # summarize history for metric m plt.plot(history.history[m]) plt.plot(history.history['val_' + m]) plt.title(m, fontsize=18) plt.ylabel(m, fontsize=18) plt.xlabel('epoch', fontsize=18) plt.legend(['train', 'validation'], loc='best') plt.show() ###Output _____no_output_____ ###Markdown Loading the data **Downloading the dataset:**The dataset for this demo was published in [1] and it is freely available to download from Kaggle [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter/home).The following is the description of the datasets:>This dataset contains a network of 100k users, out of which ~5k were annotated as hateful or>not. For each user, several content-related, network-related and activity related features>were provided. Additional files of hateful lexicon can be found [here]( https://github.com/manoelhortaribeiro/HatefulUsersTwitter/tree/master/data/extra)Download the dataset and then set the `data_dir` variable to point to the download location. ###Code data_dir = os.path.expanduser("~/data/hateful-twitter-users") ###Output _____no_output_____ ###Markdown First load and prepare the node featuresEach node in the graph is associated with a large number of features (also referred to as attributes). The list of features is given [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter). We repeated here for convenience.hate :("hateful"|"normal"|"other") if user was annotated as hateful, normal, or not annotated. (is_50|is_50_2) :bool whether user was deleted up to 12/12/17 or 14/01/18. (is_63|is_63_2) :bool whether user was suspended up to 12/12/17 or 14/01/18. (hate|normal)_neigh :bool is the user on the neighborhood of a (hateful|normal) user? [c_] (statuses|follower|followees|favorites)_count :int number of (tweets|follower|followees|favorites) a user has. [c_] listed_count:int number of lists a user is in. [c_] (betweenness|eigenvector|in_degree|outdegree) :float centrality measurements for each user in the retweet graph. [c_] *_empath :float occurrences of empath categories in the users latest 200 tweets. [c_] *_glove :float glove vector calculated for users latest 200 tweets. [c_] (sentiment|subjectivity) :float average sentiment and subjectivity of users tweets. [c_] (time_diff|time_diff_median) :float average and median time difference between tweets. [c_] (tweet|retweet|quote) number :float percentage of direct tweets, retweets and quotes of an user. [c_] (number urls|number hashtags|baddies|mentions) :float number of bad words|mentions|urls|hashtags per tweet in average. 
[c_] status length :float average status length. hashtags :string all hashtags employed by the user separated by spaces. **Notice** that c_ are attributes calculated for the 1-neighborhood of a user in the retweet network (averaged out). First, we are going to load the user features and prepare them for machine learning. ###Code users_feat = pd.read_csv(os.path.join(data_dir, 'users_neighborhood_anon.csv')) users_feat.head() ###Output _____no_output_____ ###Markdown Let's have a look at the distribution of hateful, normal (not hateful), and other (unknown) users in the dataset ###Code print("Initial hateful/normal users distribution") print(users_feat.shape) print(users_feat.hate.value_counts()) ###Output Initial hateful/normal users distribution (100386, 1039) other 95415 normal 4427 hateful 544 Name: hate, dtype: int64 ###Markdown There is a clear imbalance on the number of users tagged as hateful vs normal and unknown. Data cleaning and preprocessing The dataset as given includes a large number of graph related features that are manually extracted. Since we are going to employ modern graph neural networks methods for classification, we are going to drop these manually engineered features. The power of Graph Neural Networks stems from their ability to learn useful graph-related features eliminating the need for manual feature engineering. ###Code def data_cleaning(feat): feat = feat.drop(columns=["hate_neigh", "normal_neigh"]) # Convert target values in hate column from strings to integers (0,1,2) feat['hate'] = np.where(feat['hate']=='hateful', 1, np.where(feat['hate']=='normal', 0, 2)) # missing information number_of_missing = feat.isnull().sum() number_of_missing[number_of_missing!=0] # Replace NA with 0 feat.fillna(0, inplace=True) # droping info about suspension and deletion as it is should not be use din the predictive model feat.drop(feat.columns[feat.columns.str.contains("is_")], axis=1, inplace=True) # drop glove features feat.drop(feat.columns[feat.columns.str.contains("_glove")], axis=1, inplace=True) # drop c_ features feat.drop(feat.columns[feat.columns.str.contains("c_")], axis=1, inplace=True) # drop sentiment features for now feat.drop(feat.columns[feat.columns.str.contains("sentiment")], axis=1, inplace=True) # drop hashtag feature feat.drop(['hashtags'], axis=1, inplace=True) # Drop centrality based measures feat.drop(columns=['betweenness', 'eigenvector', 'in_degree', 'out_degree'], inplace=True) feat.drop(columns=['created_at'], inplace=True) return feat node_data = data_cleaning(users_feat) ###Output _____no_output_____ ###Markdown Of the original **1037** node features, we are keeping only **204** that are based on a user's attributes and tweet lexicon. We have removed any manually engineered graph features since the graph neural network algorithms we are going to use will automatically determine the best features to use during training. ###Code node_data.shape node_data.head() ###Output _____no_output_____ ###Markdown The continous features in our dataset have distributions with very long tails. We apply normalization to correct for this. 
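For reference, the Yeo-Johnson power transform used below maps each feature value $y$ through (with a per-feature parameter $\lambda$ fitted by maximum likelihood, followed by standardisation to zero mean and unit variance because we pass `standardize=True`):

$$\psi(y;\lambda)=\begin{cases}\dfrac{(y+1)^{\lambda}-1}{\lambda} & y\ge 0,\ \lambda\ne 0\\ \log(y+1) & y\ge 0,\ \lambda=0\\ -\dfrac{(1-y)^{2-\lambda}-1}{2-\lambda} & y<0,\ \lambda\ne 2\\ -\log(1-y) & y<0,\ \lambda=2\end{cases}$$

This is the standard definition implemented by scikit-learn's `PowerTransformer`; it is included here only as a reminder of what the cell below computes.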
###Code # Ignore the first two columns because those are user_id and hate (the target variable) df_values = node_data.iloc[:, 2:].values pt = preprocessing.PowerTransformer(method='yeo-johnson', standardize=True) df_values_log = pt.fit_transform(df_values) ###Output _____no_output_____ ###Markdown Let's have a look at one of the normalized features before and after the power transform was applied.The feature we are going to look at is a user's number of followers. ###Code sns_rc = {'lines.linewidth': 3, 'figure.figsize':(12,6)} sns.set_context("paper", rc = sns_rc) sns.set_style("whitegrid", {'axes.grid' : False}) sns.kdeplot(df_values[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers, before Power Transform", fontsize=18) sns.kdeplot(df_values_log[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers after Power Transform", fontsize=18) ###Output _____no_output_____ ###Markdown Feature normalization looks like it is doing the right thing as the raw features have long tails that are eliminated after applying the power transform. So let us use the normalized features from now on. ###Code node_data.iloc[:, 2:] = df_values_log # Set the dataframe index to be the same as the user_id and drop the user_id columns node_data.index = node_data.index.map(str) node_data.drop(columns=['user_id'], inplace=True) ###Output _____no_output_____ ###Markdown Node features are now ready for machine learning. ###Code node_data.head() ###Output _____no_output_____ ###Markdown Next load the graphNow that we have the node features prepared for machine learning, let us load the retweet graph. ###Code g_nx = nx.read_edgelist(path=os.path.expanduser(os.path.join(data_dir, "users.edges"))) g_nx.number_of_nodes(), g_nx.number_of_edges() ###Output _____no_output_____ ###Markdown The graph has just over 100k nodes and approximately 2.2m edges.We aim to train a graph neural network model that will predict the "hate"attribute on the nodes.For computation convenience, we have mapped the target labels **normal**, **hateful**, and **other** to the numeric values **0**, **1**, and **2** respectively. ###Code print(set(node_data["hate"])) ###Output {0, 1, 2} ###Markdown Splitting the data For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to split our data into training and test sets.The total number of annotated nodes is very small when compared to the total number of nodes in the graph. We are only going to use 15% of the annotated nodes for training and the remaining 85% of nodes for testing.First, we are going to select the subset of nodes that are annotated as hateful or normal. These will be the nodes that have 'hate' values that are either 0 or 1. ###Code # choose the nodes annotated with normal or hateful classes annotated_users = node_data[node_data['hate']!=2] annotated_users.head() annotated_users.shape annotated_user_features = annotated_users.drop(columns=['hate']) annotated_user_targets = annotated_users[['hate']] ###Output _____no_output_____ ###Markdown There are 4971 annoted nodes out of a possible, approximately, 100k nodes. 
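As a quick sanity check on the split below: with `test_size=0.85`, roughly $0.15 \times 4971 \approx 745$ annotated nodes end up in the training set and the remaining 4226 in the test set.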
###Code print(annotated_user_targets.hate.value_counts()) # split the data train_data, test_data, train_targets, test_targets = train_test_split(annotated_user_features, annotated_user_targets, test_size=0.85, random_state=101) train_targets = train_targets.values test_targets = test_targets.values print("Sizes and class distributions for train/test data") print("Shape train_data {}".format(train_data.shape)) print("Shape test_data {}".format(test_data.shape)) print("Train data number of 0s {} and 1s {}".format(np.sum(train_targets==0), np.sum(train_targets==1))) print("Test data number of 0s {} and 1s {}".format(np.sum(test_targets==0), np.sum(test_targets==1))) train_targets.shape, test_targets.shape train_data.shape, test_data.shape ###Output _____no_output_____ ###Markdown We are going to use 745 nodes for training and 4226 nodes for testing. ###Code # choosing features to assign to a graph, excluding target variable node_features = node_data.drop(columns=['hate']) ###Output _____no_output_____ ###Markdown Dealing with imbalanced dataBecause the training data exhibit high imbalance, we introduce class weights. ###Code from sklearn.utils.class_weight import compute_class_weight class_weights = compute_class_weight('balanced', np.unique(train_targets), train_targets[:,0]) train_class_weights = dict(zip(np.unique(train_targets), class_weights)) train_class_weights ###Output _____no_output_____ ###Markdown Our data is now ready for machine learning.Node features are stored in the Pandas DataFrame `node_features`.The graph in networkx format is stored in the variable `g_nx`. Specify global parametersHere we specify some parameters that control the type of model we are going to use. For example, we specify the base model type, e.g., GCN, GraphSAGE, etc, as well as model-specific parameters. ###Code model_type = 'graphsage' # Can be either gcn, gat, or graphsage if model_type == "graphsage": # For GraphSAGE model batch_size = 50; num_samples = [20, 10] epochs = 30 # The number of training epochs elif model_type == "gcn": # For GCN model epochs = 20 # The number of training epochs elif model_type == "gat": # For GAT model layer_sizes = [8, 1] attention_heads = 8 epochs = 20 # The number of training epochs ###Output _____no_output_____ ###Markdown Creating the base graph machine learning model in Keras Now create a `StellarGraph` object from the `NetworkX` graph and the node features and targets. It is `StellarGraph` objects that we use in this library to perform machine learning tasks on. ###Code G = sg.StellarGraph(g_nx, node_features=node_features) ###Output _____no_output_____ ###Markdown To feed data from the graph to the Keras model we need a generator. The generators are specialized to the model and the learning task. For training we map only the training nodes returned from our splitter and the target values. ###Code if model_type == 'graphsage': generator = GraphSAGENodeGenerator(G, batch_size, num_samples) train_gen = generator.flow(train_data.index, train_targets, shuffle=True) elif model_type == 'gcn': generator = FullBatchNodeGenerator(G, method="gcn", sparse=True) train_gen = generator.flow(train_data.index, train_targets, ) elif model_type == 'gat': generator = FullBatchNodeGenerator(G, method="gat", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) ###Output _____no_output_____ ###Markdown Next we create the GNN model. We need to specify model-specific parameters based on whether we want to use GCN, GAT, or GraphSAGE. 
###Code if model_type == 'graphsage': base_model = GraphSAGE( layer_sizes=[32, 32], generator=train_gen, bias=True, dropout=0.5, ) x_inp, x_out = base_model.default_model(flatten_output=True) prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == 'gcn': base_model = GCN( layer_sizes=[32, 16], generator = generator, bias=True, dropout=0.5, activations=["elu", "elu"] ) x_inp, x_out = base_model.node_model() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == 'gat': base_model = GAT( layer_sizes=layer_sizes, attn_heads=attention_heads, generator=generator, bias=True, in_dropout=0.5, attn_dropout=0.5, activations=["elu", "sigmoid"], normalize=None, ) x_inp, prediction = base_model.node_model() ###Output _____no_output_____ ###Markdown Create a Keras model Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `base_model` and outputs being the predictions from the softmax layer. ###Code model = Model(inputs=x_inp, outputs=prediction) ###Output _____no_output_____ ###Markdown We compile our Keras model to use the `Adam` optimiser and the binary cross entropy loss. ###Code model.compile( optimizer=optimizers.Adam(lr=0.005), loss=losses.binary_crossentropy, metrics=["acc"], ) model ###Output _____no_output_____ ###Markdown Train the model, keeping track of its loss and accuracy on the training set, and its performance on the test set during the training. We don't use the test set during training but only for measuring the trained model's generalization performance. ###Code test_gen = generator.flow(test_data.index, test_targets) ###Output _____no_output_____ ###Markdown Now we can train the model by calling the `fit_generator` method. ###Code class_weight = None if model_type == 'graphsage': class_weight=train_class_weights history = model.fit_generator( train_gen, epochs=epochs, validation_data=test_gen, verbose=0, shuffle=False, class_weight=class_weight, ) plot_history(history) ###Output _____no_output_____ ###Markdown Model Evaluation Now we have trained the model, let's evaluate it on the test set.We are going to consider 4 evaluation metrics calculated on the test set: Accuracy, Area Under the ROC curve (AU-ROC), the ROC curve, and the confusion table. Accuracy ###Code test_metrics = model.evaluate_generator(test_gen) print("\nTest Set Metrics:") for name, val in zip(model.metrics_names, test_metrics): print("\t{}: {:0.4f}".format(name, val)) ###Output Test Set Metrics: loss: 0.3424 acc: 0.8874 ###Markdown AU-ROCLet's use the trained GNN model to make a prediction for each node in the graph.Then, select only the predictions for the nodes in the test set and calculate the AU-ROC as another performance metric in addition to the accuracy shown above. ###Code all_nodes = node_data.index all_gen = generator.flow(all_nodes) all_predictions = model.predict_generator(all_gen).squeeze()[..., np.newaxis] all_predictions.shape all_predictions_df = pd.DataFrame(all_predictions, index=node_data.index) ###Output _____no_output_____ ###Markdown Let's extract the predictions for the test data only. ###Code test_preds = all_predictions_df.loc[test_data.index, :] test_preds.shape ###Output _____no_output_____ ###Markdown The predictions are the probability of the true class that in this case is the probability of a user being hateful. 
###Code test_preds.head() test_predictions = test_preds.values test_predictions_class = ((test_predictions>0.5)*1).flatten() test_df = pd.DataFrame({"Predicted_score": test_predictions.flatten(), "Predicted_class": test_predictions_class, "True": test_targets[:,0]}) roc_auc = metrics.roc_auc_score(test_df['True'].values, test_df['Predicted_score'].values) print("The AUC on test set:\n") print(roc_auc) ###Output The AUC on test set: 0.8845494008100929 ###Markdown Confusion table ###Code pd.crosstab(test_df['True'], test_df['Predicted_class']) ###Output _____no_output_____ ###Markdown ROC curve ###Code fpr, tpr, thresholds = metrics.roc_curve(test_df['True'], test_df['Predicted_score'], pos_label=1) plt.figure(figsize=(12,6,)) lw = 2 plt.plot(fpr, tpr, color='darkblue', lw=lw, label='GNN ROC curve (area = %0.2f)' % roc_auc) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate', fontsize=18) plt.ylabel('True Positive Rate', fontsize=18) plt.title('Receiver operating characteristic curve', fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Visualisation of node embeddingsEvaluate node embeddings as activations of the output of one of the graph convolutional or aggregation layers in the Keras model, and visualise them, coloring nodes by their subject label.You can find the index of the layer of interest by calling `model.layers`. First, create a Keras model for calculating the embeddings ###Code model.layers if model_type == 'graphsage': # For GraphSAGE, we are going to use the output activations # of the second GraphSAGE layer as the node embeddings # x_inp, prediction emb_model = Model(inputs=x_inp, outputs=model.layers[-4].output) emb = emb_model.predict_generator(generator=all_gen, ) elif model_type == 'gcn': # For GCN, we are going to use the output activations of # the second GCN layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict_generator(generator=all_gen) elif model_type == 'gat': # For GAT, we are going to use the output activations of the # first Graph Attention layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict_generator(generator=all_gen) emb.shape emb = emb.squeeze() if model_type == "graphsage": emb_all_df = pd.DataFrame(emb, index=node_data.index) elif model_type == "gcn" or model_type == "gat": emb_all_df = pd.DataFrame(emb, index=G.nodes()) ###Output _____no_output_____ ###Markdown Select the embeddings for the test set. We are only going to visualise the test set embeddings. 
###Code emb_test = emb_all_df.loc[test_data.index, :] ###Output _____no_output_____ ###Markdown Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their subject label ###Code X = emb_test y = test_targets X.shape transform = TSNE # or use PCA trans = transform(n_components=2) emb_transformed = pd.DataFrame(trans.fit_transform(X), index=test_data.index) emb_transformed['label'] = y alpha = 0.7 fig, ax = plt.subplots(figsize=(14,8,)) ax.scatter(emb_transformed[0], emb_transformed[1], c=emb_transformed['label'].astype("category"), cmap="jet", alpha=alpha) ax.set(xlabel="$X_1$", ylabel="$X_2$") plt.title('{} visualization of embeddings for tweeter dataset'.format(transform.__name__), fontsize=24) plt.show() ###Output _____no_output_____ ###Markdown The node embeddings shown above indicate that the majority of hateful users tend to cluster together. However, some normal users are also in the same neighbourhood and these will be difficult to distinguish from hateful ones. Similarly, there is a small number of hateful users dispersed among normal users and these will also be difficult classify correctly. Predictions using Logistic RegressionFinally, we train a Logistic Regression model on the same train and test data but this time ignoring the graph structure and focusing entirely on the node features. The variables `train_data`, `test_data`, `train_targets`, and `test_targets`, hold the data we need to train the Logistic Regression classifier. ###Code lr = LogisticRegressionCV(cv=5, class_weight=class_weight, max_iter=10000) # Let's use the default parameters lr.fit(train_data, train_targets.ravel()) ###Output _____no_output_____ ###Markdown We can now use the trained model to predict the test data ###Code test_preds_lr = lr.predict_proba(test_data) test_preds_lr.shape ###Output _____no_output_____ ###Markdown Accuracy ###Code lr.score(test_data, test_targets) ###Output _____no_output_____ ###Markdown Calculate AU-ROC metric ###Code test_predictions_class_lr = ((test_preds_lr[:, 1]>0.5)*1).flatten() test_df_lr = pd.DataFrame({"Predicted_score": test_preds_lr[:, 1].flatten(), "Predicted_class": test_predictions_class_lr, "True": test_targets[:,0]}) roc_auc_lr = metrics.roc_auc_score(test_df_lr['True'].values, test_df_lr['Predicted_score'].values) print("The AUC on test set:\n") print(roc_auc_lr) ###Output The AUC on test set: 0.8090002175324247 ###Markdown The confusion table ###Code pd.crosstab(test_df_lr['True'], test_df_lr['Predicted_class']) ###Output _____no_output_____ ###Markdown The ROC curve ###Code fpr_lr, tpr_lr, thresholds_lr = metrics.roc_curve(test_df_lr['True'], test_df_lr['Predicted_score'], pos_label=1) plt.figure(figsize=(12,6,)) lw = 2 plt.plot(fpr_lr, tpr_lr, color='darkorange', lw=lw, label='LR ROC curve (area = %0.2f)' % roc_auc_lr) plt.plot(fpr, tpr, color='darkblue', lw=lw, label='GNN ROC curve (area = %0.2f)' % roc_auc) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate', fontsize=18) plt.ylabel('True Positive Rate', fontsize=18) plt.title('Receiver operating characteristic curve', fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Let's have a closer look at the True Positive Rate for the GNN and Logistic Regression models at 2% False Positive Rate. 
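The cell below uses `np.interp` to linearly interpolate each model's ROC curve at a false positive rate of 0.02 and report the corresponding true positive rate.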
###Code print("At 2% FPR, GNN TPR={:.3f}, LR TPR={:.3f}".format(np.interp(0.02, fpr, tpr), np.interp(0.02, fpr_lr, tpr_lr))) ###Output At 2% FPR, GNN TPR=0.378, LR TPR=0.253 ###Markdown Detection of Twitter users who use hateful lexicon using graph machine learning with StellargraphWe consider the use-case of identifying hateful users on Twitter motivated by the work in [1] and using the dataset also published in [1]. Classification is based on a graph based on users' retweets and attributes as related to their account activity, and the content of tweets.We pose identifying hateful users as a binary classification problem. We demonstrate the advantage of connected vs unconnected data in a semi-supervised setting with few training examples.For connected data, we use Graph Neural Network methods, GCN [2], GAT [3], and GraphSAGE [4] as implemented in the `stellargraph` library. We pose the problem of identifying hateful tweeter users as node attribute inference in graphs.**References**1. "Like Sheep Among Wolves": Characterizing Hateful Users on Twitter. M. H. Ribeiro, P. H. Calais, Y. A. Santos, V. A. F. Almeida, and W. Meira Jr. arXiv preprint arXiv:1801.00317 (2017).2. Semi-Supervised Classification with Graph Convolutional Networks. T. Kipf, M. Welling. ICLR 2017. arXiv:1609.02907 3. Graph Attention Networks. P. Velickovic et al. ICLR 20184. Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec arXiv:1706.02216 [cs.SI], 2017. ###Code import networkx as nx import pandas as pd import numpy as np import seaborn as sns import itertools import os from sklearn.decomposition import PCA from sklearn.manifold import TSNE from sklearn.linear_model import LogisticRegressionCV import stellargraph as sg from stellargraph.mapper import GraphSAGENodeGenerator, FullBatchNodeGenerator from stellargraph.layer import GraphSAGE, GCN, GAT from stellargraph import globalvar from tensorflow.keras import layers, optimizers, losses, metrics, Model, models from sklearn import preprocessing, feature_extraction from sklearn.model_selection import train_test_split from sklearn import metrics import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline import matplotlib.pyplot as plt %matplotlib inline def remove_prefix(text, prefix): return text[text.startswith(prefix) and len(prefix):] def plot_history(history): metrics = sorted(set([remove_prefix(m, "val_") for m in list(history.history.keys())])) for m in metrics: # summarize history for metric m plt.plot(history.history[m]) plt.plot(history.history['val_' + m]) plt.title(m, fontsize=18) plt.ylabel(m, fontsize=18) plt.xlabel('epoch', fontsize=18) plt.legend(['train', 'validation'], loc='best') plt.show() ###Output _____no_output_____ ###Markdown Loading the data **Downloading the dataset:**The dataset for this demo was published in [1] and it is freely available to download from Kaggle [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter/home).The following is the description of the datasets:>This dataset contains a network of 100k users, out of which ~5k were annotated as hateful or>not. For each user, several content-related, network-related and activity related features>were provided. Additional files of hateful lexicon can be found [here]( https://github.com/manoelhortaribeiro/HatefulUsersTwitter/tree/master/data/extra)Download the dataset and then set the `data_dir` variable to point to the download location. 
###Code data_dir = os.path.expanduser("~/data/hateful-twitter-users") ###Output _____no_output_____ ###Markdown First load and prepare the node featuresEach node in the graph is associated with a large number of features (also referred to as attributes). The list of features is given [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter). We repeated here for convenience.hate :("hateful"|"normal"|"other") if user was annotated as hateful, normal, or not annotated. (is_50|is_50_2) :bool whether user was deleted up to 12/12/17 or 14/01/18. (is_63|is_63_2) :bool whether user was suspended up to 12/12/17 or 14/01/18. (hate|normal)_neigh :bool is the user on the neighborhood of a (hateful|normal) user? [c_] (statuses|follower|followees|favorites)_count :int number of (tweets|follower|followees|favorites) a user has. [c_] listed_count:int number of lists a user is in. [c_] (betweenness|eigenvector|in_degree|outdegree) :float centrality measurements for each user in the retweet graph. [c_] *_empath :float occurrences of empath categories in the users latest 200 tweets. [c_] *_glove :float glove vector calculated for users latest 200 tweets. [c_] (sentiment|subjectivity) :float average sentiment and subjectivity of users tweets. [c_] (time_diff|time_diff_median) :float average and median time difference between tweets. [c_] (tweet|retweet|quote) number :float percentage of direct tweets, retweets and quotes of an user. [c_] (number urls|number hashtags|baddies|mentions) :float number of bad words|mentions|urls|hashtags per tweet in average. [c_] status length :float average status length. hashtags :string all hashtags employed by the user separated by spaces. **Notice** that c_ are attributes calculated for the 1-neighborhood of a user in the retweet network (averaged out). First, we are going to load the user features and prepare them for machine learning. ###Code users_feat = pd.read_csv(os.path.join(data_dir, 'users_neighborhood_anon.csv')) users_feat.head() ###Output _____no_output_____ ###Markdown Let's have a look at the distribution of hateful, normal (not hateful), and other (unknown) users in the dataset ###Code print("Initial hateful/normal users distribution") print(users_feat.shape) print(users_feat.hate.value_counts()) ###Output Initial hateful/normal users distribution (100386, 1039) other 95415 normal 4427 hateful 544 Name: hate, dtype: int64 ###Markdown There is a clear imbalance on the number of users tagged as hateful vs normal and unknown. Data cleaning and preprocessing The dataset as given includes a large number of graph related features that are manually extracted. Since we are going to employ modern graph neural networks methods for classification, we are going to drop these manually engineered features. The power of Graph Neural Networks stems from their ability to learn useful graph-related features eliminating the need for manual feature engineering. 
###Code def data_cleaning(feat): feat = feat.drop(columns=["hate_neigh", "normal_neigh"]) # Convert target values in hate column from strings to integers (0,1,2) feat['hate'] = np.where(feat['hate']=='hateful', 1, np.where(feat['hate']=='normal', 0, 2)) # missing information number_of_missing = feat.isnull().sum() number_of_missing[number_of_missing!=0] # Replace NA with 0 feat.fillna(0, inplace=True) # droping info about suspension and deletion as it is should not be use din the predictive model feat.drop(feat.columns[feat.columns.str.contains("is_")], axis=1, inplace=True) # drop glove features feat.drop(feat.columns[feat.columns.str.contains("_glove")], axis=1, inplace=True) # drop c_ features feat.drop(feat.columns[feat.columns.str.contains("c_")], axis=1, inplace=True) # drop sentiment features for now feat.drop(feat.columns[feat.columns.str.contains("sentiment")], axis=1, inplace=True) # drop hashtag feature feat.drop(['hashtags'], axis=1, inplace=True) # Drop centrality based measures feat.drop(columns=['betweenness', 'eigenvector', 'in_degree', 'out_degree'], inplace=True) feat.drop(columns=['created_at'], inplace=True) return feat node_data = data_cleaning(users_feat) ###Output _____no_output_____ ###Markdown Of the original **1037** node features, we are keeping only **204** that are based on a user's attributes and tweet lexicon. We have removed any manually engineered graph features since the graph neural network algorithms we are going to use will automatically determine the best features to use during training. ###Code node_data.shape node_data.head() ###Output _____no_output_____ ###Markdown The continous features in our dataset have distributions with very long tails. We apply normalization to correct for this. ###Code # Ignore the first two columns because those are user_id and hate (the target variable) df_values = node_data.iloc[:, 2:].values pt = preprocessing.PowerTransformer(method='yeo-johnson', standardize=True) df_values_log = pt.fit_transform(df_values) ###Output _____no_output_____ ###Markdown Let's have a look at one of the normalized features before and after the power transform was applied.The feature we are going to look at is a user's number of followers. ###Code sns_rc = {'lines.linewidth': 3, 'figure.figsize':(12,6)} sns.set_context("paper", rc = sns_rc) sns.set_style("whitegrid", {'axes.grid' : False}) sns.kdeplot(df_values[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers, before Power Transform", fontsize=18) sns.kdeplot(df_values_log[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers after Power Transform", fontsize=18) ###Output _____no_output_____ ###Markdown Feature normalization looks like it is doing the right thing as the raw features have long tails that are eliminated after applying the power transform. So let us use the normalized features from now on. ###Code node_data.iloc[:, 2:] = df_values_log # Set the dataframe index to be the same as the user_id and drop the user_id columns node_data.index = node_data.index.map(str) node_data.drop(columns=['user_id'], inplace=True) ###Output _____no_output_____ ###Markdown Node features are now ready for machine learning. ###Code node_data.head() ###Output _____no_output_____ ###Markdown Next load the graphNow that we have the node features prepared for machine learning, let us load the retweet graph. 
###Code g_nx = nx.read_edgelist(path=os.path.expanduser(os.path.join(data_dir, "users.edges"))) g_nx.number_of_nodes(), g_nx.number_of_edges() ###Output _____no_output_____ ###Markdown The graph has just over 100k nodes and approximately 2.2m edges.We aim to train a graph neural network model that will predict the "hate"attribute on the nodes.For computation convenience, we have mapped the target labels **normal**, **hateful**, and **other** to the numeric values **0**, **1**, and **2** respectively. ###Code print(set(node_data["hate"])) ###Output {0, 1, 2} ###Markdown Splitting the data For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to split our data into training and test sets.The total number of annotated nodes is very small when compared to the total number of nodes in the graph. We are only going to use 15% of the annotated nodes for training and the remaining 85% of nodes for testing.First, we are going to select the subset of nodes that are annotated as hateful or normal. These will be the nodes that have 'hate' values that are either 0 or 1. ###Code # choose the nodes annotated with normal or hateful classes annotated_users = node_data[node_data['hate']!=2] annotated_users.head() annotated_users.shape annotated_user_features = annotated_users.drop(columns=['hate']) annotated_user_targets = annotated_users[['hate']] ###Output _____no_output_____ ###Markdown There are 4971 annoted nodes out of a possible, approximately, 100k nodes. ###Code print(annotated_user_targets.hate.value_counts()) # split the data train_data, test_data, train_targets, test_targets = train_test_split(annotated_user_features, annotated_user_targets, test_size=0.85, random_state=101) train_targets = train_targets.values test_targets = test_targets.values print("Sizes and class distributions for train/test data") print("Shape train_data {}".format(train_data.shape)) print("Shape test_data {}".format(test_data.shape)) print("Train data number of 0s {} and 1s {}".format(np.sum(train_targets==0), np.sum(train_targets==1))) print("Test data number of 0s {} and 1s {}".format(np.sum(test_targets==0), np.sum(test_targets==1))) train_targets.shape, test_targets.shape train_data.shape, test_data.shape ###Output _____no_output_____ ###Markdown We are going to use 745 nodes for training and 4226 nodes for testing. ###Code # choosing features to assign to a graph, excluding target variable node_features = node_data.drop(columns=['hate']) ###Output _____no_output_____ ###Markdown Dealing with imbalanced dataBecause the training data exhibit high imbalance, we introduce class weights. ###Code from sklearn.utils.class_weight import compute_class_weight class_weights = compute_class_weight('balanced', np.unique(train_targets), train_targets[:,0]) train_class_weights = dict(zip(np.unique(train_targets), class_weights)) train_class_weights ###Output _____no_output_____ ###Markdown Our data is now ready for machine learning.Node features are stored in the Pandas DataFrame `node_features`.The graph in networkx format is stored in the variable `g_nx`. Specify global parametersHere we specify some parameters that control the type of model we are going to use. For example, we specify the base model type, e.g., GCN, GraphSAGE, etc, as well as model-specific parameters. 
###Code model_type = 'graphsage' # Can be either gcn, gat, or graphsage if model_type == "graphsage": # For GraphSAGE model batch_size = 50; num_samples = [20, 10] epochs = 30 # The number of training epochs elif model_type == "gcn": # For GCN model epochs = 20 # The number of training epochs elif model_type == "gat": # For GAT model layer_sizes = [8, 1] attention_heads = 8 epochs = 20 # The number of training epochs ###Output _____no_output_____ ###Markdown Creating the base graph machine learning model in Keras Now create a `StellarGraph` object from the `NetworkX` graph and the node features and targets. It is `StellarGraph` objects that we use in this library to perform machine learning tasks on. ###Code G = sg.StellarGraph(g_nx, node_features=node_features) ###Output _____no_output_____ ###Markdown To feed data from the graph to the Keras model we need a generator. The generators are specialized to the model and the learning task. For training we map only the training nodes returned from our splitter and the target values. ###Code if model_type == 'graphsage': generator = GraphSAGENodeGenerator(G, batch_size, num_samples) train_gen = generator.flow(train_data.index, train_targets, shuffle=True) elif model_type == 'gcn': generator = FullBatchNodeGenerator(G, method="gcn", sparse=True) train_gen = generator.flow(train_data.index, train_targets, ) elif model_type == 'gat': generator = FullBatchNodeGenerator(G, method="gat", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) ###Output _____no_output_____ ###Markdown Next we create the GNN model. We need to specify model-specific parameters based on whether we want to use GCN, GAT, or GraphSAGE. ###Code if model_type == 'graphsage': base_model = GraphSAGE( layer_sizes=[32, 32], generator=train_gen, bias=True, dropout=0.5, ) x_inp, x_out = base_model.default_model(flatten_output=True) prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == 'gcn': base_model = GCN( layer_sizes=[32, 16], generator = generator, bias=True, dropout=0.5, activations=["elu", "elu"] ) x_inp, x_out = base_model.node_model() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == 'gat': base_model = GAT( layer_sizes=layer_sizes, attn_heads=attention_heads, generator=generator, bias=True, in_dropout=0.5, attn_dropout=0.5, activations=["elu", "sigmoid"], normalize=None, ) x_inp, prediction = base_model.node_model() ###Output _____no_output_____ ###Markdown Create a Keras model Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `base_model` and outputs being the predictions from the softmax layer. ###Code model = Model(inputs=x_inp, outputs=prediction) ###Output _____no_output_____ ###Markdown We compile our Keras model to use the `Adam` optimiser and the binary cross entropy loss. ###Code model.compile( optimizer=optimizers.Adam(lr=0.005), loss=losses.binary_crossentropy, metrics=["acc"], ) model ###Output _____no_output_____ ###Markdown Train the model, keeping track of its loss and accuracy on the training set, and its performance on the test set during the training. We don't use the test set during training but only for measuring the trained model's generalization performance. ###Code test_gen = generator.flow(test_data.index, test_targets) ###Output _____no_output_____ ###Markdown Now we can train the model by calling the `fit_generator` method. 
###Code class_weight = None if model_type == 'graphsage': class_weight=train_class_weights history = model.fit_generator( train_gen, epochs=epochs, validation_data=test_gen, verbose=0, shuffle=False, class_weight=class_weight, ) plot_history(history) ###Output _____no_output_____ ###Markdown Model Evaluation Now we have trained the model, let's evaluate it on the test set.We are going to consider 4 evaluation metrics calculated on the test set: Accuracy, Area Under the ROC curve (AU-ROC), the ROC curve, and the confusion table. Accuracy ###Code test_metrics = model.evaluate_generator(test_gen) print("\nTest Set Metrics:") for name, val in zip(model.metrics_names, test_metrics): print("\t{}: {:0.4f}".format(name, val)) ###Output Test Set Metrics: loss: 0.3424 acc: 0.8874 ###Markdown AU-ROCLet's use the trained GNN model to make a prediction for each node in the graph.Then, select only the predictions for the nodes in the test set and calculate the AU-ROC as another performance metric in addition to the accuracy shown above. ###Code all_nodes = node_data.index all_gen = generator.flow(all_nodes) all_predictions = model.predict_generator(all_gen).squeeze()[..., np.newaxis] all_predictions.shape all_predictions_df = pd.DataFrame(all_predictions, index=node_data.index) ###Output _____no_output_____ ###Markdown Let's extract the predictions for the test data only. ###Code test_preds = all_predictions_df.loc[test_data.index, :] test_preds.shape ###Output _____no_output_____ ###Markdown The predictions are the probability of the true class that in this case is the probability of a user being hateful. ###Code test_preds.head() test_predictions = test_preds.values test_predictions_class = ((test_predictions>0.5)*1).flatten() test_df = pd.DataFrame({"Predicted_score": test_predictions.flatten(), "Predicted_class": test_predictions_class, "True": test_targets[:,0]}) roc_auc = metrics.roc_auc_score(test_df['True'].values, test_df['Predicted_score'].values) print("The AUC on test set:\n") print(roc_auc) ###Output The AUC on test set: 0.8845494008100929 ###Markdown Confusion table ###Code pd.crosstab(test_df['True'], test_df['Predicted_class']) ###Output _____no_output_____ ###Markdown ROC curve ###Code fpr, tpr, thresholds = metrics.roc_curve(test_df['True'], test_df['Predicted_score'], pos_label=1) plt.figure(figsize=(12,6,)) lw = 2 plt.plot(fpr, tpr, color='darkblue', lw=lw, label='GNN ROC curve (area = %0.2f)' % roc_auc) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate', fontsize=18) plt.ylabel('True Positive Rate', fontsize=18) plt.title('Receiver operating characteristic curve', fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Visualisation of node embeddingsEvaluate node embeddings as activations of the output of one of the graph convolutional or aggregation layers in the Keras model, and visualise them, coloring nodes by their subject label.You can find the index of the layer of interest by calling `model.layers`. 
First, create a Keras model for calculating the embeddings ###Code model.layers if model_type == 'graphsage': # For GraphSAGE, we are going to use the output activations # of the second GraphSAGE layer as the node embeddings # x_inp, prediction emb_model = Model(inputs=x_inp, outputs=model.layers[-4].output) emb = emb_model.predict_generator(generator=all_gen, ) elif model_type == 'gcn': # For GCN, we are going to use the output activations of # the second GCN layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict_generator(generator=all_gen) elif model_type == 'gat': # For GAT, we are going to use the output activations of the # first Graph Attention layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict_generator(generator=all_gen) emb.shape emb = emb.squeeze() if model_type == "graphsage": emb_all_df = pd.DataFrame(emb, index=node_data.index) elif model_type == "gcn" or model_type == "gat": emb_all_df = pd.DataFrame(emb, index=G.nodes()) ###Output _____no_output_____ ###Markdown Select the embeddings for the test set. We are only going to visualise the test set embeddings. ###Code emb_test = emb_all_df.loc[test_data.index, :] ###Output _____no_output_____ ###Markdown Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their subject label ###Code X = emb_test y = test_targets X.shape transform = TSNE # or use PCA trans = transform(n_components=2) emb_transformed = pd.DataFrame(trans.fit_transform(X), index=test_data.index) emb_transformed['label'] = y alpha = 0.7 fig, ax = plt.subplots(figsize=(14,8,)) ax.scatter(emb_transformed[0], emb_transformed[1], c=emb_transformed['label'].astype("category"), cmap="jet", alpha=alpha) ax.set(xlabel="$X_1$", ylabel="$X_2$") plt.title('{} visualization of embeddings for tweeter dataset'.format(transform.__name__), fontsize=24) plt.show() ###Output _____no_output_____ ###Markdown The node embeddings shown above indicate that the majority of hateful users tend to cluster together. However, some normal users are also in the same neighbourhood and these will be difficult to distinguish from hateful ones. Similarly, there is a small number of hateful users dispersed among normal users and these will also be difficult classify correctly. Predictions using Logistic RegressionFinally, we train a Logistic Regression model on the same train and test data but this time ignoring the graph structure and focusing entirely on the node features. The variables `train_data`, `test_data`, `train_targets`, and `test_targets`, hold the data we need to train the Logistic Regression classifier. 
###Code lr = LogisticRegressionCV(cv=5, class_weight=class_weight, max_iter=10000) # Let's use the default parameters lr.fit(train_data, train_targets.ravel()) ###Output _____no_output_____ ###Markdown We can now use the trained model to predict the test data ###Code test_preds_lr = lr.predict_proba(test_data) test_preds_lr.shape ###Output _____no_output_____ ###Markdown Accuracy ###Code lr.score(test_data, test_targets) ###Output _____no_output_____ ###Markdown Calculate AU-ROC metric ###Code test_predictions_class_lr = ((test_preds_lr[:, 1]>0.5)*1).flatten() test_df_lr = pd.DataFrame({"Predicted_score": test_preds_lr[:, 1].flatten(), "Predicted_class": test_predictions_class_lr, "True": test_targets[:,0]}) roc_auc_lr = metrics.roc_auc_score(test_df_lr['True'].values, test_df_lr['Predicted_score'].values) print("The AUC on test set:\n") print(roc_auc_lr) ###Output The AUC on test set: 0.8090002175324247 ###Markdown The confusion table ###Code pd.crosstab(test_df_lr['True'], test_df_lr['Predicted_class']) ###Output _____no_output_____ ###Markdown The ROC curve ###Code fpr_lr, tpr_lr, thresholds_lr = metrics.roc_curve(test_df_lr['True'], test_df_lr['Predicted_score'], pos_label=1) plt.figure(figsize=(12,6,)) lw = 2 plt.plot(fpr_lr, tpr_lr, color='darkorange', lw=lw, label='LR ROC curve (area = %0.2f)' % roc_auc_lr) plt.plot(fpr, tpr, color='darkblue', lw=lw, label='GNN ROC curve (area = %0.2f)' % roc_auc) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate', fontsize=18) plt.ylabel('True Positive Rate', fontsize=18) plt.title('Receiver operating characteristic curve', fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Let's have a closer look at the True Positive Rate for the GNN and Logistic Regression models at 2% False Positive Rate. ###Code print("At 2% FPR, GNN TPR={:.3f}, LR TPR={:.3f}".format(np.interp(0.02, fpr, tpr), np.interp(0.02, fpr_lr, tpr_lr))) ###Output At 2% FPR, GNN TPR=0.378, LR TPR=0.253 ###Markdown Detection of Twitter users who use hateful lexicon using graph machine learning with StellargraphWe consider the use-case of identifying hateful users on Twitter motivated by the work in [1] and using the dataset also published in [1]. Classification is based on a graph based on users' retweets and attributes as related to their account activity, and the content of tweets.We pose identifying hateful users as a binary classification problem. We demonstrate the advantage of connected vs unconnected data in a semi-supervised setting with few training examples.For connected data, we use Graph Neural Network methods, GCN [2], GAT [3], and GraphSAGE [4] as implemented in the `stellargraph` library. We pose the problem of identifying hateful tweeter users as node attribute inference in graphs.**References**1. "Like Sheep Among Wolves": Characterizing Hateful Users on Twitter. M. H. Ribeiro, P. H. Calais, Y. A. Santos, V. A. F. Almeida, and W. Meira Jr. arXiv preprint arXiv:1801.00317 (2017).2. Semi-Supervised Classification with Graph Convolutional Networks. T. Kipf, M. Welling. ICLR 2017. arXiv:1609.02907 3. Graph Attention Networks. P. Velickovic et al. ICLR 20184. Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec arXiv:1706.02216 [cs.SI], 2017. 
Run the master version of this notebook: ###Code # install StellarGraph if running on Google Colab import sys if 'google.colab' in sys.modules: %pip install -q stellargraph[demos] import networkx as nx import pandas as pd import numpy as np import seaborn as sns import itertools import os from sklearn.decomposition import PCA from sklearn.manifold import TSNE from sklearn.linear_model import LogisticRegressionCV import stellargraph as sg from stellargraph.mapper import GraphSAGENodeGenerator, FullBatchNodeGenerator from stellargraph.layer import GraphSAGE, GCN, GAT from stellargraph import globalvar from tensorflow.keras import layers, optimizers, losses, metrics, Model, models from sklearn import preprocessing, feature_extraction from sklearn.model_selection import train_test_split from sklearn import metrics import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline ###Output _____no_output_____ ###Markdown Loading the data **Downloading the dataset:**The dataset for this demo was published in [1] and it is freely available to download from Kaggle [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter/home).The following is the description of the datasets:>This dataset contains a network of 100k users, out of which ~5k were annotated as hateful or>not. For each user, several content-related, network-related and activity related features>were provided. Additional files of hateful lexicon can be found [here]( https://github.com/manoelhortaribeiro/HatefulUsersTwitter/tree/master/data/extra)Download the dataset and then set the `data_dir` variable to point to the download location. ###Code data_dir = os.path.expanduser("~/data/hateful-twitter-users") ###Output _____no_output_____ ###Markdown First load and prepare the node featuresEach node in the graph is associated with a large number of features (also referred to as attributes). The list of features is given [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter). We repeated here for convenience.hate :("hateful"|"normal"|"other") if user was annotated as hateful, normal, or not annotated. (is_50|is_50_2) :bool whether user was deleted up to 12/12/17 or 14/01/18. (is_63|is_63_2) :bool whether user was suspended up to 12/12/17 or 14/01/18. (hate|normal)_neigh :bool is the user on the neighborhood of a (hateful|normal) user? [c_] (statuses|follower|followees|favorites)_count :int number of (tweets|follower|followees|favorites) a user has. [c_] listed_count:int number of lists a user is in. [c_] (betweenness|eigenvector|in_degree|outdegree) :float centrality measurements for each user in the retweet graph. [c_] *_empath :float occurrences of empath categories in the users latest 200 tweets. [c_] *_glove :float glove vector calculated for users latest 200 tweets. [c_] (sentiment|subjectivity) :float average sentiment and subjectivity of users tweets. [c_] (time_diff|time_diff_median) :float average and median time difference between tweets. [c_] (tweet|retweet|quote) number :float percentage of direct tweets, retweets and quotes of an user. [c_] (number urls|number hashtags|baddies|mentions) :float number of bad words|mentions|urls|hashtags per tweet in average. [c_] status length :float average status length. hashtags :string all hashtags employed by the user separated by spaces. **Notice** that c_ are attributes calculated for the 1-neighborhood of a user in the retweet network (averaged out). First, we are going to load the user features and prepare them for machine learning. 
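Since the file has more than a thousand columns, it can be handy to peek at the header alone before loading everything; a small sketch, assuming the same `data_dir` and file name:
###Code
# Sketch: read only the header row to inspect column names cheaply.
header_only = pd.read_csv(
    os.path.join(data_dir, "users_neighborhood_anon.csv"), nrows=0
)
print(len(header_only.columns), list(header_only.columns[:10]))
###Output
_____no_output_____
###Markdown
Now load the full table.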
###Code users_feat = pd.read_csv(os.path.join(data_dir, "users_neighborhood_anon.csv")) users_feat.head() ###Output _____no_output_____ ###Markdown Let's have a look at the distribution of hateful, normal (not hateful), and other (unknown) users in the dataset ###Code print("Initial hateful/normal users distribution") print(users_feat.shape) print(users_feat.hate.value_counts()) ###Output Initial hateful/normal users distribution (100386, 1039) other 95415 normal 4427 hateful 544 Name: hate, dtype: int64 ###Markdown There is a clear imbalance on the number of users tagged as hateful vs normal and unknown. Data cleaning and preprocessing The dataset as given includes a large number of graph related features that are manually extracted. Since we are going to employ modern graph neural networks methods for classification, we are going to drop these manually engineered features. The power of Graph Neural Networks stems from their ability to learn useful graph-related features eliminating the need for manual feature engineering. ###Code def data_cleaning(feat): feat = feat.drop(columns=["hate_neigh", "normal_neigh"]) # Convert target values in hate column from strings to integers (0,1,2) feat["hate"] = np.where( feat["hate"] == "hateful", 1, np.where(feat["hate"] == "normal", 0, 2) ) # missing information number_of_missing = feat.isnull().sum() number_of_missing[number_of_missing != 0] # Replace NA with 0 feat.fillna(0, inplace=True) # droping info about suspension and deletion as it is should not be use din the predictive model feat.drop(feat.columns[feat.columns.str.contains("is_")], axis=1, inplace=True) # drop glove features feat.drop(feat.columns[feat.columns.str.contains("_glove")], axis=1, inplace=True) # drop c_ features feat.drop(feat.columns[feat.columns.str.contains("c_")], axis=1, inplace=True) # drop sentiment features for now feat.drop(feat.columns[feat.columns.str.contains("sentiment")], axis=1, inplace=True) # drop hashtag feature feat.drop(["hashtags"], axis=1, inplace=True) # Drop centrality based measures feat.drop( columns=["betweenness", "eigenvector", "in_degree", "out_degree"], inplace=True ) feat.drop(columns=["created_at"], inplace=True) return feat node_data = data_cleaning(users_feat) ###Output _____no_output_____ ###Markdown Of the original **1037** node features, we are keeping only **204** that are based on a user's attributes and tweet lexicon. We have removed any manually engineered graph features since the graph neural network algorithms we are going to use will automatically determine the best features to use during training. ###Code node_data.shape node_data.head() ###Output _____no_output_____ ###Markdown The continous features in our dataset have distributions with very long tails. We apply normalization to correct for this. ###Code # Ignore the first two columns because those are user_id and hate (the target variable) df_values = node_data.iloc[:, 2:].values pt = preprocessing.PowerTransformer(method="yeo-johnson", standardize=True) df_values_log = pt.fit_transform(df_values) ###Output _____no_output_____ ###Markdown Let's have a look at one of the normalized features before and after the power transform was applied.The feature we are going to look at is a user's number of followers. 
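As a side note, the fitted `pt` transformer can also be reused on rows that arrive later, or inverted to map values back to the original scale; a minimal sketch:
###Code
# Sketch: apply the already-fitted PowerTransformer to a few rows and
# recover the original scale with the inverse transform.
transformed_rows = pt.transform(df_values[:5])
recovered_rows = pt.inverse_transform(transformed_rows)
###Output
_____no_output_____
###Markdown
Back to the followers feature, before and after the transform.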
###Code sns_rc = {"lines.linewidth": 3, "figure.figsize": (12, 6)} sns.set_context("paper", rc=sns_rc) sns.set_style("whitegrid", {"axes.grid": False}) sns.kdeplot(df_values[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers, before Power Transform", fontsize=18) sns.kdeplot(df_values_log[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers after Power Transform", fontsize=18) ###Output _____no_output_____ ###Markdown Feature normalization looks like it is doing the right thing as the raw features have long tails that are eliminated after applying the power transform. So let us use the normalized features from now on. ###Code node_data.iloc[:, 2:] = df_values_log # Set the dataframe index to be the same as the user_id and drop the user_id columns node_data.index = node_data.index.map(str) node_data.drop(columns=["user_id"], inplace=True) ###Output _____no_output_____ ###Markdown Node features are now ready for machine learning. ###Code node_data.head() ###Output _____no_output_____ ###Markdown Next load the graphNow that we have the node features prepared for machine learning, let us load the retweet graph. ###Code g_nx = nx.read_edgelist(path=os.path.expanduser(os.path.join(data_dir, "users.edges"))) g_nx.number_of_nodes(), g_nx.number_of_edges() ###Output _____no_output_____ ###Markdown The graph has just over 100k nodes and approximately 2.2m edges.We aim to train a graph neural network model that will predict the "hate"attribute on the nodes.For computation convenience, we have mapped the target labels **normal**, **hateful**, and **other** to the numeric values **0**, **1**, and **2** respectively. ###Code print(set(node_data["hate"])) ###Output {0, 1, 2} ###Markdown Splitting the data For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to split our data into training and test sets.The total number of annotated nodes is very small when compared to the total number of nodes in the graph. We are only going to use 15% of the annotated nodes for training and the remaining 85% of nodes for testing.First, we are going to select the subset of nodes that are annotated as hateful or normal. These will be the nodes that have 'hate' values that are either 0 or 1. ###Code # choose the nodes annotated with normal or hateful classes annotated_users = node_data[node_data["hate"] != 2] annotated_users.head() annotated_users.shape annotated_user_features = annotated_users.drop(columns=["hate"]) annotated_user_targets = annotated_users[["hate"]] ###Output _____no_output_____ ###Markdown There are 4971 annoted nodes out of a possible, approximately, 100k nodes. 
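Given the heavy imbalance between hateful and normal labels, a stratified split that preserves the class ratio in both portions is also an option; a small sketch (the cells below keep the plain random split):
###Code
# Sketch: a stratified 15%/85% split, keeping the class ratio in both parts.
# Not used below; shown only as an alternative.
tr_X, te_X, tr_y, te_y = train_test_split(
    annotated_user_features,
    annotated_user_targets,
    test_size=0.85,
    random_state=101,
    stratify=annotated_user_targets["hate"],
)
###Output
_____no_output_____
###Markdown
Now check the class counts and perform the split used in the rest of the notebook.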
###Code print(annotated_user_targets.hate.value_counts()) # split the data train_data, test_data, train_targets, test_targets = train_test_split( annotated_user_features, annotated_user_targets, test_size=0.85, random_state=101 ) train_targets = train_targets.values test_targets = test_targets.values print("Sizes and class distributions for train/test data") print("Shape train_data {}".format(train_data.shape)) print("Shape test_data {}".format(test_data.shape)) print( "Train data number of 0s {} and 1s {}".format( np.sum(train_targets == 0), np.sum(train_targets == 1) ) ) print( "Test data number of 0s {} and 1s {}".format( np.sum(test_targets == 0), np.sum(test_targets == 1) ) ) train_targets.shape, test_targets.shape train_data.shape, test_data.shape ###Output _____no_output_____ ###Markdown We are going to use 745 nodes for training and 4226 nodes for testing. ###Code # choosing features to assign to a graph, excluding target variable node_features = node_data.drop(columns=["hate"]) ###Output _____no_output_____ ###Markdown Dealing with imbalanced dataBecause the training data exhibit high imbalance, we introduce class weights. ###Code from sklearn.utils.class_weight import compute_class_weight class_weights = compute_class_weight( "balanced", np.unique(train_targets), train_targets[:, 0] ) train_class_weights = dict(zip(np.unique(train_targets), class_weights)) train_class_weights ###Output _____no_output_____ ###Markdown Our data is now ready for machine learning.Node features are stored in the Pandas DataFrame `node_features`.The graph in networkx format is stored in the variable `g_nx`. Specify global parametersHere we specify some parameters that control the type of model we are going to use. For example, we specify the base model type, e.g., GCN, GraphSAGE, etc, as well as model-specific parameters. ###Code model_type = "graphsage" # Can be either gcn, gat, or graphsage if model_type == "graphsage": # For GraphSAGE model batch_size = 50 num_samples = [20, 10] epochs = 30 # The number of training epochs elif model_type == "gcn": # For GCN model epochs = 20 # The number of training epochs elif model_type == "gat": # For GAT model layer_sizes = [8, 1] attention_heads = 8 epochs = 20 # The number of training epochs ###Output _____no_output_____ ###Markdown Creating the base graph machine learning model in Keras Now create a `StellarGraph` object from the `NetworkX` graph and the node features and targets. It is `StellarGraph` objects that we use in this library to perform machine learning tasks on. ###Code G = sg.StellarGraph.from_networkx(g_nx, node_features=node_features) ###Output _____no_output_____ ###Markdown To feed data from the graph to the Keras model we need a generator. The generators are specialized to the model and the learning task. For training we map only the training nodes returned from our splitter and the target values. ###Code if model_type == "graphsage": generator = GraphSAGENodeGenerator(G, batch_size, num_samples) train_gen = generator.flow(train_data.index, train_targets, shuffle=True) elif model_type == "gcn": generator = FullBatchNodeGenerator(G, method="gcn", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) elif model_type == "gat": generator = FullBatchNodeGenerator(G, method="gat", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) ###Output _____no_output_____ ###Markdown Next we create the GNN model. We need to specify model-specific parameters based on whether we want to use GCN, GAT, or GraphSAGE. 
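Before doing that, a one-line summary of the `StellarGraph` object is a quick way to confirm the graph and node features were assembled as expected; a small sketch:
###Code
# Sketch: print a summary of the StellarGraph object built above
# (node and edge counts, feature sizes).
print(G.info())
###Output
_____no_output_____
###Markdown
Now build the chosen model.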
###Code if model_type == "graphsage": base_model = GraphSAGE( layer_sizes=[32, 32], generator=generator, bias=True, dropout=0.5, ) x_inp, x_out = base_model.default_model(flatten_output=True) prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gcn": base_model = GCN( layer_sizes=[32, 16], generator=generator, bias=True, dropout=0.5, activations=["elu", "elu"], ) x_inp, x_out = base_model.build() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gat": base_model = GAT( layer_sizes=layer_sizes, attn_heads=attention_heads, generator=generator, bias=True, in_dropout=0.5, attn_dropout=0.5, activations=["elu", "sigmoid"], normalize=None, ) x_inp, prediction = base_model.build() ###Output _____no_output_____ ###Markdown Create a Keras model Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `base_model` and outputs being the predictions from the softmax layer. ###Code model = Model(inputs=x_inp, outputs=prediction) ###Output _____no_output_____ ###Markdown We compile our Keras model to use the `Adam` optimiser and the binary cross entropy loss. ###Code model.compile( optimizer=optimizers.Adam(lr=0.005), loss=losses.binary_crossentropy, metrics=["acc"], ) model ###Output _____no_output_____ ###Markdown Train the model, keeping track of its loss and accuracy on the training set, and its performance on the test set during the training. We don't use the test set during training but only for measuring the trained model's generalization performance. ###Code test_gen = generator.flow(test_data.index, test_targets) ###Output _____no_output_____ ###Markdown Now we can train the model by calling the `fit` method. ###Code class_weight = None if model_type == "graphsage": class_weight = train_class_weights history = model.fit( train_gen, epochs=epochs, validation_data=test_gen, verbose=0, shuffle=False, class_weight=class_weight, ) sg.utils.plot_history(history) ###Output _____no_output_____ ###Markdown Model Evaluation Now we have trained the model, let's evaluate it on the test set.We are going to consider 4 evaluation metrics calculated on the test set: Accuracy, Area Under the ROC curve (AU-ROC), the ROC curve, and the confusion table. Accuracy ###Code test_metrics = model.evaluate(test_gen) print("\nTest Set Metrics:") for name, val in zip(model.metrics_names, test_metrics): print("\t{}: {:0.4f}".format(name, val)) ###Output Test Set Metrics: loss: 0.3424 acc: 0.8874 ###Markdown AU-ROCLet's use the trained GNN model to make a prediction for each node in the graph.Then, select only the predictions for the nodes in the test set and calculate the AU-ROC as another performance metric in addition to the accuracy shown above. ###Code all_nodes = node_data.index all_gen = generator.flow(all_nodes) all_predictions = model.predict(all_gen).squeeze()[..., np.newaxis] all_predictions.shape all_predictions_df = pd.DataFrame(all_predictions, index=node_data.index) ###Output _____no_output_____ ###Markdown Let's extract the predictions for the test data only. ###Code test_preds = all_predictions_df.loc[test_data.index, :] test_preds.shape ###Output _____no_output_____ ###Markdown The predictions are the probability of the true class that in this case is the probability of a user being hateful. 
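One quick sanity check is whether these scores already separate the two classes on average; a small sketch assuming `test_preds` and `test_targets` from above:
###Code
# Sketch: mean predicted hateful-probability for each true class in the test set.
check_df = pd.DataFrame(
    {"score": test_preds.values.flatten(), "true": test_targets[:, 0]}
)
print(check_df.groupby("true")["score"].mean())
###Output
_____no_output_____
###Markdown
Now inspect the raw scores and threshold them into hard classes.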
###Code test_preds.head() test_predictions = test_preds.values test_predictions_class = ((test_predictions > 0.5) * 1).flatten() test_df = pd.DataFrame( { "Predicted_score": test_predictions.flatten(), "Predicted_class": test_predictions_class, "True": test_targets[:, 0], } ) roc_auc = metrics.roc_auc_score(test_df["True"].values, test_df["Predicted_score"].values) print("The AUC on test set:\n") print(roc_auc) ###Output The AUC on test set: 0.8845494008100929 ###Markdown Confusion table ###Code pd.crosstab(test_df["True"], test_df["Predicted_class"]) ###Output _____no_output_____ ###Markdown ROC curve ###Code fpr, tpr, thresholds = metrics.roc_curve( test_df["True"], test_df["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Visualisation of node embeddingsEvaluate node embeddings as activations of the output of one of the graph convolutional or aggregation layers in the Keras model, and visualise them, coloring nodes by their subject label.You can find the index of the layer of interest by calling `model.layers`. First, create a Keras model for calculating the embeddings ###Code model.layers if model_type == "graphsage": # For GraphSAGE, we are going to use the output activations # of the second GraphSAGE layer as the node embeddings # x_inp, prediction emb_model = Model(inputs=x_inp, outputs=model.layers[-4].output) emb = emb_model.predict(generator=all_gen,) elif model_type == "gcn": # For GCN, we are going to use the output activations of # the second GCN layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict(generator=all_gen) elif model_type == "gat": # For GAT, we are going to use the output activations of the # first Graph Attention layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict(generator=all_gen) emb.shape emb = emb.squeeze() if model_type == "graphsage": emb_all_df = pd.DataFrame(emb, index=node_data.index) elif model_type == "gcn" or model_type == "gat": emb_all_df = pd.DataFrame(emb, index=G.nodes()) ###Output _____no_output_____ ###Markdown Select the embeddings for the test set. We are only going to visualise the test set embeddings. 
###Code emb_test = emb_all_df.loc[test_data.index, :] ###Output _____no_output_____ ###Markdown Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their subject label ###Code X = emb_test y = test_targets X.shape transform = TSNE # or use PCA trans = transform(n_components=2) emb_transformed = pd.DataFrame(trans.fit_transform(X), index=test_data.index) emb_transformed["label"] = y alpha = 0.7 fig, ax = plt.subplots(figsize=(14, 8,)) ax.scatter( emb_transformed[0], emb_transformed[1], c=emb_transformed["label"].astype("category"), cmap="jet", alpha=alpha, ) ax.set(xlabel="$X_1$", ylabel="$X_2$") plt.title( "{} visualization of embeddings for tweeter dataset".format(transform.__name__), fontsize=24, ) plt.show() ###Output _____no_output_____ ###Markdown The node embeddings shown above indicate that the majority of hateful users tend to cluster together. However, some normal users are also in the same neighbourhood and these will be difficult to distinguish from hateful ones. Similarly, there is a small number of hateful users dispersed among normal users and these will also be difficult classify correctly. Predictions using Logistic RegressionFinally, we train a Logistic Regression model on the same train and test data but this time ignoring the graph structure and focusing entirely on the node features. The variables `train_data`, `test_data`, `train_targets`, and `test_targets`, hold the data we need to train the Logistic Regression classifier. ###Code lr = LogisticRegressionCV( cv=5, class_weight=class_weight, max_iter=10000 ) # Let's use the default parameters lr.fit(train_data, train_targets.ravel()) ###Output _____no_output_____ ###Markdown We can now use the trained model to predict the test data ###Code test_preds_lr = lr.predict_proba(test_data) test_preds_lr.shape ###Output _____no_output_____ ###Markdown Accuracy ###Code lr.score(test_data, test_targets) ###Output _____no_output_____ ###Markdown Calculate AU-ROC metric ###Code test_predictions_class_lr = ((test_preds_lr[:, 1] > 0.5) * 1).flatten() test_df_lr = pd.DataFrame( { "Predicted_score": test_preds_lr[:, 1].flatten(), "Predicted_class": test_predictions_class_lr, "True": test_targets[:, 0], } ) roc_auc_lr = metrics.roc_auc_score( test_df_lr["True"].values, test_df_lr["Predicted_score"].values ) print("The AUC on test set:\n") print(roc_auc_lr) ###Output The AUC on test set: 0.8090002175324247 ###Markdown The confusion table ###Code pd.crosstab(test_df_lr["True"], test_df_lr["Predicted_class"]) ###Output _____no_output_____ ###Markdown The ROC curve ###Code fpr_lr, tpr_lr, thresholds_lr = metrics.roc_curve( test_df_lr["True"], test_df_lr["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr_lr, tpr_lr, color="darkorange", lw=lw, label="LR ROC curve (area = %0.2f)" % roc_auc_lr, ) plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Let's have a closer look at the True Positive Rate for the GNN and Logistic Regression models at 2% False Positive Rate. 
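The same interpolation works at any false-positive budget; a small sketch that compares a few operating points using the `fpr`/`tpr` and `fpr_lr`/`tpr_lr` arrays computed above:
###Code
# Sketch: TPR of each model at several false-positive-rate budgets.
for budget in (0.01, 0.02, 0.05):
    print(
        "FPR={:.2f}: GNN TPR={:.3f}, LR TPR={:.3f}".format(
            budget, np.interp(budget, fpr, tpr), np.interp(budget, fpr_lr, tpr_lr)
        )
    )
###Output
_____no_output_____
###Markdown
The cell below reports the 2% operating point.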
###Code print( "At 2% FPR, GNN TPR={:.3f}, LR TPR={:.3f}".format( np.interp(0.02, fpr, tpr), np.interp(0.02, fpr_lr, tpr_lr) ) ) ###Output At 2% FPR, GNN TPR=0.378, LR TPR=0.253 ###Markdown Detection of Twitter users who use hateful lexicon using graph machine learning with StellargraphWe consider the use-case of identifying hateful users on Twitter motivated by the work in [1] and using the dataset also published in [1]. Classification is based on a graph based on users' retweets and attributes as related to their account activity, and the content of tweets.We pose identifying hateful users as a binary classification problem. We demonstrate the advantage of connected vs unconnected data in a semi-supervised setting with few training examples.For connected data, we use Graph Neural Network methods, GCN [2], GAT [3], and GraphSAGE [4] as implemented in the `stellargraph` library. We pose the problem of identifying hateful tweeter users as node attribute inference in graphs.**References**1. "Like Sheep Among Wolves": Characterizing Hateful Users on Twitter. M. H. Ribeiro, P. H. Calais, Y. A. Santos, V. A. F. Almeida, and W. Meira Jr. arXiv preprint arXiv:1801.00317 (2017).2. Semi-Supervised Classification with Graph Convolutional Networks. T. Kipf, M. Welling. ICLR 2017. arXiv:1609.02907 3. Graph Attention Networks. P. Velickovic et al. ICLR 20184. Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec arXiv:1706.02216 [cs.SI], 2017. ###Code import networkx as nx import pandas as pd import numpy as np import seaborn as sns import itertools import os from sklearn.decomposition import PCA from sklearn.manifold import TSNE from sklearn.linear_model import LogisticRegressionCV import stellargraph as sg from stellargraph.mapper import GraphSAGENodeGenerator, FullBatchNodeGenerator from stellargraph.layer import GraphSAGE, GCN, GAT from stellargraph import globalvar from keras import layers, optimizers, losses, metrics, Model, models from sklearn import preprocessing, feature_extraction from sklearn.model_selection import train_test_split from sklearn import metrics import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline import matplotlib.pyplot as plt %matplotlib inline def remove_prefix(text, prefix): return text[text.startswith(prefix) and len(prefix):] def plot_history(history): metrics = sorted(set([remove_prefix(m, "val_") for m in list(history.history.keys())])) for m in metrics: # summarize history for metric m plt.plot(history.history[m]) plt.plot(history.history['val_' + m]) plt.title(m, fontsize=18) plt.ylabel(m, fontsize=18) plt.xlabel('epoch', fontsize=18) plt.legend(['train', 'validation'], loc='best') plt.show() ###Output _____no_output_____ ###Markdown Loading the data **Downloading the dataset:**The dataset for this demo was published in [1] and it is freely available to download from Kaggle [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter/home).The following is the description of the datasets:>This dataset contains a network of 100k users, out of which ~5k were annotated as hateful or>not. For each user, several content-related, network-related and activity related features>were provided. Additional files of hateful lexicon can be found [here]( https://github.com/manoelhortaribeiro/HatefulUsersTwitter/tree/master/data/extra)Download the dataset and then set the `data_dir` variable to point to the download location. 
###Code data_dir = os.path.expanduser("~/data/hateful-twitter-users") ###Output _____no_output_____ ###Markdown First load and prepare the node featuresEach node in the graph is associated with a large number of features (also referred to as attributes). The list of features is given [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter). We repeated here for convenience.hate :("hateful"|"normal"|"other") if user was annotated as hateful, normal, or not annotated. (is_50|is_50_2) :bool whether user was deleted up to 12/12/17 or 14/01/18. (is_63|is_63_2) :bool whether user was suspended up to 12/12/17 or 14/01/18. (hate|normal)_neigh :bool is the user on the neighborhood of a (hateful|normal) user? [c_] (statuses|follower|followees|favorites)_count :int number of (tweets|follower|followees|favorites) a user has. [c_] listed_count:int number of lists a user is in. [c_] (betweenness|eigenvector|in_degree|outdegree) :float centrality measurements for each user in the retweet graph. [c_] *_empath :float occurrences of empath categories in the users latest 200 tweets. [c_] *_glove :float glove vector calculated for users latest 200 tweets. [c_] (sentiment|subjectivity) :float average sentiment and subjectivity of users tweets. [c_] (time_diff|time_diff_median) :float average and median time difference between tweets. [c_] (tweet|retweet|quote) number :float percentage of direct tweets, retweets and quotes of an user. [c_] (number urls|number hashtags|baddies|mentions) :float number of bad words|mentions|urls|hashtags per tweet in average. [c_] status length :float average status length. hashtags :string all hashtags employed by the user separated by spaces. **Notice** that c_ are attributes calculated for the 1-neighborhood of a user in the retweet network (averaged out). First, we are going to load the user features and prepare them for machine learning. ###Code users_feat = pd.read_csv(os.path.join(data_dir, 'users_neighborhood_anon.csv')) users_feat.head() ###Output _____no_output_____ ###Markdown Let's have a look at the distribution of hateful, normal (not hateful), and other (unknown) users in the dataset ###Code print("Initial hateful/normal users distribution") print(users_feat.shape) print(users_feat.hate.value_counts()) ###Output Initial hateful/normal users distribution (100386, 1039) other 95415 normal 4427 hateful 544 Name: hate, dtype: int64 ###Markdown There is a clear imbalance on the number of users tagged as hateful vs normal and unknown. Data cleaning and preprocessing The dataset as given includes a large number of graph related features that are manually extracted. Since we are going to employ modern graph neural networks methods for classification, we are going to drop these manually engineered features. The power of Graph Neural Networks stems from their ability to learn useful graph-related features eliminating the need for manual feature engineering. 
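Before dropping them, it can be useful to see how many columns each pattern in the cleaning step matches; a small sketch assuming `users_feat` from above:
###Code
# Sketch: count the columns that the cleaning function below will remove
# for each name pattern it uses.
for pattern in ("c_", "_glove", "is_", "sentiment"):
    print(pattern, users_feat.columns.str.contains(pattern).sum())
###Output
_____no_output_____
###Markdown
Now apply the cleaning.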
###Code def data_cleaning(feat): feat = feat.drop(columns=["hate_neigh", "normal_neigh"]) # Convert target values in hate column from strings to integers (0,1,2) feat['hate'] = np.where(feat['hate']=='hateful', 1, np.where(feat['hate']=='normal', 0, 2)) # missing information number_of_missing = feat.isnull().sum() number_of_missing[number_of_missing!=0] # Replace NA with 0 feat.fillna(0, inplace=True) # droping info about suspension and deletion as it is should not be use din the predictive model feat.drop(feat.columns[feat.columns.str.contains("is_")], axis=1, inplace=True) # drop glove features feat.drop(feat.columns[feat.columns.str.contains("_glove")], axis=1, inplace=True) # drop c_ features feat.drop(feat.columns[feat.columns.str.contains("c_")], axis=1, inplace=True) # drop sentiment features for now feat.drop(feat.columns[feat.columns.str.contains("sentiment")], axis=1, inplace=True) # drop hashtag feature feat.drop(['hashtags'], axis=1, inplace=True) # Drop centrality based measures feat.drop(columns=['betweenness', 'eigenvector', 'in_degree', 'out_degree'], inplace=True) feat.drop(columns=['created_at'], inplace=True) return feat node_data = data_cleaning(users_feat) ###Output _____no_output_____ ###Markdown Of the original **1037** node features, we are keeping only **204** that are based on a user's attributes and tweet lexicon. We have removed any manually engineered graph features since the graph neural network algorithms we are going to use will automatically determine the best features to use during training. ###Code node_data.shape node_data.head() ###Output _____no_output_____ ###Markdown The continous features in our dataset have distributions with very long tails. We apply normalization to correct for this. ###Code # Ignore the first two columns because those are user_id and hate (the target variable) df_values = node_data.iloc[:, 2:].values pt = preprocessing.PowerTransformer(method='yeo-johnson', standardize=True) df_values_log = pt.fit_transform(df_values) ###Output _____no_output_____ ###Markdown Let's have a look at one of the normalized features before and after the power transform was applied.The feature we are going to look at is a user's number of followers. ###Code sns_rc = {'lines.linewidth': 3, 'figure.figsize':(12,6)} sns.set_context("paper", rc = sns_rc) sns.set_style("whitegrid", {'axes.grid' : False}) sns.kdeplot(df_values[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers, before Power Transform", fontsize=18) sns.kdeplot(df_values_log[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers after Power Transform", fontsize=18) ###Output _____no_output_____ ###Markdown Feature normalization looks like it is doing the right thing as the raw features have long tails that are eliminated after applying the power transform. So let us use the normalized features from now on. ###Code node_data.iloc[:, 2:] = df_values_log # Set the dataframe index to be the same as the user_id and drop the user_id columns node_data.index = node_data.index.map(str) node_data.drop(columns=['user_id'], inplace=True) ###Output _____no_output_____ ###Markdown Node features are now ready for machine learning. ###Code node_data.head() ###Output _____no_output_____ ###Markdown Next load the graphNow that we have the node features prepared for machine learning, let us load the retweet graph. 
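The graph is stored as a plain edge list; a small sketch to peek at the first few lines of the file, assuming the same `data_dir`:
###Code
# Sketch: show the first three edges of the retweet graph as raw text.
with open(os.path.expanduser(os.path.join(data_dir, "users.edges"))) as edge_file:
    for _ in range(3):
        print(edge_file.readline().strip())
###Output
_____no_output_____
###Markdown
Now load it as a NetworkX graph.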
###Code g_nx = nx.read_edgelist(path=os.path.expanduser(os.path.join(data_dir, "users.edges"))) g_nx.number_of_nodes(), g_nx.number_of_edges() ###Output _____no_output_____ ###Markdown The graph has just over 100k nodes and approximately 2.2m edges.We aim to train a graph neural network model that will predict the "hate"attribute on the nodes.For computation convenience, we have mapped the target labels **normal**, **hateful**, and **other** to the numeric values **0**, **1**, and **2** respectively. ###Code print(set(node_data["hate"])) ###Output {0, 1, 2} ###Markdown Splitting the data For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to split our data into training and test sets.The total number of annotated nodes is very small when compared to the total number of nodes in the graph. We are only going to use 15% of the annotated nodes for training and the remaining 85% of nodes for testing.First, we are going to select the subset of nodes that are annotated as hateful or normal. These will be the nodes that have 'hate' values that are either 0 or 1. ###Code # choose the nodes annotated with normal or hateful classes annotated_users = node_data[node_data['hate']!=2] annotated_users.head() annotated_users.shape annotated_user_features = annotated_users.drop(columns=['hate']) annotated_user_targets = annotated_users[['hate']] ###Output _____no_output_____ ###Markdown There are 4971 annoted nodes out of a possible, approximately, 100k nodes. ###Code print(annotated_user_targets.hate.value_counts()) # split the data train_data, test_data, train_targets, test_targets = train_test_split(annotated_user_features, annotated_user_targets, test_size=0.85, random_state=101) train_targets = train_targets.values test_targets = test_targets.values print("Sizes and class distributions for train/test data") print("Shape train_data {}".format(train_data.shape)) print("Shape test_data {}".format(test_data.shape)) print("Train data number of 0s {} and 1s {}".format(np.sum(train_targets==0), np.sum(train_targets==1))) print("Test data number of 0s {} and 1s {}".format(np.sum(test_targets==0), np.sum(test_targets==1))) train_targets.shape, test_targets.shape train_data.shape, test_data.shape ###Output _____no_output_____ ###Markdown We are going to use 745 nodes for training and 4226 nodes for testing. ###Code # choosing features to assign to a graph, excluding target variable node_features = node_data.drop(columns=['hate']) ###Output _____no_output_____ ###Markdown Dealing with imbalanced dataBecause the training data exhibit high imbalance, we introduce class weights. ###Code from sklearn.utils.class_weight import compute_class_weight class_weights = compute_class_weight('balanced', np.unique(train_targets), train_targets[:,0]) train_class_weights = dict(zip(np.unique(train_targets), class_weights)) train_class_weights ###Output _____no_output_____ ###Markdown Our data is now ready for machine learning.Node features are stored in the Pandas DataFrame `node_features`.The graph in networkx format is stored in the variable `g_nx`. Specify global parametersHere we specify some parameters that control the type of model we are going to use. For example, we specify the base model type, e.g., GCN, GraphSAGE, etc, as well as model-specific parameters. 
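If you plan to try several configurations, collecting the settings in one place makes runs easier to compare; a small sketch of that pattern (the cell below keeps plain variables):
###Code
# Sketch: keep the run configuration in a single dictionary for easy logging.
# Values mirror the GraphSAGE settings used in the next cell.
run_config = {
    "model_type": "graphsage",
    "batch_size": 50,
    "num_samples": [20, 10],
    "epochs": 30,
}
###Output
_____no_output_____
###Markdown
Here are the parameters used in this notebook.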
###Code model_type = 'graphsage' # Can be either gcn, gat, or graphsage if model_type == "graphsage": # For GraphSAGE model batch_size = 50; num_samples = [20, 10] epochs = 30 # The number of training epochs elif model_type == "gcn": # For GCN model epochs = 20 # The number of training epochs elif model_type == "gat": # For GAT model layer_sizes = [8, 1] attention_heads = 8 epochs = 20 # The number of training epochs ###Output _____no_output_____ ###Markdown Creating the base graph machine learning model in Keras Now create a `StellarGraph` object from the `NetworkX` graph and the node features and targets. It is `StellarGraph` objects that we use in this library to perform machine learning tasks on. ###Code G = sg.StellarGraph(g_nx, node_features=node_features) ###Output _____no_output_____ ###Markdown To feed data from the graph to the Keras model we need a generator. The generators are specialized to the model and the learning task. For training we map only the training nodes returned from our splitter and the target values. ###Code if model_type == 'graphsage': generator = GraphSAGENodeGenerator(G, batch_size, num_samples) train_gen = generator.flow(train_data.index, train_targets, shuffle=True) elif model_type == 'gcn': generator = FullBatchNodeGenerator(G, method="gcn", sparse=True) train_gen = generator.flow(train_data.index, train_targets, ) elif model_type == 'gat': generator = FullBatchNodeGenerator(G, method="gat", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) ###Output _____no_output_____ ###Markdown Next we create the GNN model. We need to specify model-specific parameters based on whether we want to use GCN, GAT, or GraphSAGE. ###Code if model_type == 'graphsage': base_model = GraphSAGE( layer_sizes=[32, 32], generator=train_gen, bias=True, dropout=0.5, ) x_inp, x_out = base_model.default_model(flatten_output=True) prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == 'gcn': base_model = GCN( layer_sizes=[32, 16], generator = generator, bias=True, dropout=0.5, activations=["elu", "elu"] ) x_inp, x_out = base_model.node_model() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == 'gat': base_model = GAT( layer_sizes=layer_sizes, attn_heads=attention_heads, generator=generator, bias=True, in_dropout=0.5, attn_dropout=0.5, activations=["elu", "sigmoid"], normalize=None, ) x_inp, prediction = base_model.node_model() ###Output _____no_output_____ ###Markdown Create a Keras model Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `base_model` and outputs being the predictions from the softmax layer. ###Code model = Model(inputs=x_inp, outputs=prediction) ###Output _____no_output_____ ###Markdown We compile our Keras model to use the `Adam` optimiser and the binary cross entropy loss. ###Code model.compile( optimizer=optimizers.Adam(lr=0.005), loss=losses.binary_crossentropy, metrics=["acc"], ) model ###Output _____no_output_____ ###Markdown Train the model, keeping track of its loss and accuracy on the training set, and its performance on the test set during the training. We don't use the test set during training but only for measuring the trained model's generalization performance. ###Code test_gen = generator.flow(test_data.index, test_targets) ###Output _____no_output_____ ###Markdown Now we can train the model by calling the `fit_generator` method. 
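If you would like training to stop automatically once the validation loss stops improving, a Keras callback could be added; a minimal sketch (not used in the cell below):
###Code
# Sketch: early stopping on validation loss; it would be passed as
# callbacks=[early_stop] to fit_generator below.
from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss", patience=5)
###Output
_____no_output_____
###Markdown
Now train the model.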
###Code class_weight = None if model_type == 'graphsage': class_weight=train_class_weights history = model.fit_generator( train_gen, epochs=epochs, validation_data=test_gen, verbose=0, shuffle=False, class_weight=class_weight, ) plot_history(history) ###Output _____no_output_____ ###Markdown Model Evaluation Now we have trained the model, let's evaluate it on the test set.We are going to consider 4 evaluation metrics calculated on the test set: Accuracy, Area Under the ROC curve (AU-ROC), the ROC curve, and the confusion table. Accuracy ###Code test_metrics = model.evaluate_generator(test_gen) print("\nTest Set Metrics:") for name, val in zip(model.metrics_names, test_metrics): print("\t{}: {:0.4f}".format(name, val)) ###Output Test Set Metrics: loss: 0.3424 acc: 0.8874 ###Markdown AU-ROCLet's use the trained GNN model to make a prediction for each node in the graph.Then, select only the predictions for the nodes in the test set and calculate the AU-ROC as another performance metric in addition to the accuracy shown above. ###Code all_nodes = node_data.index all_gen = generator.flow(all_nodes) all_predictions = model.predict_generator(all_gen).squeeze()[..., np.newaxis] all_predictions.shape all_predictions_df = pd.DataFrame(all_predictions, index=node_data.index) ###Output _____no_output_____ ###Markdown Let's extract the predictions for the test data only. ###Code test_preds = all_predictions_df.loc[test_data.index, :] test_preds.shape ###Output _____no_output_____ ###Markdown The predictions are the probability of the true class that in this case is the probability of a user being hateful. ###Code test_preds.head() test_predictions = test_preds.values test_predictions_class = ((test_predictions>0.5)*1).flatten() test_df = pd.DataFrame({"Predicted_score": test_predictions.flatten(), "Predicted_class": test_predictions_class, "True": test_targets[:,0]}) roc_auc = metrics.roc_auc_score(test_df['True'].values, test_df['Predicted_score'].values) print("The AUC on test set:\n") print(roc_auc) ###Output The AUC on test set: 0.8845494008100929 ###Markdown Confusion table ###Code pd.crosstab(test_df['True'], test_df['Predicted_class']) ###Output _____no_output_____ ###Markdown ROC curve ###Code fpr, tpr, thresholds = metrics.roc_curve(test_df['True'], test_df['Predicted_score'], pos_label=1) plt.figure(figsize=(12,6,)) lw = 2 plt.plot(fpr, tpr, color='darkblue', lw=lw, label='GNN ROC curve (area = %0.2f)' % roc_auc) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate', fontsize=18) plt.ylabel('True Positive Rate', fontsize=18) plt.title('Receiver operating characteristic curve', fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Visualisation of node embeddingsEvaluate node embeddings as activations of the output of one of the graph convolutional or aggregation layers in the Keras model, and visualise them, coloring nodes by their subject label.You can find the index of the layer of interest by calling `model.layers`. 
First, create a Keras model for calculating the embeddings ###Code model.layers if model_type == 'graphsage': # For GraphSAGE, we are going to use the output activations # of the second GraphSAGE layer as the node embeddings # x_inp, prediction emb_model = Model(inputs=x_inp, outputs=model.layers[-4].output) emb = emb_model.predict_generator(generator=all_gen, ) elif model_type == 'gcn': # For GCN, we are going to use the output activations of # the second GCN layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict_generator(generator=all_gen) elif model_type == 'gat': # For GAT, we are going to use the output activations of the # first Graph Attention layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict_generator(generator=all_gen) emb.shape emb = emb.squeeze() if model_type == "graphsage": emb_all_df = pd.DataFrame(emb, index=node_data.index) elif model_type == "gcn" or model_type == "gat": emb_all_df = pd.DataFrame(emb, index=G.nodes()) ###Output _____no_output_____ ###Markdown Select the embeddings for the test set. We are only going to visualise the test set embeddings. ###Code emb_test = emb_all_df.loc[test_data.index, :] ###Output _____no_output_____ ###Markdown Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their subject label ###Code X = emb_test y = test_targets X.shape transform = TSNE # or use PCA trans = transform(n_components=2) emb_transformed = pd.DataFrame(trans.fit_transform(X), index=test_data.index) emb_transformed['label'] = y alpha = 0.7 fig, ax = plt.subplots(figsize=(14,8,)) ax.scatter(emb_transformed[0], emb_transformed[1], c=emb_transformed['label'].astype("category"), cmap="jet", alpha=alpha) ax.set(xlabel="$X_1$", ylabel="$X_2$") plt.title('{} visualization of embeddings for tweeter dataset'.format(transform.__name__), fontsize=24) plt.show() ###Output _____no_output_____ ###Markdown The node embeddings shown above indicate that the majority of hateful users tend to cluster together. However, some normal users are also in the same neighbourhood and these will be difficult to distinguish from hateful ones. Similarly, there is a small number of hateful users dispersed among normal users and these will also be difficult classify correctly. Predictions using Logistic RegressionFinally, we train a Logistic Regression model on the same train and test data but this time ignoring the graph structure and focusing entirely on the node features. The variables `train_data`, `test_data`, `train_targets`, and `test_targets`, hold the data we need to train the Logistic Regression classifier. 
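Another useful reference point is a trivial baseline that ignores the features entirely; a small sketch using the same variables (not part of the comparison below):
###Code
# Sketch: a label-distribution ("stratified") dummy baseline for comparison.
from sklearn.dummy import DummyClassifier

dummy = DummyClassifier(strategy="stratified", random_state=101)
dummy.fit(train_data, train_targets.ravel())
print(dummy.score(test_data, test_targets))
###Output
_____no_output_____
###Markdown
Now train the Logistic Regression classifier.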
###Code lr = LogisticRegressionCV(cv=5, class_weight=class_weight, max_iter=10000) # Let's use the default parameters lr.fit(train_data, train_targets.ravel()) ###Output _____no_output_____ ###Markdown We can now use the trained model to predict the test data ###Code test_preds_lr = lr.predict_proba(test_data) test_preds_lr.shape ###Output _____no_output_____ ###Markdown Accuracy ###Code lr.score(test_data, test_targets) ###Output _____no_output_____ ###Markdown Calculate AU-ROC metric ###Code test_predictions_class_lr = ((test_preds_lr[:, 1]>0.5)*1).flatten() test_df_lr = pd.DataFrame({"Predicted_score": test_preds_lr[:, 1].flatten(), "Predicted_class": test_predictions_class_lr, "True": test_targets[:,0]}) roc_auc_lr = metrics.roc_auc_score(test_df_lr['True'].values, test_df_lr['Predicted_score'].values) print("The AUC on test set:\n") print(roc_auc_lr) ###Output The AUC on test set: 0.8090002175324247 ###Markdown The confusion table ###Code pd.crosstab(test_df_lr['True'], test_df_lr['Predicted_class']) ###Output _____no_output_____ ###Markdown The ROC curve ###Code fpr_lr, tpr_lr, thresholds_lr = metrics.roc_curve(test_df_lr['True'], test_df_lr['Predicted_score'], pos_label=1) plt.figure(figsize=(12,6,)) lw = 2 plt.plot(fpr_lr, tpr_lr, color='darkorange', lw=lw, label='LR ROC curve (area = %0.2f)' % roc_auc_lr) plt.plot(fpr, tpr, color='darkblue', lw=lw, label='GNN ROC curve (area = %0.2f)' % roc_auc) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate', fontsize=18) plt.ylabel('True Positive Rate', fontsize=18) plt.title('Receiver operating characteristic curve', fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Let's have a closer look at the True Positive Rate for the GNN and Logistic Regression models at 2% False Positive Rate. ###Code print("At 2% FPR, GNN TPR={:.3f}, LR TPR={:.3f}".format(np.interp(0.02, fpr, tpr), np.interp(0.02, fpr_lr, tpr_lr))) ###Output At 2% FPR, GNN TPR=0.378, LR TPR=0.253 ###Markdown Detection of Twitter users who use hateful lexicon with graph machine learning Run the latest release of this notebook: We consider the use-case of identifying hateful users on Twitter motivated by the work in [1] and using the dataset also published in [1]. Classification is based on a graph based on users' retweets and attributes as related to their account activity, and the content of tweets.We pose identifying hateful users as a binary classification problem. We demonstrate the advantage of connected vs unconnected data in a semi-supervised setting with few training examples.For connected data, we use Graph Neural Network methods, GCN [2], GAT [3], and GraphSAGE [4] as implemented in the `stellargraph` library. We pose the problem of identifying hateful tweeter users as node attribute inference in graphs.**References**1. "Like Sheep Among Wolves": Characterizing Hateful Users on Twitter. M. H. Ribeiro, P. H. Calais, Y. A. Santos, V. A. F. Almeida, and W. Meira Jr. arXiv preprint arXiv:1801.00317 (2017).2. Semi-Supervised Classification with Graph Convolutional Networks. T. Kipf, M. Welling. ICLR 2017. arXiv:1609.02907 3. Graph Attention Networks. P. Velickovic et al. ICLR 20184. Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec arXiv:1706.02216 [cs.SI], 2017. 
###Code # install StellarGraph if running on Google Colab import sys if 'google.colab' in sys.modules: %pip install -q stellargraph[demos]==1.1.0b # verify that we're using the correct version of StellarGraph for this notebook import stellargraph as sg try: sg.utils.validate_notebook_version("1.1.0b") except AttributeError: raise ValueError( f"This notebook requires StellarGraph version 1.1.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>." ) from None import networkx as nx import pandas as pd import numpy as np import seaborn as sns import itertools import os from sklearn.decomposition import PCA from sklearn.manifold import TSNE from sklearn.linear_model import LogisticRegressionCV import stellargraph as sg from stellargraph.mapper import GraphSAGENodeGenerator, FullBatchNodeGenerator from stellargraph.layer import GraphSAGE, GCN, GAT from stellargraph import globalvar from tensorflow.keras import layers, optimizers, losses, metrics, Model, models from sklearn import preprocessing, feature_extraction from sklearn.model_selection import train_test_split from sklearn import metrics import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline ###Output _____no_output_____ ###Markdown Loading the data **Downloading the dataset:**The dataset for this demo was published in [1] and it is freely available to download from Kaggle [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter/home).The following is the description of the datasets:>This dataset contains a network of 100k users, out of which ~5k were annotated as hateful or>not. For each user, several content-related, network-related and activity related features>were provided. Additional files of hateful lexicon can be found [here]( https://github.com/manoelhortaribeiro/HatefulUsersTwitter/tree/master/data/extra)Download the dataset and then set the `data_dir` variable to point to the download location. ###Code data_dir = os.path.expanduser("~/data/hateful-twitter-users") ###Output _____no_output_____ ###Markdown First load and prepare the node featuresEach node in the graph is associated with a large number of features (also referred to as attributes). The list of features is given [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter). We repeated here for convenience.hate :("hateful"|"normal"|"other") if user was annotated as hateful, normal, or not annotated. (is_50|is_50_2) :bool whether user was deleted up to 12/12/17 or 14/01/18. (is_63|is_63_2) :bool whether user was suspended up to 12/12/17 or 14/01/18. (hate|normal)_neigh :bool is the user on the neighborhood of a (hateful|normal) user? [c_] (statuses|follower|followees|favorites)_count :int number of (tweets|follower|followees|favorites) a user has. [c_] listed_count:int number of lists a user is in. [c_] (betweenness|eigenvector|in_degree|outdegree) :float centrality measurements for each user in the retweet graph. [c_] *_empath :float occurrences of empath categories in the users latest 200 tweets. [c_] *_glove :float glove vector calculated for users latest 200 tweets. [c_] (sentiment|subjectivity) :float average sentiment and subjectivity of users tweets. [c_] (time_diff|time_diff_median) :float average and median time difference between tweets. [c_] (tweet|retweet|quote) number :float percentage of direct tweets, retweets and quotes of an user. 
[c_] (number urls|number hashtags|baddies|mentions) :float number of bad words|mentions|urls|hashtags per tweet in average. [c_] status length :float average status length. hashtags :string all hashtags employed by the user separated by spaces. **Notice** that c_ are attributes calculated for the 1-neighborhood of a user in the retweet network (averaged out). First, we are going to load the user features and prepare them for machine learning. ###Code users_feat = pd.read_csv(os.path.join(data_dir, "users_neighborhood_anon.csv")) users_feat.head() ###Output _____no_output_____ ###Markdown Let's have a look at the distribution of hateful, normal (not hateful), and other (unknown) users in the dataset ###Code print("Initial hateful/normal users distribution") print(users_feat.shape) print(users_feat.hate.value_counts()) ###Output Initial hateful/normal users distribution (100386, 1039) other 95415 normal 4427 hateful 544 Name: hate, dtype: int64 ###Markdown There is a clear imbalance on the number of users tagged as hateful vs normal and unknown. Data cleaning and preprocessing The dataset as given includes a large number of graph related features that are manually extracted. Since we are going to employ modern graph neural networks methods for classification, we are going to drop these manually engineered features. The power of Graph Neural Networks stems from their ability to learn useful graph-related features eliminating the need for manual feature engineering. ###Code def data_cleaning(feat): feat = feat.drop(columns=["hate_neigh", "normal_neigh"]) # Convert target values in hate column from strings to integers (0,1,2) feat["hate"] = np.where( feat["hate"] == "hateful", 1, np.where(feat["hate"] == "normal", 0, 2) ) # missing information number_of_missing = feat.isnull().sum() number_of_missing[number_of_missing != 0] # Replace NA with 0 feat.fillna(0, inplace=True) # droping info about suspension and deletion as it is should not be use din the predictive model feat.drop(feat.columns[feat.columns.str.contains("is_")], axis=1, inplace=True) # drop glove features feat.drop(feat.columns[feat.columns.str.contains("_glove")], axis=1, inplace=True) # drop c_ features feat.drop(feat.columns[feat.columns.str.contains("c_")], axis=1, inplace=True) # drop sentiment features for now feat.drop(feat.columns[feat.columns.str.contains("sentiment")], axis=1, inplace=True) # drop hashtag feature feat.drop(["hashtags"], axis=1, inplace=True) # Drop centrality based measures feat.drop( columns=["betweenness", "eigenvector", "in_degree", "out_degree"], inplace=True ) feat.drop(columns=["created_at"], inplace=True) return feat node_data = data_cleaning(users_feat) ###Output _____no_output_____ ###Markdown Of the original **1037** node features, we are keeping only **204** that are based on a user's attributes and tweet lexicon. We have removed any manually engineered graph features since the graph neural network algorithms we are going to use will automatically determine the best features to use during training. ###Code node_data.shape node_data.head() ###Output _____no_output_____ ###Markdown The continous features in our dataset have distributions with very long tails. We apply normalization to correct for this. 
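A quick way to quantify those long tails is to look at skewness before normalising; a small sketch assuming `node_data` from above:
###Code
# Sketch: the five most right-skewed features before the power transform.
skewness = node_data.iloc[:, 2:].skew().sort_values(ascending=False)
print(skewness.head())
###Output
_____no_output_____
###Markdown
Now apply the power transform.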
###Code # Ignore the first two columns because those are user_id and hate (the target variable) df_values = node_data.iloc[:, 2:].values pt = preprocessing.PowerTransformer(method="yeo-johnson", standardize=True) df_values_log = pt.fit_transform(df_values) ###Output _____no_output_____ ###Markdown Let's have a look at one of the normalized features before and after the power transform was applied.The feature we are going to look at is a user's number of followers. ###Code sns_rc = {"lines.linewidth": 3, "figure.figsize": (12, 6)} sns.set_context("paper", rc=sns_rc) sns.set_style("whitegrid", {"axes.grid": False}) sns.kdeplot(df_values[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers, before Power Transform", fontsize=18) sns.kdeplot(df_values_log[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers after Power Transform", fontsize=18) ###Output _____no_output_____ ###Markdown Feature normalization looks like it is doing the right thing as the raw features have long tails that are eliminated after applying the power transform. So let us use the normalized features from now on. ###Code node_data.iloc[:, 2:] = df_values_log # Set the dataframe index to be the same as the user_id and drop the user_id columns node_data.index = node_data.index.map(str) node_data.drop(columns=["user_id"], inplace=True) ###Output _____no_output_____ ###Markdown Node features are now ready for machine learning. ###Code node_data.head() ###Output _____no_output_____ ###Markdown Next load the graphNow that we have the node features prepared for machine learning, let us load the retweet graph. ###Code g_nx = nx.read_edgelist(path=os.path.expanduser(os.path.join(data_dir, "users.edges"))) g_nx.number_of_nodes(), g_nx.number_of_edges() ###Output _____no_output_____ ###Markdown The graph has just over 100k nodes and approximately 2.2m edges.We aim to train a graph neural network model that will predict the "hate"attribute on the nodes.For computation convenience, we have mapped the target labels **normal**, **hateful**, and **other** to the numeric values **0**, **1**, and **2** respectively. ###Code print(set(node_data["hate"])) ###Output {0, 1, 2} ###Markdown Splitting the data For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to split our data into training and test sets.The total number of annotated nodes is very small when compared to the total number of nodes in the graph. We are only going to use 15% of the annotated nodes for training and the remaining 85% of nodes for testing.First, we are going to select the subset of nodes that are annotated as hateful or normal. These will be the nodes that have 'hate' values that are either 0 or 1. ###Code # choose the nodes annotated with normal or hateful classes annotated_users = node_data[node_data["hate"] != 2] annotated_users.head() annotated_users.shape annotated_user_features = annotated_users.drop(columns=["hate"]) annotated_user_targets = annotated_users[["hate"]] ###Output _____no_output_____ ###Markdown There are 4971 annoted nodes out of a possible, approximately, 100k nodes. 
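The split in the next cell is a plain random split. Given the roughly 8:1 imbalance between normal and hateful users, a stratified split that preserves the class ratio in both subsets is a reasonable alternative; the following is only an illustrative sketch and is not used in the rest of this notebook.

###Code
# Optional alternative: stratify on the label so train and test keep the same
# normal/hateful ratio (illustrative only, not used below).
train_data_s, test_data_s, train_targets_s, test_targets_s = train_test_split(
    annotated_user_features,
    annotated_user_targets,
    test_size=0.85,
    random_state=101,
    stratify=annotated_user_targets["hate"],
)
###Output
_____no_output_____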
###Code print(annotated_user_targets.hate.value_counts()) # split the data train_data, test_data, train_targets, test_targets = train_test_split( annotated_user_features, annotated_user_targets, test_size=0.85, random_state=101 ) train_targets = train_targets.values test_targets = test_targets.values print("Sizes and class distributions for train/test data") print("Shape train_data {}".format(train_data.shape)) print("Shape test_data {}".format(test_data.shape)) print( "Train data number of 0s {} and 1s {}".format( np.sum(train_targets == 0), np.sum(train_targets == 1) ) ) print( "Test data number of 0s {} and 1s {}".format( np.sum(test_targets == 0), np.sum(test_targets == 1) ) ) train_targets.shape, test_targets.shape train_data.shape, test_data.shape ###Output _____no_output_____ ###Markdown We are going to use 745 nodes for training and 4226 nodes for testing. ###Code # choosing features to assign to a graph, excluding target variable node_features = node_data.drop(columns=["hate"]) ###Output _____no_output_____ ###Markdown Dealing with imbalanced dataBecause the training data exhibit high imbalance, we introduce class weights. ###Code from sklearn.utils.class_weight import compute_class_weight class_weights = compute_class_weight( "balanced", np.unique(train_targets), train_targets[:, 0] ) train_class_weights = dict(zip(np.unique(train_targets), class_weights)) train_class_weights ###Output _____no_output_____ ###Markdown Our data is now ready for machine learning.Node features are stored in the Pandas DataFrame `node_features`.The graph in networkx format is stored in the variable `g_nx`. Specify global parametersHere we specify some parameters that control the type of model we are going to use. For example, we specify the base model type, e.g., GCN, GraphSAGE, etc, as well as model-specific parameters. ###Code model_type = "graphsage" # Can be either gcn, gat, or graphsage if model_type == "graphsage": # For GraphSAGE model batch_size = 50 num_samples = [20, 10] epochs = 30 # The number of training epochs elif model_type == "gcn": # For GCN model epochs = 20 # The number of training epochs elif model_type == "gat": # For GAT model layer_sizes = [8, 1] attention_heads = 8 epochs = 20 # The number of training epochs ###Output _____no_output_____ ###Markdown Creating the base graph machine learning model in Keras Now create a `StellarGraph` object from the `NetworkX` graph and the node features and targets. It is `StellarGraph` objects that we use in this library to perform machine learning tasks on. ###Code G = sg.StellarGraph.from_networkx(g_nx, node_features=node_features) ###Output _____no_output_____ ###Markdown To feed data from the graph to the Keras model we need a generator. The generators are specialized to the model and the learning task. For training we map only the training nodes returned from our splitter and the target values. ###Code if model_type == "graphsage": generator = GraphSAGENodeGenerator(G, batch_size, num_samples) train_gen = generator.flow(train_data.index, train_targets, shuffle=True) elif model_type == "gcn": generator = FullBatchNodeGenerator(G, method="gcn", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) elif model_type == "gat": generator = FullBatchNodeGenerator(G, method="gat", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) ###Output _____no_output_____ ###Markdown Next we create the GNN model. We need to specify model-specific parameters based on whether we want to use GCN, GAT, or GraphSAGE. 
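Before doing so, it helps to see what the GraphSAGE sampling settings above imply: with `num_samples = [20, 10]`, each head node is expanded into 20 sampled first-hop neighbours and 20 × 10 = 200 sampled second-hop neighbours, so every prediction touches 221 sampled nodes (including the head node itself). A quick, illustrative check of these numbers:

###Code
if model_type == "graphsage":
    # 1 head node + 20 first-hop samples + 20 * 10 second-hop samples
    fan_out = 1 + num_samples[0] + num_samples[0] * num_samples[1]
    print("Sampled nodes per head node:", fan_out)
    print("Sampled nodes per batch:", batch_size * fan_out)
###Output
_____no_output_____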
###Code if model_type == "graphsage": base_model = GraphSAGE( layer_sizes=[32, 32], generator=generator, bias=True, dropout=0.5, ) x_inp, x_out = base_model.in_out_tensors() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gcn": base_model = GCN( layer_sizes=[32, 16], generator=generator, bias=True, dropout=0.5, activations=["elu", "elu"], ) x_inp, x_out = base_model.in_out_tensors() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gat": base_model = GAT( layer_sizes=layer_sizes, attn_heads=attention_heads, generator=generator, bias=True, in_dropout=0.5, attn_dropout=0.5, activations=["elu", "sigmoid"], normalize=None, ) x_inp, prediction = base_model.in_out_tensors() ###Output _____no_output_____ ###Markdown Create a Keras model Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `base_model` and outputs being the predictions from the softmax layer. ###Code model = Model(inputs=x_inp, outputs=prediction) ###Output _____no_output_____ ###Markdown We compile our Keras model to use the `Adam` optimiser and the binary cross entropy loss. ###Code model.compile( optimizer=optimizers.Adam(lr=0.005), loss=losses.binary_crossentropy, metrics=["acc"], ) model ###Output _____no_output_____ ###Markdown Train the model, keeping track of its loss and accuracy on the training set, and its performance on the test set during the training. We don't use the test set during training but only for measuring the trained model's generalization performance. ###Code test_gen = generator.flow(test_data.index, test_targets) ###Output _____no_output_____ ###Markdown Now we can train the model by calling the `fit` method. ###Code class_weight = None if model_type == "graphsage": class_weight = train_class_weights history = model.fit( train_gen, epochs=epochs, validation_data=test_gen, verbose=0, shuffle=False, class_weight=class_weight, ) sg.utils.plot_history(history) ###Output _____no_output_____ ###Markdown Model Evaluation Now we have trained the model, let's evaluate it on the test set.We are going to consider 4 evaluation metrics calculated on the test set: Accuracy, Area Under the ROC curve (AU-ROC), the ROC curve, and the confusion table. Accuracy ###Code test_metrics = model.evaluate(test_gen) print("\nTest Set Metrics:") for name, val in zip(model.metrics_names, test_metrics): print("\t{}: {:0.4f}".format(name, val)) ###Output Test Set Metrics: loss: 0.3424 acc: 0.8874 ###Markdown AU-ROCLet's use the trained GNN model to make a prediction for each node in the graph.Then, select only the predictions for the nodes in the test set and calculate the AU-ROC as another performance metric in addition to the accuracy shown above. ###Code all_nodes = node_data.index all_gen = generator.flow(all_nodes) all_predictions = model.predict(all_gen).squeeze()[..., np.newaxis] all_predictions.shape all_predictions_df = pd.DataFrame(all_predictions, index=node_data.index) ###Output _____no_output_____ ###Markdown Let's extract the predictions for the test data only. ###Code test_preds = all_predictions_df.loc[test_data.index, :] test_preds.shape ###Output _____no_output_____ ###Markdown The predictions are the probability of the true class that in this case is the probability of a user being hateful. 
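Concretely, the final `Dense(1, activation="sigmoid")` layer maps the GNN output $z$ to a score in $(0, 1)$,

$$
P(\text{hateful} \mid \mathbf{x}) = \sigma(z) = \frac{1}{1 + e^{-z}},
$$

so the 0.5 cut-off applied in the next cell is equivalent to thresholding the pre-activation at $z = 0$.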
###Code test_preds.head() test_predictions = test_preds.values test_predictions_class = ((test_predictions > 0.5) * 1).flatten() test_df = pd.DataFrame( { "Predicted_score": test_predictions.flatten(), "Predicted_class": test_predictions_class, "True": test_targets[:, 0], } ) roc_auc = metrics.roc_auc_score(test_df["True"].values, test_df["Predicted_score"].values) print("The AUC on test set:\n") print(roc_auc) ###Output The AUC on test set: 0.8845494008100929 ###Markdown Confusion table ###Code pd.crosstab(test_df["True"], test_df["Predicted_class"]) ###Output _____no_output_____ ###Markdown ROC curve ###Code fpr, tpr, thresholds = metrics.roc_curve( test_df["True"], test_df["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Visualisation of node embeddingsEvaluate node embeddings as activations of the output of one of the graph convolutional or aggregation layers in the Keras model, and visualise them, coloring nodes by their subject label.You can find the index of the layer of interest by calling `model.layers`. First, create a Keras model for calculating the embeddings ###Code model.layers if model_type == "graphsage": # For GraphSAGE, we are going to use the output activations # of the second GraphSAGE layer as the node embeddings # x_inp, prediction emb_model = Model(inputs=x_inp, outputs=model.layers[-4].output) emb = emb_model.predict(generator=all_gen,) elif model_type == "gcn": # For GCN, we are going to use the output activations of # the second GCN layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict(generator=all_gen) elif model_type == "gat": # For GAT, we are going to use the output activations of the # first Graph Attention layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict(generator=all_gen) emb.shape emb = emb.squeeze() if model_type == "graphsage": emb_all_df = pd.DataFrame(emb, index=node_data.index) elif model_type == "gcn" or model_type == "gat": emb_all_df = pd.DataFrame(emb, index=G.nodes()) ###Output _____no_output_____ ###Markdown Select the embeddings for the test set. We are only going to visualise the test set embeddings. 
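Before plotting, a rough quantitative complement to the visual inspection is the silhouette score of the test-set embeddings with respect to the true labels (values near 1 indicate well-separated classes, values near 0 heavy overlap). A minimal, illustrative check:

###Code
from sklearn.metrics import silhouette_score

print(
    "Silhouette score of test-set embeddings:",
    silhouette_score(emb_all_df.loc[test_data.index, :], test_targets.ravel()),
)
###Output
_____no_output_____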
###Code emb_test = emb_all_df.loc[test_data.index, :] ###Output _____no_output_____ ###Markdown Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their subject label ###Code X = emb_test y = test_targets X.shape transform = TSNE # or use PCA trans = transform(n_components=2) emb_transformed = pd.DataFrame(trans.fit_transform(X), index=test_data.index) emb_transformed["label"] = y alpha = 0.7 fig, ax = plt.subplots(figsize=(14, 8,)) ax.scatter( emb_transformed[0], emb_transformed[1], c=emb_transformed["label"].astype("category"), cmap="jet", alpha=alpha, ) ax.set(xlabel="$X_1$", ylabel="$X_2$") plt.title( "{} visualization of embeddings for tweeter dataset".format(transform.__name__), fontsize=24, ) plt.show() ###Output _____no_output_____ ###Markdown The node embeddings shown above indicate that the majority of hateful users tend to cluster together. However, some normal users are also in the same neighbourhood and these will be difficult to distinguish from hateful ones. Similarly, there is a small number of hateful users dispersed among normal users and these will also be difficult classify correctly. Predictions using Logistic RegressionFinally, we train a Logistic Regression model on the same train and test data but this time ignoring the graph structure and focusing entirely on the node features. The variables `train_data`, `test_data`, `train_targets`, and `test_targets`, hold the data we need to train the Logistic Regression classifier. ###Code lr = LogisticRegressionCV( cv=5, class_weight=class_weight, max_iter=10000 ) # Let's use the default parameters lr.fit(train_data, train_targets.ravel()) ###Output _____no_output_____ ###Markdown We can now use the trained model to predict the test data ###Code test_preds_lr = lr.predict_proba(test_data) test_preds_lr.shape ###Output _____no_output_____ ###Markdown Accuracy ###Code lr.score(test_data, test_targets) ###Output _____no_output_____ ###Markdown Calculate AU-ROC metric ###Code test_predictions_class_lr = ((test_preds_lr[:, 1] > 0.5) * 1).flatten() test_df_lr = pd.DataFrame( { "Predicted_score": test_preds_lr[:, 1].flatten(), "Predicted_class": test_predictions_class_lr, "True": test_targets[:, 0], } ) roc_auc_lr = metrics.roc_auc_score( test_df_lr["True"].values, test_df_lr["Predicted_score"].values ) print("The AUC on test set:\n") print(roc_auc_lr) ###Output The AUC on test set: 0.8090002175324247 ###Markdown The confusion table ###Code pd.crosstab(test_df_lr["True"], test_df_lr["Predicted_class"]) ###Output _____no_output_____ ###Markdown The ROC curve ###Code fpr_lr, tpr_lr, thresholds_lr = metrics.roc_curve( test_df_lr["True"], test_df_lr["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr_lr, tpr_lr, color="darkorange", lw=lw, label="LR ROC curve (area = %0.2f)" % roc_auc_lr, ) plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Let's have a closer look at the True Positive Rate for the GNN and Logistic Regression models at 2% False Positive Rate. 
###Code print( "At 2% FPR, GNN TPR={:.3f}, LR TPR={:.3f}".format( np.interp(0.02, fpr, tpr), np.interp(0.02, fpr_lr, tpr_lr) ) ) ###Output At 2% FPR, GNN TPR=0.378, LR TPR=0.253 ###Markdown Detection of Twitter users who use hateful lexicon with graph machine learning Run the latest release of this notebook: We consider the use-case of identifying hateful users on Twitter motivated by the work in [1] and using the dataset also published in [1]. Classification is based on a graph based on users' retweets and attributes as related to their account activity, and the content of tweets.We pose identifying hateful users as a binary classification problem. We demonstrate the advantage of connected vs unconnected data in a semi-supervised setting with few training examples.For connected data, we use Graph Neural Network methods, GCN [2], GAT [3], and GraphSAGE [4] as implemented in the `stellargraph` library. We pose the problem of identifying hateful tweeter users as node attribute inference in graphs.**References**1. "Like Sheep Among Wolves": Characterizing Hateful Users on Twitter. M. H. Ribeiro, P. H. Calais, Y. A. Santos, V. A. F. Almeida, and W. Meira Jr. arXiv preprint arXiv:1801.00317 (2017).2. Semi-Supervised Classification with Graph Convolutional Networks. T. Kipf, M. Welling. ICLR 2017. arXiv:1609.02907 3. Graph Attention Networks. P. Veličković et al. ICLR 20184. Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec arXiv:1706.02216 [cs.SI], 2017. ###Code # install StellarGraph if running on Google Colab import sys if 'google.colab' in sys.modules: %pip install -q stellargraph[demos]==1.3.0b # verify that we're using the correct version of StellarGraph for this notebook import stellargraph as sg try: sg.utils.validate_notebook_version("1.3.0b") except AttributeError: raise ValueError( f"This notebook requires StellarGraph version 1.3.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>." ) from None import networkx as nx import pandas as pd import numpy as np import seaborn as sns import itertools import os from sklearn.decomposition import PCA from sklearn.manifold import TSNE from sklearn.linear_model import LogisticRegressionCV import stellargraph as sg from stellargraph.mapper import GraphSAGENodeGenerator, FullBatchNodeGenerator from stellargraph.layer import GraphSAGE, GCN, GAT from stellargraph import globalvar from tensorflow.keras import layers, optimizers, losses, metrics, Model, models from sklearn import preprocessing, feature_extraction from sklearn.model_selection import train_test_split from sklearn import metrics import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline ###Output _____no_output_____ ###Markdown Loading the data **Downloading the dataset:**The dataset for this demo was published in [1] and it is freely available to download from Kaggle [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter/home).The following is the description of the datasets:>This dataset contains a network of 100k users, out of which ~5k were annotated as hateful or>not. For each user, several content-related, network-related and activity related features>were provided. Additional files of hateful lexicon can be found [here]( https://github.com/manoelhortaribeiro/HatefulUsersTwitter/tree/master/data/extra)Download the dataset and then set the `data_dir` variable to point to the download location. 
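If the Kaggle command-line client is installed and configured with an API token, the download can also be scripted. The command below is left commented out and assumes a particular local setup (CLI installed via pip, token in `~/.kaggle/kaggle.json`); adjust the target path to match `data_dir`.

###Code
# Assumes the Kaggle CLI (`pip install kaggle`) and a valid API token.
# !kaggle datasets download -d manoelribeiro/hateful-users-on-twitter -p ~/data/hateful-twitter-users --unzip
###Output
_____no_output_____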
###Code data_dir = os.path.expanduser("~/data/hateful-twitter-users") ###Output _____no_output_____ ###Markdown First load and prepare the node featuresEach node in the graph is associated with a large number of features (also referred to as attributes). The list of features is given [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter). We repeated here for convenience.hate :("hateful"|"normal"|"other") if user was annotated as hateful, normal, or not annotated. (is_50|is_50_2) :bool whether user was deleted up to 12/12/17 or 14/01/18. (is_63|is_63_2) :bool whether user was suspended up to 12/12/17 or 14/01/18. (hate|normal)_neigh :bool is the user on the neighborhood of a (hateful|normal) user? [c_] (statuses|follower|followees|favorites)_count :int number of (tweets|follower|followees|favorites) a user has. [c_] listed_count:int number of lists a user is in. [c_] (betweenness|eigenvector|in_degree|outdegree) :float centrality measurements for each user in the retweet graph. [c_] *_empath :float occurrences of empath categories in the users latest 200 tweets. [c_] *_glove :float glove vector calculated for users latest 200 tweets. [c_] (sentiment|subjectivity) :float average sentiment and subjectivity of users tweets. [c_] (time_diff|time_diff_median) :float average and median time difference between tweets. [c_] (tweet|retweet|quote) number :float percentage of direct tweets, retweets and quotes of an user. [c_] (number urls|number hashtags|baddies|mentions) :float number of bad words|mentions|urls|hashtags per tweet in average. [c_] status length :float average status length. hashtags :string all hashtags employed by the user separated by spaces. **Notice** that c_ are attributes calculated for the 1-neighborhood of a user in the retweet network (averaged out). First, we are going to load the user features and prepare them for machine learning. ###Code users_feat = pd.read_csv(os.path.join(data_dir, "users_neighborhood_anon.csv")) users_feat.head() ###Output _____no_output_____ ###Markdown Let's have a look at the distribution of hateful, normal (not hateful), and other (unknown) users in the dataset ###Code print("Initial hateful/normal users distribution") print(users_feat.shape) print(users_feat.hate.value_counts()) ###Output Initial hateful/normal users distribution (100386, 1039) other 95415 normal 4427 hateful 544 Name: hate, dtype: int64 ###Markdown There is a clear imbalance on the number of users tagged as hateful vs normal and unknown. Data cleaning and preprocessing The dataset as given includes a large number of graph related features that are manually extracted. Since we are going to employ modern graph neural networks methods for classification, we are going to drop these manually engineered features. The power of Graph Neural Networks stems from their ability to learn useful graph-related features eliminating the need for manual feature engineering. 
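As a quick sanity check before the cleaning step below, we can count how many raw columns each of the families that will be dropped contributes; the patterns mirror the ones used inside `data_cleaning` (illustrative only).

###Code
for pattern in ["_glove", "c_", "is_", "sentiment"]:
    print(pattern, users_feat.columns.str.contains(pattern).sum())
###Output
_____no_output_____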
###Code def data_cleaning(feat): feat = feat.drop(columns=["hate_neigh", "normal_neigh"]) # Convert target values in hate column from strings to integers (0,1,2) feat["hate"] = np.where( feat["hate"] == "hateful", 1, np.where(feat["hate"] == "normal", 0, 2) ) # missing information number_of_missing = feat.isnull().sum() number_of_missing[number_of_missing != 0] # Replace NA with 0 feat.fillna(0, inplace=True) # droping info about suspension and deletion as it is should not be use din the predictive model feat.drop(feat.columns[feat.columns.str.contains("is_")], axis=1, inplace=True) # drop glove features feat.drop(feat.columns[feat.columns.str.contains("_glove")], axis=1, inplace=True) # drop c_ features feat.drop(feat.columns[feat.columns.str.contains("c_")], axis=1, inplace=True) # drop sentiment features for now feat.drop(feat.columns[feat.columns.str.contains("sentiment")], axis=1, inplace=True) # drop hashtag feature feat.drop(["hashtags"], axis=1, inplace=True) # Drop centrality based measures feat.drop( columns=["betweenness", "eigenvector", "in_degree", "out_degree"], inplace=True ) feat.drop(columns=["created_at"], inplace=True) return feat node_data = data_cleaning(users_feat) ###Output _____no_output_____ ###Markdown Of the original **1037** node features, we are keeping only **204** that are based on a user's attributes and tweet lexicon. We have removed any manually engineered graph features since the graph neural network algorithms we are going to use will automatically determine the best features to use during training. ###Code node_data.shape node_data.head() ###Output _____no_output_____ ###Markdown The continous features in our dataset have distributions with very long tails. We apply normalization to correct for this. ###Code # Ignore the first two columns because those are user_id and hate (the target variable) df_values = node_data.iloc[:, 2:].values pt = preprocessing.PowerTransformer(method="yeo-johnson", standardize=True) df_values_log = pt.fit_transform(df_values) ###Output _____no_output_____ ###Markdown Let's have a look at one of the normalized features before and after the power transform was applied.The feature we are going to look at is a user's number of followers. ###Code sns_rc = {"lines.linewidth": 3, "figure.figsize": (12, 6)} sns.set_context("paper", rc=sns_rc) sns.set_style("whitegrid", {"axes.grid": False}) sns.kdeplot(df_values[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers, before Power Transform", fontsize=18) sns.kdeplot(df_values_log[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers after Power Transform", fontsize=18) ###Output _____no_output_____ ###Markdown Feature normalization looks like it is doing the right thing as the raw features have long tails that are eliminated after applying the power transform. So let us use the normalized features from now on. ###Code node_data.iloc[:, 2:] = df_values_log # Set the dataframe index to be the same as the user_id and drop the user_id columns node_data.index = node_data.index.map(str) node_data.drop(columns=["user_id"], inplace=True) ###Output _____no_output_____ ###Markdown Node features are now ready for machine learning. ###Code node_data.head() ###Output _____no_output_____ ###Markdown Next load the graphNow that we have the node features prepared for machine learning, let us load the retweet graph. 
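The next cell reads the edge list as an undirected graph, which is what the rest of this notebook uses. If the direction of retweets mattered for a different experiment, `networkx` could load the same file as a directed graph; a sketch for reference only, left commented out:

###Code
# Not used below: load the retweet edges keeping their direction.
# g_nx_directed = nx.read_edgelist(
#     path=os.path.expanduser(os.path.join(data_dir, "users.edges")),
#     create_using=nx.DiGraph(),
# )
###Output
_____no_output_____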
###Code g_nx = nx.read_edgelist(path=os.path.expanduser(os.path.join(data_dir, "users.edges"))) g_nx.number_of_nodes(), g_nx.number_of_edges() ###Output _____no_output_____ ###Markdown The graph has just over 100k nodes and approximately 2.2m edges.We aim to train a graph neural network model that will predict the "hate"attribute on the nodes.For computation convenience, we have mapped the target labels **normal**, **hateful**, and **other** to the numeric values **0**, **1**, and **2** respectively. ###Code print(set(node_data["hate"])) ###Output {0, 1, 2} ###Markdown Splitting the data For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to split our data into training and test sets.The total number of annotated nodes is very small when compared to the total number of nodes in the graph. We are only going to use 15% of the annotated nodes for training and the remaining 85% of nodes for testing.First, we are going to select the subset of nodes that are annotated as hateful or normal. These will be the nodes that have 'hate' values that are either 0 or 1. ###Code # choose the nodes annotated with normal or hateful classes annotated_users = node_data[node_data["hate"] != 2] annotated_users.head() annotated_users.shape annotated_user_features = annotated_users.drop(columns=["hate"]) annotated_user_targets = annotated_users[["hate"]] ###Output _____no_output_____ ###Markdown There are 4971 annoted nodes out of a possible, approximately, 100k nodes. ###Code print(annotated_user_targets.hate.value_counts()) # split the data train_data, test_data, train_targets, test_targets = train_test_split( annotated_user_features, annotated_user_targets, test_size=0.85, random_state=101 ) train_targets = train_targets.values test_targets = test_targets.values print("Sizes and class distributions for train/test data") print("Shape train_data {}".format(train_data.shape)) print("Shape test_data {}".format(test_data.shape)) print( "Train data number of 0s {} and 1s {}".format( np.sum(train_targets == 0), np.sum(train_targets == 1) ) ) print( "Test data number of 0s {} and 1s {}".format( np.sum(test_targets == 0), np.sum(test_targets == 1) ) ) train_targets.shape, test_targets.shape train_data.shape, test_data.shape ###Output _____no_output_____ ###Markdown We are going to use 745 nodes for training and 4226 nodes for testing. ###Code # choosing features to assign to a graph, excluding target variable node_features = node_data.drop(columns=["hate"]) ###Output _____no_output_____ ###Markdown Dealing with imbalanced dataBecause the training data exhibit high imbalance, we introduce class weights. ###Code from sklearn.utils.class_weight import compute_class_weight class_weights = compute_class_weight( "balanced", np.unique(train_targets), train_targets[:, 0] ) train_class_weights = dict(zip(np.unique(train_targets), class_weights)) train_class_weights ###Output _____no_output_____ ###Markdown Our data is now ready for machine learning.Node features are stored in the Pandas DataFrame `node_features`.The graph in networkx format is stored in the variable `g_nx`. Specify global parametersHere we specify some parameters that control the type of model we are going to use. For example, we specify the base model type, e.g., GCN, GraphSAGE, etc, as well as model-specific parameters. 
###Code model_type = "graphsage" # Can be either gcn, gat, or graphsage if model_type == "graphsage": # For GraphSAGE model batch_size = 50 num_samples = [20, 10] epochs = 30 # The number of training epochs elif model_type == "gcn": # For GCN model epochs = 20 # The number of training epochs elif model_type == "gat": # For GAT model layer_sizes = [8, 1] attention_heads = 8 epochs = 20 # The number of training epochs ###Output _____no_output_____ ###Markdown Creating the base graph machine learning model in Keras Now create a `StellarGraph` object from the `NetworkX` graph and the node features and targets. It is `StellarGraph` objects that we use in this library to perform machine learning tasks on. ###Code G = sg.StellarGraph.from_networkx(g_nx, node_features=node_features) ###Output _____no_output_____ ###Markdown To feed data from the graph to the Keras model we need a generator. The generators are specialized to the model and the learning task. For training we map only the training nodes returned from our splitter and the target values. ###Code if model_type == "graphsage": generator = GraphSAGENodeGenerator(G, batch_size, num_samples) train_gen = generator.flow(train_data.index, train_targets, shuffle=True) elif model_type == "gcn": generator = FullBatchNodeGenerator(G, method="gcn", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) elif model_type == "gat": generator = FullBatchNodeGenerator(G, method="gat", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) ###Output _____no_output_____ ###Markdown Next we create the GNN model. We need to specify model-specific parameters based on whether we want to use GCN, GAT, or GraphSAGE. ###Code if model_type == "graphsage": base_model = GraphSAGE( layer_sizes=[32, 32], generator=generator, bias=True, dropout=0.5, ) x_inp, x_out = base_model.in_out_tensors() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gcn": base_model = GCN( layer_sizes=[32, 16], generator=generator, bias=True, dropout=0.5, activations=["elu", "elu"], ) x_inp, x_out = base_model.in_out_tensors() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gat": base_model = GAT( layer_sizes=layer_sizes, attn_heads=attention_heads, generator=generator, bias=True, in_dropout=0.5, attn_dropout=0.5, activations=["elu", "sigmoid"], normalize=None, ) x_inp, prediction = base_model.in_out_tensors() ###Output _____no_output_____ ###Markdown Create a Keras model Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `base_model` and outputs being the predictions from the softmax layer. ###Code model = Model(inputs=x_inp, outputs=prediction) ###Output _____no_output_____ ###Markdown We compile our Keras model to use the `Adam` optimiser and the binary cross entropy loss. ###Code model.compile( optimizer=optimizers.Adam(lr=0.005), loss=losses.binary_crossentropy, metrics=["acc"], ) model ###Output _____no_output_____ ###Markdown Train the model, keeping track of its loss and accuracy on the training set, and its performance on the test set during the training. We don't use the test set during training but only for measuring the trained model's generalization performance. ###Code test_gen = generator.flow(test_data.index, test_targets) ###Output _____no_output_____ ###Markdown Now we can train the model by calling the `fit` method. 
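When `model_type` is `graphsage`, the `class_weight` dictionary passed to `fit` below is the one produced earlier by scikit-learn's `"balanced"` heuristic, which gives each class $c$ the weight $n_{\text{samples}} / (n_{\text{classes}} \cdot n_c)$. A quick, illustrative check that the stored weights match this formula:

###Code
n_samples = len(train_targets)
for c, w in train_class_weights.items():
    n_c = int(np.sum(train_targets == c))
    # two classes in the training set, so n_classes = 2
    print(
        "class {}: stored weight {:.4f}, n_samples / (2 * n_c) = {:.4f}".format(
            c, w, n_samples / (2 * n_c)
        )
    )
###Output
_____no_output_____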
###Code class_weight = None if model_type == "graphsage": class_weight = train_class_weights history = model.fit( train_gen, epochs=epochs, validation_data=test_gen, verbose=0, shuffle=False, class_weight=class_weight, ) sg.utils.plot_history(history) ###Output _____no_output_____ ###Markdown Model Evaluation Now we have trained the model, let's evaluate it on the test set.We are going to consider 4 evaluation metrics calculated on the test set: Accuracy, Area Under the ROC curve (AU-ROC), the ROC curve, and the confusion table. Accuracy ###Code test_metrics = model.evaluate(test_gen) print("\nTest Set Metrics:") for name, val in zip(model.metrics_names, test_metrics): print("\t{}: {:0.4f}".format(name, val)) ###Output Test Set Metrics: loss: 0.3424 acc: 0.8874 ###Markdown AU-ROCLet's use the trained GNN model to make a prediction for each node in the graph.Then, select only the predictions for the nodes in the test set and calculate the AU-ROC as another performance metric in addition to the accuracy shown above. ###Code all_nodes = node_data.index all_gen = generator.flow(all_nodes) all_predictions = model.predict(all_gen).squeeze()[..., np.newaxis] all_predictions.shape all_predictions_df = pd.DataFrame(all_predictions, index=node_data.index) ###Output _____no_output_____ ###Markdown Let's extract the predictions for the test data only. ###Code test_preds = all_predictions_df.loc[test_data.index, :] test_preds.shape ###Output _____no_output_____ ###Markdown The predictions are the probability of the true class that in this case is the probability of a user being hateful. ###Code test_preds.head() test_predictions = test_preds.values test_predictions_class = ((test_predictions > 0.5) * 1).flatten() test_df = pd.DataFrame( { "Predicted_score": test_predictions.flatten(), "Predicted_class": test_predictions_class, "True": test_targets[:, 0], } ) roc_auc = metrics.roc_auc_score(test_df["True"].values, test_df["Predicted_score"].values) print("The AUC on test set:\n") print(roc_auc) ###Output The AUC on test set: 0.8845494008100929 ###Markdown Confusion table ###Code pd.crosstab(test_df["True"], test_df["Predicted_class"]) ###Output _____no_output_____ ###Markdown ROC curve ###Code fpr, tpr, thresholds = metrics.roc_curve( test_df["True"], test_df["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Visualisation of node embeddingsEvaluate node embeddings as activations of the output of one of the graph convolutional or aggregation layers in the Keras model, and visualise them, coloring nodes by their subject label.You can find the index of the layer of interest by calling `model.layers`. 
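A compact way to inspect the candidate layers is to print their indices and types, as in the illustrative loop below. Note also that, depending on the installed TensorFlow version, the generator may need to be passed to `predict` positionally (e.g. `emb_model.predict(all_gen)`) rather than as a keyword argument.

###Code
for i, layer in enumerate(model.layers):
    print(i, type(layer).__name__, layer.name)
###Output
_____no_output_____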
First, create a Keras model for calculating the embeddings ###Code model.layers if model_type == "graphsage": # For GraphSAGE, we are going to use the output activations # of the second GraphSAGE layer as the node embeddings # x_inp, prediction emb_model = Model(inputs=x_inp, outputs=model.layers[-4].output) emb = emb_model.predict(generator=all_gen,) elif model_type == "gcn": # For GCN, we are going to use the output activations of # the second GCN layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict(generator=all_gen) elif model_type == "gat": # For GAT, we are going to use the output activations of the # first Graph Attention layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict(generator=all_gen) emb.shape emb = emb.squeeze() if model_type == "graphsage": emb_all_df = pd.DataFrame(emb, index=node_data.index) elif model_type == "gcn" or model_type == "gat": emb_all_df = pd.DataFrame(emb, index=G.nodes()) ###Output _____no_output_____ ###Markdown Select the embeddings for the test set. We are only going to visualise the test set embeddings. ###Code emb_test = emb_all_df.loc[test_data.index, :] ###Output _____no_output_____ ###Markdown Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their subject label ###Code X = emb_test y = test_targets X.shape transform = TSNE # or use PCA trans = transform(n_components=2) emb_transformed = pd.DataFrame(trans.fit_transform(X), index=test_data.index) emb_transformed["label"] = y alpha = 0.7 fig, ax = plt.subplots(figsize=(14, 8,)) ax.scatter( emb_transformed[0], emb_transformed[1], c=emb_transformed["label"].astype("category"), cmap="jet", alpha=alpha, ) ax.set(xlabel="$X_1$", ylabel="$X_2$") plt.title( "{} visualization of embeddings for tweeter dataset".format(transform.__name__), fontsize=24, ) plt.show() ###Output _____no_output_____ ###Markdown The node embeddings shown above indicate that the majority of hateful users tend to cluster together. However, some normal users are also in the same neighbourhood and these will be difficult to distinguish from hateful ones. Similarly, there is a small number of hateful users dispersed among normal users and these will also be difficult classify correctly. Predictions using Logistic RegressionFinally, we train a Logistic Regression model on the same train and test data but this time ignoring the graph structure and focusing entirely on the node features. The variables `train_data`, `test_data`, `train_targets`, and `test_targets`, hold the data we need to train the Logistic Regression classifier. 
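Note that `class_weight` was only set when `model_type` is `graphsage`, so for the other model types the baseline below is fitted without any class weighting. An explicitly balanced variant, shown for reference only:

###Code
# Illustrative only: a baseline that always corrects for class imbalance,
# regardless of which GNN was trained above.
lr_balanced = LogisticRegressionCV(cv=5, class_weight="balanced", max_iter=10000)
lr_balanced.fit(train_data, train_targets.ravel())
print("Balanced LR accuracy:", lr_balanced.score(test_data, test_targets.ravel()))
###Output
_____no_output_____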
###Code lr = LogisticRegressionCV( cv=5, class_weight=class_weight, max_iter=10000 ) # Let's use the default parameters lr.fit(train_data, train_targets.ravel()) ###Output _____no_output_____ ###Markdown We can now use the trained model to predict the test data ###Code test_preds_lr = lr.predict_proba(test_data) test_preds_lr.shape ###Output _____no_output_____ ###Markdown Accuracy ###Code lr.score(test_data, test_targets) ###Output _____no_output_____ ###Markdown Calculate AU-ROC metric ###Code test_predictions_class_lr = ((test_preds_lr[:, 1] > 0.5) * 1).flatten() test_df_lr = pd.DataFrame( { "Predicted_score": test_preds_lr[:, 1].flatten(), "Predicted_class": test_predictions_class_lr, "True": test_targets[:, 0], } ) roc_auc_lr = metrics.roc_auc_score( test_df_lr["True"].values, test_df_lr["Predicted_score"].values ) print("The AUC on test set:\n") print(roc_auc_lr) ###Output The AUC on test set: 0.8090002175324247 ###Markdown The confusion table ###Code pd.crosstab(test_df_lr["True"], test_df_lr["Predicted_class"]) ###Output _____no_output_____ ###Markdown The ROC curve ###Code fpr_lr, tpr_lr, thresholds_lr = metrics.roc_curve( test_df_lr["True"], test_df_lr["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr_lr, tpr_lr, color="darkorange", lw=lw, label="LR ROC curve (area = %0.2f)" % roc_auc_lr, ) plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Let's have a closer look at the True Positive Rate for the GNN and Logistic Regression models at 2% False Positive Rate. ###Code print( "At 2% FPR, GNN TPR={:.3f}, LR TPR={:.3f}".format( np.interp(0.02, fpr, tpr), np.interp(0.02, fpr_lr, tpr_lr) ) ) ###Output At 2% FPR, GNN TPR=0.378, LR TPR=0.253 ###Markdown Detection of Twitter users who use hateful lexicon using graph machine learning with StellargraphWe consider the use-case of identifying hateful users on Twitter motivated by the work in [1] and using the dataset also published in [1]. Classification is based on a graph based on users' retweets and attributes as related to their account activity, and the content of tweets.We pose identifying hateful users as a binary classification problem. We demonstrate the advantage of connected vs unconnected data in a semi-supervised setting with few training examples.For connected data, we use Graph Neural Network methods, GCN [2], GAT [3], and GraphSAGE [4] as implemented in the `stellargraph` library. We pose the problem of identifying hateful tweeter users as node attribute inference in graphs.**References**1. "Like Sheep Among Wolves": Characterizing Hateful Users on Twitter. M. H. Ribeiro, P. H. Calais, Y. A. Santos, V. A. F. Almeida, and W. Meira Jr. arXiv preprint arXiv:1801.00317 (2017).2. Semi-Supervised Classification with Graph Convolutional Networks. T. Kipf, M. Welling. ICLR 2017. arXiv:1609.02907 3. Graph Attention Networks. P. Velickovic et al. ICLR 20184. Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec arXiv:1706.02216 [cs.SI], 2017. 
Run the master version of this notebook: ###Code # install StellarGraph if running on Google Colab import sys if 'google.colab' in sys.modules: %pip install -q stellargraph[demos] # verify that we're using the correct version of StellarGraph for this notebook import stellargraph as sg try: sg.utils.validate_notebook_version("1.0.0b") except AttributeError: raise ValueError( f"This notebook requires StellarGraph version 1.0.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>." ) from None import networkx as nx import pandas as pd import numpy as np import seaborn as sns import itertools import os from sklearn.decomposition import PCA from sklearn.manifold import TSNE from sklearn.linear_model import LogisticRegressionCV import stellargraph as sg from stellargraph.mapper import GraphSAGENodeGenerator, FullBatchNodeGenerator from stellargraph.layer import GraphSAGE, GCN, GAT from stellargraph import globalvar from tensorflow.keras import layers, optimizers, losses, metrics, Model, models from sklearn import preprocessing, feature_extraction from sklearn.model_selection import train_test_split from sklearn import metrics import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline ###Output _____no_output_____ ###Markdown Loading the data **Downloading the dataset:**The dataset for this demo was published in [1] and it is freely available to download from Kaggle [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter/home).The following is the description of the datasets:>This dataset contains a network of 100k users, out of which ~5k were annotated as hateful or>not. For each user, several content-related, network-related and activity related features>were provided. Additional files of hateful lexicon can be found [here]( https://github.com/manoelhortaribeiro/HatefulUsersTwitter/tree/master/data/extra)Download the dataset and then set the `data_dir` variable to point to the download location. ###Code data_dir = os.path.expanduser("~/data/hateful-twitter-users") ###Output _____no_output_____ ###Markdown First load and prepare the node featuresEach node in the graph is associated with a large number of features (also referred to as attributes). The list of features is given [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter). We repeated here for convenience.hate :("hateful"|"normal"|"other") if user was annotated as hateful, normal, or not annotated. (is_50|is_50_2) :bool whether user was deleted up to 12/12/17 or 14/01/18. (is_63|is_63_2) :bool whether user was suspended up to 12/12/17 or 14/01/18. (hate|normal)_neigh :bool is the user on the neighborhood of a (hateful|normal) user? [c_] (statuses|follower|followees|favorites)_count :int number of (tweets|follower|followees|favorites) a user has. [c_] listed_count:int number of lists a user is in. [c_] (betweenness|eigenvector|in_degree|outdegree) :float centrality measurements for each user in the retweet graph. [c_] *_empath :float occurrences of empath categories in the users latest 200 tweets. [c_] *_glove :float glove vector calculated for users latest 200 tweets. [c_] (sentiment|subjectivity) :float average sentiment and subjectivity of users tweets. [c_] (time_diff|time_diff_median) :float average and median time difference between tweets. [c_] (tweet|retweet|quote) number :float percentage of direct tweets, retweets and quotes of an user. 
[c_] (number urls|number hashtags|baddies|mentions) :float number of bad words|mentions|urls|hashtags per tweet in average. [c_] status length :float average status length. hashtags :string all hashtags employed by the user separated by spaces. **Notice** that c_ are attributes calculated for the 1-neighborhood of a user in the retweet network (averaged out). First, we are going to load the user features and prepare them for machine learning. ###Code users_feat = pd.read_csv(os.path.join(data_dir, "users_neighborhood_anon.csv")) users_feat.head() ###Output _____no_output_____ ###Markdown Let's have a look at the distribution of hateful, normal (not hateful), and other (unknown) users in the dataset ###Code print("Initial hateful/normal users distribution") print(users_feat.shape) print(users_feat.hate.value_counts()) ###Output Initial hateful/normal users distribution (100386, 1039) other 95415 normal 4427 hateful 544 Name: hate, dtype: int64 ###Markdown There is a clear imbalance on the number of users tagged as hateful vs normal and unknown. Data cleaning and preprocessing The dataset as given includes a large number of graph related features that are manually extracted. Since we are going to employ modern graph neural networks methods for classification, we are going to drop these manually engineered features. The power of Graph Neural Networks stems from their ability to learn useful graph-related features eliminating the need for manual feature engineering. ###Code def data_cleaning(feat): feat = feat.drop(columns=["hate_neigh", "normal_neigh"]) # Convert target values in hate column from strings to integers (0,1,2) feat["hate"] = np.where( feat["hate"] == "hateful", 1, np.where(feat["hate"] == "normal", 0, 2) ) # missing information number_of_missing = feat.isnull().sum() number_of_missing[number_of_missing != 0] # Replace NA with 0 feat.fillna(0, inplace=True) # droping info about suspension and deletion as it is should not be use din the predictive model feat.drop(feat.columns[feat.columns.str.contains("is_")], axis=1, inplace=True) # drop glove features feat.drop(feat.columns[feat.columns.str.contains("_glove")], axis=1, inplace=True) # drop c_ features feat.drop(feat.columns[feat.columns.str.contains("c_")], axis=1, inplace=True) # drop sentiment features for now feat.drop(feat.columns[feat.columns.str.contains("sentiment")], axis=1, inplace=True) # drop hashtag feature feat.drop(["hashtags"], axis=1, inplace=True) # Drop centrality based measures feat.drop( columns=["betweenness", "eigenvector", "in_degree", "out_degree"], inplace=True ) feat.drop(columns=["created_at"], inplace=True) return feat node_data = data_cleaning(users_feat) ###Output _____no_output_____ ###Markdown Of the original **1037** node features, we are keeping only **204** that are based on a user's attributes and tweet lexicon. We have removed any manually engineered graph features since the graph neural network algorithms we are going to use will automatically determine the best features to use during training. ###Code node_data.shape node_data.head() ###Output _____no_output_____ ###Markdown The continous features in our dataset have distributions with very long tails. We apply normalization to correct for this. 
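One way to quantify those long tails before transforming is the per-feature skewness, which sits far from zero for heavy-tailed distributions. A quick, illustrative summary over the cleaned numeric columns (excluding `user_id` and `hate`, as in the transform below):

###Code
print(node_data.iloc[:, 2:].skew().describe())
###Output
_____no_output_____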
###Code # Ignore the first two columns because those are user_id and hate (the target variable) df_values = node_data.iloc[:, 2:].values pt = preprocessing.PowerTransformer(method="yeo-johnson", standardize=True) df_values_log = pt.fit_transform(df_values) ###Output _____no_output_____ ###Markdown Let's have a look at one of the normalized features before and after the power transform was applied.The feature we are going to look at is a user's number of followers. ###Code sns_rc = {"lines.linewidth": 3, "figure.figsize": (12, 6)} sns.set_context("paper", rc=sns_rc) sns.set_style("whitegrid", {"axes.grid": False}) sns.kdeplot(df_values[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers, before Power Transform", fontsize=18) sns.kdeplot(df_values_log[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers after Power Transform", fontsize=18) ###Output _____no_output_____ ###Markdown Feature normalization looks like it is doing the right thing as the raw features have long tails that are eliminated after applying the power transform. So let us use the normalized features from now on. ###Code node_data.iloc[:, 2:] = df_values_log # Set the dataframe index to be the same as the user_id and drop the user_id columns node_data.index = node_data.index.map(str) node_data.drop(columns=["user_id"], inplace=True) ###Output _____no_output_____ ###Markdown Node features are now ready for machine learning. ###Code node_data.head() ###Output _____no_output_____ ###Markdown Next load the graphNow that we have the node features prepared for machine learning, let us load the retweet graph. ###Code g_nx = nx.read_edgelist(path=os.path.expanduser(os.path.join(data_dir, "users.edges"))) g_nx.number_of_nodes(), g_nx.number_of_edges() ###Output _____no_output_____ ###Markdown The graph has just over 100k nodes and approximately 2.2m edges.We aim to train a graph neural network model that will predict the "hate"attribute on the nodes.For computation convenience, we have mapped the target labels **normal**, **hateful**, and **other** to the numeric values **0**, **1**, and **2** respectively. ###Code print(set(node_data["hate"])) ###Output {0, 1, 2} ###Markdown Splitting the data For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to split our data into training and test sets.The total number of annotated nodes is very small when compared to the total number of nodes in the graph. We are only going to use 15% of the annotated nodes for training and the remaining 85% of nodes for testing.First, we are going to select the subset of nodes that are annotated as hateful or normal. These will be the nodes that have 'hate' values that are either 0 or 1. ###Code # choose the nodes annotated with normal or hateful classes annotated_users = node_data[node_data["hate"] != 2] annotated_users.head() annotated_users.shape annotated_user_features = annotated_users.drop(columns=["hate"]) annotated_user_targets = annotated_users[["hate"]] ###Output _____no_output_____ ###Markdown There are 4971 annoted nodes out of a possible, approximately, 100k nodes. 
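As a quick arithmetic check (assuming the usual rounding of the test fraction up to whole samples), an 85% test split of these 4,971 annotated nodes leaves about 745 nodes for training and 4,226 for testing, matching the sizes reported further down.

###Code
import math

n_annotated = annotated_users.shape[0]
n_test = math.ceil(n_annotated * 0.85)  # test fraction rounded up to whole samples
n_train = n_annotated - n_test
print("Expected train/test sizes:", n_train, n_test)
###Output
_____no_output_____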
###Code print(annotated_user_targets.hate.value_counts()) # split the data train_data, test_data, train_targets, test_targets = train_test_split( annotated_user_features, annotated_user_targets, test_size=0.85, random_state=101 ) train_targets = train_targets.values test_targets = test_targets.values print("Sizes and class distributions for train/test data") print("Shape train_data {}".format(train_data.shape)) print("Shape test_data {}".format(test_data.shape)) print( "Train data number of 0s {} and 1s {}".format( np.sum(train_targets == 0), np.sum(train_targets == 1) ) ) print( "Test data number of 0s {} and 1s {}".format( np.sum(test_targets == 0), np.sum(test_targets == 1) ) ) train_targets.shape, test_targets.shape train_data.shape, test_data.shape ###Output _____no_output_____ ###Markdown We are going to use 745 nodes for training and 4226 nodes for testing. ###Code # choosing features to assign to a graph, excluding target variable node_features = node_data.drop(columns=["hate"]) ###Output _____no_output_____ ###Markdown Dealing with imbalanced dataBecause the training data exhibit high imbalance, we introduce class weights. ###Code from sklearn.utils.class_weight import compute_class_weight class_weights = compute_class_weight( "balanced", np.unique(train_targets), train_targets[:, 0] ) train_class_weights = dict(zip(np.unique(train_targets), class_weights)) train_class_weights ###Output _____no_output_____ ###Markdown Our data is now ready for machine learning.Node features are stored in the Pandas DataFrame `node_features`.The graph in networkx format is stored in the variable `g_nx`. Specify global parametersHere we specify some parameters that control the type of model we are going to use. For example, we specify the base model type, e.g., GCN, GraphSAGE, etc, as well as model-specific parameters. ###Code model_type = "graphsage" # Can be either gcn, gat, or graphsage if model_type == "graphsage": # For GraphSAGE model batch_size = 50 num_samples = [20, 10] epochs = 30 # The number of training epochs elif model_type == "gcn": # For GCN model epochs = 20 # The number of training epochs elif model_type == "gat": # For GAT model layer_sizes = [8, 1] attention_heads = 8 epochs = 20 # The number of training epochs ###Output _____no_output_____ ###Markdown Creating the base graph machine learning model in Keras Now create a `StellarGraph` object from the `NetworkX` graph and the node features and targets. It is `StellarGraph` objects that we use in this library to perform machine learning tasks on. ###Code G = sg.StellarGraph.from_networkx(g_nx, node_features=node_features) ###Output _____no_output_____ ###Markdown To feed data from the graph to the Keras model we need a generator. The generators are specialized to the model and the learning task. For training we map only the training nodes returned from our splitter and the target values. ###Code if model_type == "graphsage": generator = GraphSAGENodeGenerator(G, batch_size, num_samples) train_gen = generator.flow(train_data.index, train_targets, shuffle=True) elif model_type == "gcn": generator = FullBatchNodeGenerator(G, method="gcn", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) elif model_type == "gat": generator = FullBatchNodeGenerator(G, method="gat", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) ###Output _____no_output_____ ###Markdown Next we create the GNN model. We need to specify model-specific parameters based on whether we want to use GCN, GAT, or GraphSAGE. 
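Before building the model, it is worth confirming that the `StellarGraph` object has picked up both the retweet edges and the node features; `G.info()` prints a short summary (illustrative check).

###Code
print(G.info())
###Output
_____no_output_____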
###Code if model_type == "graphsage": base_model = GraphSAGE( layer_sizes=[32, 32], generator=generator, bias=True, dropout=0.5, ) x_inp, x_out = base_model.in_out_tensors() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gcn": base_model = GCN( layer_sizes=[32, 16], generator=generator, bias=True, dropout=0.5, activations=["elu", "elu"], ) x_inp, x_out = base_model.in_out_tensors() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gat": base_model = GAT( layer_sizes=layer_sizes, attn_heads=attention_heads, generator=generator, bias=True, in_dropout=0.5, attn_dropout=0.5, activations=["elu", "sigmoid"], normalize=None, ) x_inp, prediction = base_model.in_out_tensors() ###Output _____no_output_____ ###Markdown Create a Keras model Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `base_model` and outputs being the predictions from the softmax layer. ###Code model = Model(inputs=x_inp, outputs=prediction) ###Output _____no_output_____ ###Markdown We compile our Keras model to use the `Adam` optimiser and the binary cross entropy loss. ###Code model.compile( optimizer=optimizers.Adam(lr=0.005), loss=losses.binary_crossentropy, metrics=["acc"], ) model ###Output _____no_output_____ ###Markdown Train the model, keeping track of its loss and accuracy on the training set, and its performance on the test set during the training. We don't use the test set during training but only for measuring the trained model's generalization performance. ###Code test_gen = generator.flow(test_data.index, test_targets) ###Output _____no_output_____ ###Markdown Now we can train the model by calling the `fit` method. ###Code class_weight = None if model_type == "graphsage": class_weight = train_class_weights history = model.fit( train_gen, epochs=epochs, validation_data=test_gen, verbose=0, shuffle=False, class_weight=class_weight, ) sg.utils.plot_history(history) ###Output _____no_output_____ ###Markdown Model Evaluation Now we have trained the model, let's evaluate it on the test set.We are going to consider 4 evaluation metrics calculated on the test set: Accuracy, Area Under the ROC curve (AU-ROC), the ROC curve, and the confusion table. Accuracy ###Code test_metrics = model.evaluate(test_gen) print("\nTest Set Metrics:") for name, val in zip(model.metrics_names, test_metrics): print("\t{}: {:0.4f}".format(name, val)) ###Output Test Set Metrics: loss: 0.3424 acc: 0.8874 ###Markdown AU-ROCLet's use the trained GNN model to make a prediction for each node in the graph.Then, select only the predictions for the nodes in the test set and calculate the AU-ROC as another performance metric in addition to the accuracy shown above. ###Code all_nodes = node_data.index all_gen = generator.flow(all_nodes) all_predictions = model.predict(all_gen).squeeze()[..., np.newaxis] all_predictions.shape all_predictions_df = pd.DataFrame(all_predictions, index=node_data.index) ###Output _____no_output_____ ###Markdown Let's extract the predictions for the test data only. ###Code test_preds = all_predictions_df.loc[test_data.index, :] test_preds.shape ###Output _____no_output_____ ###Markdown The predictions are the probability of the true class that in this case is the probability of a user being hateful. 
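Accuracy and AUC can hide how each class is treated individually on such an imbalanced test set; per-class precision and recall at the default 0.5 cut-off give a fuller picture. An illustrative check using the test-set predictions extracted above:

###Code
from sklearn.metrics import classification_report

print(
    classification_report(
        test_targets[:, 0],
        (test_preds.values.ravel() > 0.5).astype(int),
        target_names=["normal", "hateful"],
    )
)
###Output
_____no_output_____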
###Code test_preds.head() test_predictions = test_preds.values test_predictions_class = ((test_predictions > 0.5) * 1).flatten() test_df = pd.DataFrame( { "Predicted_score": test_predictions.flatten(), "Predicted_class": test_predictions_class, "True": test_targets[:, 0], } ) roc_auc = metrics.roc_auc_score(test_df["True"].values, test_df["Predicted_score"].values) print("The AUC on test set:\n") print(roc_auc) ###Output The AUC on test set: 0.8845494008100929 ###Markdown Confusion table ###Code pd.crosstab(test_df["True"], test_df["Predicted_class"]) ###Output _____no_output_____ ###Markdown ROC curve ###Code fpr, tpr, thresholds = metrics.roc_curve( test_df["True"], test_df["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Visualisation of node embeddingsEvaluate node embeddings as activations of the output of one of the graph convolutional or aggregation layers in the Keras model, and visualise them, coloring nodes by their subject label.You can find the index of the layer of interest by calling `model.layers`. First, create a Keras model for calculating the embeddings ###Code model.layers if model_type == "graphsage": # For GraphSAGE, we are going to use the output activations # of the second GraphSAGE layer as the node embeddings # x_inp, prediction emb_model = Model(inputs=x_inp, outputs=model.layers[-4].output) emb = emb_model.predict(generator=all_gen,) elif model_type == "gcn": # For GCN, we are going to use the output activations of # the second GCN layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict(generator=all_gen) elif model_type == "gat": # For GAT, we are going to use the output activations of the # first Graph Attention layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict(generator=all_gen) emb.shape emb = emb.squeeze() if model_type == "graphsage": emb_all_df = pd.DataFrame(emb, index=node_data.index) elif model_type == "gcn" or model_type == "gat": emb_all_df = pd.DataFrame(emb, index=G.nodes()) ###Output _____no_output_____ ###Markdown Select the embeddings for the test set. We are only going to visualise the test set embeddings. 
###Code emb_test = emb_all_df.loc[test_data.index, :] ###Output _____no_output_____ ###Markdown Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their subject label ###Code X = emb_test y = test_targets X.shape transform = TSNE # or use PCA trans = transform(n_components=2) emb_transformed = pd.DataFrame(trans.fit_transform(X), index=test_data.index) emb_transformed["label"] = y alpha = 0.7 fig, ax = plt.subplots(figsize=(14, 8,)) ax.scatter( emb_transformed[0], emb_transformed[1], c=emb_transformed["label"].astype("category"), cmap="jet", alpha=alpha, ) ax.set(xlabel="$X_1$", ylabel="$X_2$") plt.title( "{} visualization of embeddings for tweeter dataset".format(transform.__name__), fontsize=24, ) plt.show() ###Output _____no_output_____ ###Markdown The node embeddings shown above indicate that the majority of hateful users tend to cluster together. However, some normal users are also in the same neighbourhood and these will be difficult to distinguish from hateful ones. Similarly, there is a small number of hateful users dispersed among normal users and these will also be difficult classify correctly. Predictions using Logistic RegressionFinally, we train a Logistic Regression model on the same train and test data but this time ignoring the graph structure and focusing entirely on the node features. The variables `train_data`, `test_data`, `train_targets`, and `test_targets`, hold the data we need to train the Logistic Regression classifier. ###Code lr = LogisticRegressionCV( cv=5, class_weight=class_weight, max_iter=10000 ) # Let's use the default parameters lr.fit(train_data, train_targets.ravel()) ###Output _____no_output_____ ###Markdown We can now use the trained model to predict the test data ###Code test_preds_lr = lr.predict_proba(test_data) test_preds_lr.shape ###Output _____no_output_____ ###Markdown Accuracy ###Code lr.score(test_data, test_targets) ###Output _____no_output_____ ###Markdown Calculate AU-ROC metric ###Code test_predictions_class_lr = ((test_preds_lr[:, 1] > 0.5) * 1).flatten() test_df_lr = pd.DataFrame( { "Predicted_score": test_preds_lr[:, 1].flatten(), "Predicted_class": test_predictions_class_lr, "True": test_targets[:, 0], } ) roc_auc_lr = metrics.roc_auc_score( test_df_lr["True"].values, test_df_lr["Predicted_score"].values ) print("The AUC on test set:\n") print(roc_auc_lr) ###Output The AUC on test set: 0.8090002175324247 ###Markdown The confusion table ###Code pd.crosstab(test_df_lr["True"], test_df_lr["Predicted_class"]) ###Output _____no_output_____ ###Markdown The ROC curve ###Code fpr_lr, tpr_lr, thresholds_lr = metrics.roc_curve( test_df_lr["True"], test_df_lr["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr_lr, tpr_lr, color="darkorange", lw=lw, label="LR ROC curve (area = %0.2f)" % roc_auc_lr, ) plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Let's have a closer look at the True Positive Rate for the GNN and Logistic Regression models at 2% False Positive Rate. 
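Before that, it is also worth summarising precision and recall at the default 0.5 threshold for both models. The sketch below uses scikit-learn's `classification_report` on the prediction frames built above.
###Code
# Precision/recall summary at the 0.5 threshold for both classifiers
print("GNN classifier:")
print(metrics.classification_report(test_df["True"], test_df["Predicted_class"], digits=3))
print("Logistic Regression classifier:")
print(metrics.classification_report(test_df_lr["True"], test_df_lr["Predicted_class"], digits=3))
###Output
_____no_output_____
###Markdown
And now the TPR comparison at a fixed 2% FPR.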
###Code print(
    "At 2% FPR, GNN TPR={:.3f}, LR TPR={:.3f}".format(
        np.interp(0.02, fpr, tpr), np.interp(0.02, fpr_lr, tpr_lr)
    )
)
###Output
At 2% FPR, GNN TPR=0.378, LR TPR=0.253
###Markdown
Darin's Notes
* This example uses tabular, dataset-style features rather than NLP vectors, which suggests that bag-of-words vectors are not required, unlike the CORA examples.
* Data: this notebook comes from an older version of StellarGraph and the dataset is no longer available, which is a pity because it demonstrates a different type of feature.
* PowerTransform removes the long tails, makes the data roughly Gaussian, and puts all features/attributes into the same range.
* This is definitely a good example of using node features on a tabular-data problem.

Detection of Twitter users who use hateful lexicon with graph machine learning
We consider the use-case of identifying hateful users on Twitter motivated by the work in [1] and using the dataset also published in [1]. Classification is based on a graph of users' retweets and on attributes related to their account activity and the content of their tweets. We pose identifying hateful users as a binary classification problem. We demonstrate the advantage of connected vs unconnected data in a semi-supervised setting with few training examples. For connected data, we use Graph Neural Network methods, GCN [2], GAT [3], and GraphSAGE [4] as implemented in the `stellargraph` library. We pose the problem of identifying hateful Twitter users as node attribute inference in graphs.
**References**
1. "Like Sheep Among Wolves": Characterizing Hateful Users on Twitter. M. H. Ribeiro, P. H. Calais, Y. A. Santos, V. A. F. Almeida, and W. Meira Jr. arXiv preprint arXiv:1801.00317 (2017).
2. Semi-Supervised Classification with Graph Convolutional Networks. T. Kipf, M. Welling. ICLR 2017. arXiv:1609.02907
3. Graph Attention Networks. P. Veličković et al. ICLR 2018
4. Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec. arXiv:1706.02216 [cs.SI], 2017.
###Code
# install StellarGraph if running on Google Colab
import sys
# if 'google.colab' in sys.modules:
#     %pip install -q stellargraph[demos]==1.3.0b

# verify that we're using the correct version of StellarGraph for this notebook
import stellargraph as sg
# try:
#     sg.utils.validate_notebook_version("1.3.0b")
# except AttributeError:
#     raise ValueError(
#         f"This notebook requires StellarGraph version 1.3.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>."
# ) from None sg.__version__ import networkx as nx import pandas as pd import numpy as np import seaborn as sns import itertools import os from sklearn.decomposition import PCA from sklearn.manifold import TSNE from sklearn.linear_model import LogisticRegressionCV import stellargraph as sg from stellargraph.mapper import GraphSAGENodeGenerator, FullBatchNodeGenerator from stellargraph.layer import GraphSAGE, GCN, GAT from stellargraph import globalvar from tensorflow.keras import layers, optimizers, losses, metrics, Model, models from sklearn import preprocessing, feature_extraction from sklearn.model_selection import train_test_split from sklearn import metrics import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline ###Output _____no_output_____ ###Markdown Loading the data **Downloading the dataset:**The dataset for this demo was published in [1] and it is freely available to download from Kaggle [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter/home).The following is the description of the datasets:>This dataset contains a network of 100k users, out of which ~5k were annotated as hateful or>not. For each user, several content-related, network-related and activity related features>were provided. Additional files of hateful lexicon can be found [here]( https://github.com/manoelhortaribeiro/HatefulUsersTwitter/tree/master/data/extra)Download the dataset and then set the `data_dir` variable to point to the download location. ###Code data_dir = os.path.expanduser("~/data/hateful-twitter-users") data_dir ###Output _____no_output_____ ###Markdown First load and prepare the node featuresEach node in the graph is associated with a large number of features (also referred to as attributes). The list of features is given [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter). We repeated here for convenience.hate :("hateful"|"normal"|"other") if user was annotated as hateful, normal, or not annotated. (is_50|is_50_2) :bool whether user was deleted up to 12/12/17 or 14/01/18. (is_63|is_63_2) :bool whether user was suspended up to 12/12/17 or 14/01/18. (hate|normal)_neigh :bool is the user on the neighborhood of a (hateful|normal) user? [c_] (statuses|follower|followees|favorites)_count :int number of (tweets|follower|followees|favorites) a user has. [c_] listed_count:int number of lists a user is in. [c_] (betweenness|eigenvector|in_degree|outdegree) :float centrality measurements for each user in the retweet graph. [c_] *_empath :float occurrences of empath categories in the users latest 200 tweets. [c_] *_glove :float glove vector calculated for users latest 200 tweets. [c_] (sentiment|subjectivity) :float average sentiment and subjectivity of users tweets. [c_] (time_diff|time_diff_median) :float average and median time difference between tweets. [c_] (tweet|retweet|quote) number :float percentage of direct tweets, retweets and quotes of an user. [c_] (number urls|number hashtags|baddies|mentions) :float number of bad words|mentions|urls|hashtags per tweet in average. [c_] status length :float average status length. hashtags :string all hashtags employed by the user separated by spaces. **Notice** that c_ are attributes calculated for the 1-neighborhood of a user in the retweet network (averaged out). First, we are going to load the user features and prepare them for machine learning. 
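The feature table is fairly wide (roughly 100k rows by about a thousand columns), so it can be convenient to peek at just a few rows first to confirm the file reads correctly. This is an optional sketch; the file name is the one used throughout this notebook.
###Code
# Optional: read only the first few rows to inspect the columns before loading everything
peek = pd.read_csv(os.path.join(data_dir, "users_neighborhood_anon.csv"), nrows=5)
print(peek.shape)
print(peek.columns[:10].tolist())
###Output
_____no_output_____
###Markdown
Now load the full feature table.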
###Code users_feat = pd.read_csv(os.path.join(data_dir, "users_neighborhood_anon.csv"))
users_feat.head()
###Output
_____no_output_____
###Markdown
Let's have a look at the distribution of hateful, normal (not hateful), and other (unknown) users in the dataset
###Code
print("Initial hateful/normal users distribution")
print(users_feat.shape)
print(users_feat.hate.value_counts())
###Output
Initial hateful/normal users distribution
(100386, 1039)
other 95415
normal 4427
hateful 544
Name: hate, dtype: int64
###Markdown
There is a clear imbalance in the number of users tagged as hateful vs normal and unknown.
Data cleaning and preprocessing
The dataset as given includes a large number of manually extracted, graph-related features. Since we are going to employ modern graph neural network methods for classification, we are going to drop these manually engineered features. The power of Graph Neural Networks stems from their ability to learn useful graph-related features, eliminating the need for manual feature engineering.
###Code
def data_cleaning(feat):
    feat = feat.drop(columns=["hate_neigh", "normal_neigh"])

    # Convert target values in hate column from strings to integers (0,1,2)
    feat["hate"] = np.where(
        feat["hate"] == "hateful", 1, np.where(feat["hate"] == "normal", 0, 2)
    )

    # missing information
    number_of_missing = feat.isnull().sum()
    number_of_missing[number_of_missing != 0]

    # Replace NA with 0
    feat.fillna(0, inplace=True)

    # dropping info about suspension and deletion as it should not be used in the predictive model
    feat.drop(feat.columns[feat.columns.str.contains("is_")], axis=1, inplace=True)

    # drop glove features
    feat.drop(feat.columns[feat.columns.str.contains("_glove")], axis=1, inplace=True)

    # drop c_ features
    feat.drop(feat.columns[feat.columns.str.contains("c_")], axis=1, inplace=True)

    # drop sentiment features for now
    feat.drop(feat.columns[feat.columns.str.contains("sentiment")], axis=1, inplace=True)

    # drop hashtag feature
    feat.drop(["hashtags"], axis=1, inplace=True)

    # Drop centrality-based measures
    feat.drop(
        columns=["betweenness", "eigenvector", "in_degree", "out_degree"], inplace=True
    )

    feat.drop(columns=["created_at"], inplace=True)

    return feat

node_data = data_cleaning(users_feat)
###Output
_____no_output_____
###Markdown
Of the original **1037** node features, we are keeping only **204** that are based on a user's attributes and tweet lexicon. We have removed any manually engineered graph features since the graph neural network algorithms we are going to use will automatically determine the best features to use during training.
###Code
node_data.shape
node_data.head()
###Output
_____no_output_____
###Markdown
The continuous features in our dataset have distributions with very long tails. We apply normalization to correct for this.
###Code
# Ignore the first two columns because those are user_id and hate (the target variable)
df_values = node_data.iloc[:, 2:].values
pt = preprocessing.PowerTransformer(method="yeo-johnson", standardize=True)
df_values_log = pt.fit_transform(df_values)
###Output
_____no_output_____
###Markdown
Let's have a look at one of the normalized features before and after the power transform was applied. The feature we are going to look at is a user's number of followers.
###Code sns_rc = {"lines.linewidth": 3, "figure.figsize": (12, 6)} sns.set_context("paper", rc=sns_rc) sns.set_style("whitegrid", {"axes.grid": False}) sns.kdeplot(df_values[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers, before Power Transform", fontsize=18) sns.kdeplot(df_values_log[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers after Power Transform", fontsize=18) ###Output _____no_output_____ ###Markdown Feature normalization looks like it is doing the right thing as the raw features have long tails that are eliminated after applying the power transform. So let us use the normalized features from now on. ###Code node_data.iloc[:, 2:] = df_values_log # Set the dataframe index to be the same as the user_id and drop the user_id columns node_data.index = node_data.index.map(str) node_data.drop(columns=["user_id"], inplace=True) ###Output _____no_output_____ ###Markdown Node features are now ready for machine learning. ###Code node_data.head() ###Output _____no_output_____ ###Markdown Next load the graphNow that we have the node features prepared for machine learning, let us load the retweet graph. ###Code g_nx = nx.read_edgelist(path=os.path.expanduser(os.path.join(data_dir, "users.edges"))) g_nx.number_of_nodes(), g_nx.number_of_edges() ###Output _____no_output_____ ###Markdown The graph has just over 100k nodes and approximately 2.2m edges.We aim to train a graph neural network model that will predict the "hate"attribute on the nodes.For computation convenience, we have mapped the target labels **normal**, **hateful**, and **other** to the numeric values **0**, **1**, and **2** respectively. ###Code print(set(node_data["hate"])) ###Output {0, 1, 2} ###Markdown Splitting the data For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to split our data into training and test sets.The total number of annotated nodes is very small when compared to the total number of nodes in the graph. We are only going to use 15% of the annotated nodes for training and the remaining 85% of nodes for testing.First, we are going to select the subset of nodes that are annotated as hateful or normal. These will be the nodes that have 'hate' values that are either 0 or 1. ###Code # choose the nodes annotated with normal or hateful classes annotated_users = node_data[node_data["hate"] != 2] annotated_users.head() annotated_users.shape annotated_user_features = annotated_users.drop(columns=["hate"]) annotated_user_targets = annotated_users[["hate"]] ###Output _____no_output_____ ###Markdown There are 4971 annoted nodes out of a possible, approximately, 100k nodes. 
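Before splitting, it is worth confirming that the annotated users actually appear as nodes in the retweet graph, since the features and the edge list come from separate files. A minimal, optional check (assuming the string node ids set up above):
###Code
# How many annotated users are present in the retweet graph?
graph_node_ids = set(g_nx.nodes())
annotated_ids = set(annotated_users.index)
print("annotated users found in graph:", len(annotated_ids & graph_node_ids))
print("annotated users missing from graph:", len(annotated_ids - graph_node_ids))
###Output
_____no_output_____
###Markdown
Next, check the class balance of the annotated users and split them into training and test sets.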
###Code print(annotated_user_targets.hate.value_counts()) # split the data train_data, test_data, train_targets, test_targets = train_test_split( annotated_user_features, annotated_user_targets, test_size=0.85, random_state=101 ) train_targets = train_targets.values test_targets = test_targets.values print("Sizes and class distributions for train/test data") print("Shape train_data {}".format(train_data.shape)) print("Shape test_data {}".format(test_data.shape)) print( "Train data number of 0s {} and 1s {}".format( np.sum(train_targets == 0), np.sum(train_targets == 1) ) ) print( "Test data number of 0s {} and 1s {}".format( np.sum(test_targets == 0), np.sum(test_targets == 1) ) ) train_targets.shape, test_targets.shape train_data.shape, test_data.shape ###Output _____no_output_____ ###Markdown We are going to use 745 nodes for training and 4226 nodes for testing. ###Code # choosing features to assign to a graph, excluding target variable node_features = node_data.drop(columns=["hate"]) ###Output _____no_output_____ ###Markdown Dealing with imbalanced dataBecause the training data exhibit high imbalance, we introduce class weights. ###Code from sklearn.utils.class_weight import compute_class_weight class_weights = compute_class_weight( "balanced", np.unique(train_targets), train_targets[:, 0] ) train_class_weights = dict(zip(np.unique(train_targets), class_weights)) train_class_weights ###Output _____no_output_____ ###Markdown Our data is now ready for machine learning.Node features are stored in the Pandas DataFrame `node_features`.The graph in networkx format is stored in the variable `g_nx`. Specify global parametersHere we specify some parameters that control the type of model we are going to use. For example, we specify the base model type, e.g., GCN, GraphSAGE, etc, as well as model-specific parameters. ###Code model_type = "graphsage" # Can be either gcn, gat, or graphsage if model_type == "graphsage": # For GraphSAGE model batch_size = 50 num_samples = [20, 10] epochs = 30 # The number of training epochs elif model_type == "gcn": # For GCN model epochs = 20 # The number of training epochs elif model_type == "gat": # For GAT model layer_sizes = [8, 1] attention_heads = 8 epochs = 20 # The number of training epochs ###Output _____no_output_____ ###Markdown Creating the base graph machine learning model in Keras Now create a `StellarGraph` object from the `NetworkX` graph and the node features and targets. It is `StellarGraph` objects that we use in this library to perform machine learning tasks on. ###Code G = sg.StellarGraph.from_networkx(g_nx, node_features=node_features) ###Output _____no_output_____ ###Markdown To feed data from the graph to the Keras model we need a generator. The generators are specialized to the model and the learning task. For training we map only the training nodes returned from our splitter and the target values. ###Code if model_type == "graphsage": generator = GraphSAGENodeGenerator(G, batch_size, num_samples) train_gen = generator.flow(train_data.index, train_targets, shuffle=True) elif model_type == "gcn": generator = FullBatchNodeGenerator(G, method="gcn", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) elif model_type == "gat": generator = FullBatchNodeGenerator(G, method="gat", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) ###Output _____no_output_____ ###Markdown Next we create the GNN model. We need to specify model-specific parameters based on whether we want to use GCN, GAT, or GraphSAGE. 
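One small aside before building the model: the "balanced" class weights computed earlier follow a simple formula, n_samples / (n_classes * count_per_class). The sketch below reproduces them by hand (assuming the binary 0/1 labels used in this notebook).
###Code
# Reproduce sklearn's "balanced" class weights by hand
labels = train_targets[:, 0].astype(int)
counts = np.bincount(labels)                    # number of 0s and 1s in the training set
manual_weights = len(labels) / (2.0 * counts)   # n_samples / (n_classes * count_per_class)
print(dict(zip([0, 1], manual_weights)))
print(train_class_weights)                      # should match the values above (up to key types)
###Output
_____no_output_____
###Markdown
Now, the GNN model itself.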
###Code if model_type == "graphsage": base_model = GraphSAGE( layer_sizes=[32, 32], generator=generator, bias=True, dropout=0.5, ) x_inp, x_out = base_model.in_out_tensors() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gcn": base_model = GCN( layer_sizes=[32, 16], generator=generator, bias=True, dropout=0.5, activations=["elu", "elu"], ) x_inp, x_out = base_model.in_out_tensors() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gat": base_model = GAT( layer_sizes=layer_sizes, attn_heads=attention_heads, generator=generator, bias=True, in_dropout=0.5, attn_dropout=0.5, activations=["elu", "sigmoid"], normalize=None, ) x_inp, prediction = base_model.in_out_tensors() ###Output _____no_output_____ ###Markdown Create a Keras model Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `base_model` and outputs being the predictions from the softmax layer. ###Code model = Model(inputs=x_inp, outputs=prediction) ###Output _____no_output_____ ###Markdown We compile our Keras model to use the `Adam` optimiser and the binary cross entropy loss. ###Code model.compile( optimizer=optimizers.Adam(lr=0.005), loss=losses.binary_crossentropy, metrics=["acc"], ) model ###Output _____no_output_____ ###Markdown Train the model, keeping track of its loss and accuracy on the training set, and its performance on the test set during the training. We don't use the test set during training but only for measuring the trained model's generalization performance. ###Code test_gen = generator.flow(test_data.index, test_targets) ###Output _____no_output_____ ###Markdown Now we can train the model by calling the `fit` method. ###Code class_weight = None if model_type == "graphsage": class_weight = train_class_weights history = model.fit( train_gen, epochs=epochs, validation_data=test_gen, verbose=0, shuffle=False, class_weight=class_weight, ) sg.utils.plot_history(history) ###Output _____no_output_____ ###Markdown Model Evaluation Now we have trained the model, let's evaluate it on the test set.We are going to consider 4 evaluation metrics calculated on the test set: Accuracy, Area Under the ROC curve (AU-ROC), the ROC curve, and the confusion table. Accuracy ###Code test_metrics = model.evaluate(test_gen) print("\nTest Set Metrics:") for name, val in zip(model.metrics_names, test_metrics): print("\t{}: {:0.4f}".format(name, val)) ###Output Test Set Metrics: loss: 0.3424 acc: 0.8874 ###Markdown AU-ROCLet's use the trained GNN model to make a prediction for each node in the graph.Then, select only the predictions for the nodes in the test set and calculate the AU-ROC as another performance metric in addition to the accuracy shown above. ###Code all_nodes = node_data.index all_gen = generator.flow(all_nodes) all_predictions = model.predict(all_gen).squeeze()[..., np.newaxis] all_predictions.shape all_predictions_df = pd.DataFrame(all_predictions, index=node_data.index) ###Output _____no_output_____ ###Markdown Let's extract the predictions for the test data only. ###Code test_preds = all_predictions_df.loc[test_data.index, :] test_preds.shape ###Output _____no_output_____ ###Markdown The predictions are the probability of the true class that in this case is the probability of a user being hateful. 
###Code test_preds.head() test_predictions = test_preds.values test_predictions_class = ((test_predictions > 0.5) * 1).flatten() test_df = pd.DataFrame( { "Predicted_score": test_predictions.flatten(), "Predicted_class": test_predictions_class, "True": test_targets[:, 0], } ) roc_auc = metrics.roc_auc_score(test_df["True"].values, test_df["Predicted_score"].values) print("The AUC on test set:\n") print(roc_auc) ###Output The AUC on test set: 0.8845494008100929 ###Markdown Confusion table ###Code pd.crosstab(test_df["True"], test_df["Predicted_class"]) ###Output _____no_output_____ ###Markdown ROC curve ###Code fpr, tpr, thresholds = metrics.roc_curve( test_df["True"], test_df["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Visualisation of node embeddingsEvaluate node embeddings as activations of the output of one of the graph convolutional or aggregation layers in the Keras model, and visualise them, coloring nodes by their subject label.You can find the index of the layer of interest by calling `model.layers`. First, create a Keras model for calculating the embeddings ###Code model.layers if model_type == "graphsage": # For GraphSAGE, we are going to use the output activations # of the second GraphSAGE layer as the node embeddings # x_inp, prediction emb_model = Model(inputs=x_inp, outputs=model.layers[-4].output) emb = emb_model.predict(generator=all_gen,) elif model_type == "gcn": # For GCN, we are going to use the output activations of # the second GCN layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict(generator=all_gen) elif model_type == "gat": # For GAT, we are going to use the output activations of the # first Graph Attention layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict(generator=all_gen) emb.shape emb = emb.squeeze() if model_type == "graphsage": emb_all_df = pd.DataFrame(emb, index=node_data.index) elif model_type == "gcn" or model_type == "gat": emb_all_df = pd.DataFrame(emb, index=G.nodes()) ###Output _____no_output_____ ###Markdown Select the embeddings for the test set. We are only going to visualise the test set embeddings. 
###Code emb_test = emb_all_df.loc[test_data.index, :] ###Output _____no_output_____ ###Markdown Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their subject label ###Code X = emb_test y = test_targets X.shape transform = TSNE # or use PCA trans = transform(n_components=2) emb_transformed = pd.DataFrame(trans.fit_transform(X), index=test_data.index) emb_transformed["label"] = y alpha = 0.7 fig, ax = plt.subplots(figsize=(14, 8,)) ax.scatter( emb_transformed[0], emb_transformed[1], c=emb_transformed["label"].astype("category"), cmap="jet", alpha=alpha, ) ax.set(xlabel="$X_1$", ylabel="$X_2$") plt.title( "{} visualization of embeddings for tweeter dataset".format(transform.__name__), fontsize=24, ) plt.show() ###Output _____no_output_____ ###Markdown The node embeddings shown above indicate that the majority of hateful users tend to cluster together. However, some normal users are also in the same neighbourhood and these will be difficult to distinguish from hateful ones. Similarly, there is a small number of hateful users dispersed among normal users and these will also be difficult classify correctly. Predictions using Logistic RegressionFinally, we train a Logistic Regression model on the same train and test data but this time ignoring the graph structure and focusing entirely on the node features. The variables `train_data`, `test_data`, `train_targets`, and `test_targets`, hold the data we need to train the Logistic Regression classifier. ###Code lr = LogisticRegressionCV( cv=5, class_weight=class_weight, max_iter=10000 ) # Let's use the default parameters lr.fit(train_data, train_targets.ravel()) ###Output _____no_output_____ ###Markdown We can now use the trained model to predict the test data ###Code test_preds_lr = lr.predict_proba(test_data) test_preds_lr.shape ###Output _____no_output_____ ###Markdown Accuracy ###Code lr.score(test_data, test_targets) ###Output _____no_output_____ ###Markdown Calculate AU-ROC metric ###Code test_predictions_class_lr = ((test_preds_lr[:, 1] > 0.5) * 1).flatten() test_df_lr = pd.DataFrame( { "Predicted_score": test_preds_lr[:, 1].flatten(), "Predicted_class": test_predictions_class_lr, "True": test_targets[:, 0], } ) roc_auc_lr = metrics.roc_auc_score( test_df_lr["True"].values, test_df_lr["Predicted_score"].values ) print("The AUC on test set:\n") print(roc_auc_lr) ###Output The AUC on test set: 0.8090002175324247 ###Markdown The confusion table ###Code pd.crosstab(test_df_lr["True"], test_df_lr["Predicted_class"]) ###Output _____no_output_____ ###Markdown The ROC curve ###Code fpr_lr, tpr_lr, thresholds_lr = metrics.roc_curve( test_df_lr["True"], test_df_lr["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr_lr, tpr_lr, color="darkorange", lw=lw, label="LR ROC curve (area = %0.2f)" % roc_auc_lr, ) plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Let's have a closer look at the True Positive Rate for the GNN and Logistic Regression models at 2% False Positive Rate. 
###Code print(
    "At 2% FPR, GNN TPR={:.3f}, LR TPR={:.3f}".format(
        np.interp(0.02, fpr, tpr), np.interp(0.02, fpr_lr, tpr_lr)
    )
)
###Output
At 2% FPR, GNN TPR=0.378, LR TPR=0.253
###Markdown
Detection of Twitter users who use hateful lexicon using graph machine learning with Stellargraph
We consider the use-case of identifying hateful users on Twitter motivated by the work in [1] and using the dataset also published in [1]. Classification is based on a graph of users' retweets and on attributes related to their account activity and the content of their tweets. We pose identifying hateful users as a binary classification problem. We demonstrate the advantage of connected vs unconnected data in a semi-supervised setting with few training examples. For connected data, we use Graph Neural Network methods, GCN [2], GAT [3], and GraphSAGE [4] as implemented in the `stellargraph` library. We pose the problem of identifying hateful Twitter users as node attribute inference in graphs.
**References**
1. "Like Sheep Among Wolves": Characterizing Hateful Users on Twitter. M. H. Ribeiro, P. H. Calais, Y. A. Santos, V. A. F. Almeida, and W. Meira Jr. arXiv preprint arXiv:1801.00317 (2017).
2. Semi-Supervised Classification with Graph Convolutional Networks. T. Kipf, M. Welling. ICLR 2017. arXiv:1609.02907
3. Graph Attention Networks. P. Veličković et al. ICLR 2018
4. Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec. arXiv:1706.02216 [cs.SI], 2017.
###Code
import networkx as nx
import pandas as pd
import numpy as np
import seaborn as sns
import itertools
import os

from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.linear_model import LogisticRegressionCV

import stellargraph as sg
from stellargraph.mapper import GraphSAGENodeGenerator, FullBatchNodeGenerator
from stellargraph.layer import GraphSAGE, GCN, GAT
from stellargraph import globalvar

from tensorflow.keras import layers, optimizers, losses, metrics, Model, models
from sklearn import preprocessing, feature_extraction
from sklearn.model_selection import train_test_split
from sklearn import metrics

import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

import matplotlib.pyplot as plt
%matplotlib inline

def remove_prefix(text, prefix):
    return text[text.startswith(prefix) and len(prefix):]

def plot_history(history):
    metrics = sorted(set([remove_prefix(m, "val_") for m in list(history.history.keys())]))
    for m in metrics:
        # summarize history for metric m
        plt.plot(history.history[m])
        plt.plot(history.history['val_' + m])
        plt.title(m, fontsize=18)
        plt.ylabel(m, fontsize=18)
        plt.xlabel('epoch', fontsize=18)
        plt.legend(['train', 'validation'], loc='best')
        plt.show()
###Output
_____no_output_____
###Markdown
Loading the data
**Downloading the dataset:** The dataset for this demo was published in [1] and it is freely available to download from Kaggle [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter/home).
The following is the description of the datasets:
>This dataset contains a network of 100k users, out of which ~5k were annotated as hateful or not. For each user, several content-related, network-related and activity-related features were provided.
Additional files of hateful lexicon can be found [here](https://github.com/manoelhortaribeiro/HatefulUsersTwitter/tree/master/data/extra)
Download the dataset and then set the `data_dir` variable to point to the download location.
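An optional sketch to verify the download before going further; the directory used here is just the default path set below, and the two file names are the ones this notebook reads.
###Code
# Optional: check that the expected files are present before loading anything
candidate_dir = os.path.expanduser("~/data/hateful-twitter-users")
for fname in ["users_neighborhood_anon.csv", "users.edges"]:
    print(fname, "found:", os.path.exists(os.path.join(candidate_dir, fname)))
###Output
_____no_output_____
###Markdown
Once the files are in place, point `data_dir` at that location.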
###Code data_dir = os.path.expanduser("~/data/hateful-twitter-users") ###Output _____no_output_____ ###Markdown First load and prepare the node featuresEach node in the graph is associated with a large number of features (also referred to as attributes). The list of features is given [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter). We repeated here for convenience.hate :("hateful"|"normal"|"other") if user was annotated as hateful, normal, or not annotated. (is_50|is_50_2) :bool whether user was deleted up to 12/12/17 or 14/01/18. (is_63|is_63_2) :bool whether user was suspended up to 12/12/17 or 14/01/18. (hate|normal)_neigh :bool is the user on the neighborhood of a (hateful|normal) user? [c_] (statuses|follower|followees|favorites)_count :int number of (tweets|follower|followees|favorites) a user has. [c_] listed_count:int number of lists a user is in. [c_] (betweenness|eigenvector|in_degree|outdegree) :float centrality measurements for each user in the retweet graph. [c_] *_empath :float occurrences of empath categories in the users latest 200 tweets. [c_] *_glove :float glove vector calculated for users latest 200 tweets. [c_] (sentiment|subjectivity) :float average sentiment and subjectivity of users tweets. [c_] (time_diff|time_diff_median) :float average and median time difference between tweets. [c_] (tweet|retweet|quote) number :float percentage of direct tweets, retweets and quotes of an user. [c_] (number urls|number hashtags|baddies|mentions) :float number of bad words|mentions|urls|hashtags per tweet in average. [c_] status length :float average status length. hashtags :string all hashtags employed by the user separated by spaces. **Notice** that c_ are attributes calculated for the 1-neighborhood of a user in the retweet network (averaged out). First, we are going to load the user features and prepare them for machine learning. ###Code users_feat = pd.read_csv(os.path.join(data_dir, "users_neighborhood_anon.csv")) users_feat.head() ###Output _____no_output_____ ###Markdown Let's have a look at the distribution of hateful, normal (not hateful), and other (unknown) users in the dataset ###Code print("Initial hateful/normal users distribution") print(users_feat.shape) print(users_feat.hate.value_counts()) ###Output Initial hateful/normal users distribution (100386, 1039) other 95415 normal 4427 hateful 544 Name: hate, dtype: int64 ###Markdown There is a clear imbalance on the number of users tagged as hateful vs normal and unknown. Data cleaning and preprocessing The dataset as given includes a large number of graph related features that are manually extracted. Since we are going to employ modern graph neural networks methods for classification, we are going to drop these manually engineered features. The power of Graph Neural Networks stems from their ability to learn useful graph-related features eliminating the need for manual feature engineering. 
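To see how much of the table these engineered columns account for, the optional sketch below counts the columns matching the prefixes that the cleaning function is about to drop.
###Code
# Count the feature groups that the cleaning step below removes
for pattern in ["_glove", "c_", "is_", "sentiment"]:
    n_cols = users_feat.columns.str.contains(pattern).sum()
    print("columns matching '{}': {}".format(pattern, n_cols))
###Output
_____no_output_____
###Markdown
The full cleaning routine is below.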
###Code def data_cleaning(feat): feat = feat.drop(columns=["hate_neigh", "normal_neigh"]) # Convert target values in hate column from strings to integers (0,1,2) feat["hate"] = np.where( feat["hate"] == "hateful", 1, np.where(feat["hate"] == "normal", 0, 2) ) # missing information number_of_missing = feat.isnull().sum() number_of_missing[number_of_missing != 0] # Replace NA with 0 feat.fillna(0, inplace=True) # droping info about suspension and deletion as it is should not be use din the predictive model feat.drop(feat.columns[feat.columns.str.contains("is_")], axis=1, inplace=True) # drop glove features feat.drop(feat.columns[feat.columns.str.contains("_glove")], axis=1, inplace=True) # drop c_ features feat.drop(feat.columns[feat.columns.str.contains("c_")], axis=1, inplace=True) # drop sentiment features for now feat.drop(feat.columns[feat.columns.str.contains("sentiment")], axis=1, inplace=True) # drop hashtag feature feat.drop(["hashtags"], axis=1, inplace=True) # Drop centrality based measures feat.drop( columns=["betweenness", "eigenvector", "in_degree", "out_degree"], inplace=True ) feat.drop(columns=["created_at"], inplace=True) return feat node_data = data_cleaning(users_feat) ###Output _____no_output_____ ###Markdown Of the original **1037** node features, we are keeping only **204** that are based on a user's attributes and tweet lexicon. We have removed any manually engineered graph features since the graph neural network algorithms we are going to use will automatically determine the best features to use during training. ###Code node_data.shape node_data.head() ###Output _____no_output_____ ###Markdown The continous features in our dataset have distributions with very long tails. We apply normalization to correct for this. ###Code # Ignore the first two columns because those are user_id and hate (the target variable) df_values = node_data.iloc[:, 2:].values pt = preprocessing.PowerTransformer(method="yeo-johnson", standardize=True) df_values_log = pt.fit_transform(df_values) ###Output _____no_output_____ ###Markdown Let's have a look at one of the normalized features before and after the power transform was applied.The feature we are going to look at is a user's number of followers. ###Code sns_rc = {"lines.linewidth": 3, "figure.figsize": (12, 6)} sns.set_context("paper", rc=sns_rc) sns.set_style("whitegrid", {"axes.grid": False}) sns.kdeplot(df_values[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers, before Power Transform", fontsize=18) sns.kdeplot(df_values_log[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers after Power Transform", fontsize=18) ###Output _____no_output_____ ###Markdown Feature normalization looks like it is doing the right thing as the raw features have long tails that are eliminated after applying the power transform. So let us use the normalized features from now on. ###Code node_data.iloc[:, 2:] = df_values_log # Set the dataframe index to be the same as the user_id and drop the user_id columns node_data.index = node_data.index.map(str) node_data.drop(columns=["user_id"], inplace=True) ###Output _____no_output_____ ###Markdown Node features are now ready for machine learning. ###Code node_data.head() ###Output _____no_output_____ ###Markdown Next load the graphNow that we have the node features prepared for machine learning, let us load the retweet graph. 
###Code g_nx = nx.read_edgelist(path=os.path.expanduser(os.path.join(data_dir, "users.edges"))) g_nx.number_of_nodes(), g_nx.number_of_edges() ###Output _____no_output_____ ###Markdown The graph has just over 100k nodes and approximately 2.2m edges.We aim to train a graph neural network model that will predict the "hate"attribute on the nodes.For computation convenience, we have mapped the target labels **normal**, **hateful**, and **other** to the numeric values **0**, **1**, and **2** respectively. ###Code print(set(node_data["hate"])) ###Output {0, 1, 2} ###Markdown Splitting the data For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to split our data into training and test sets.The total number of annotated nodes is very small when compared to the total number of nodes in the graph. We are only going to use 15% of the annotated nodes for training and the remaining 85% of nodes for testing.First, we are going to select the subset of nodes that are annotated as hateful or normal. These will be the nodes that have 'hate' values that are either 0 or 1. ###Code # choose the nodes annotated with normal or hateful classes annotated_users = node_data[node_data["hate"] != 2] annotated_users.head() annotated_users.shape annotated_user_features = annotated_users.drop(columns=["hate"]) annotated_user_targets = annotated_users[["hate"]] ###Output _____no_output_____ ###Markdown There are 4971 annoted nodes out of a possible, approximately, 100k nodes. ###Code print(annotated_user_targets.hate.value_counts()) # split the data train_data, test_data, train_targets, test_targets = train_test_split( annotated_user_features, annotated_user_targets, test_size=0.85, random_state=101 ) train_targets = train_targets.values test_targets = test_targets.values print("Sizes and class distributions for train/test data") print("Shape train_data {}".format(train_data.shape)) print("Shape test_data {}".format(test_data.shape)) print( "Train data number of 0s {} and 1s {}".format( np.sum(train_targets == 0), np.sum(train_targets == 1) ) ) print( "Test data number of 0s {} and 1s {}".format( np.sum(test_targets == 0), np.sum(test_targets == 1) ) ) train_targets.shape, test_targets.shape train_data.shape, test_data.shape ###Output _____no_output_____ ###Markdown We are going to use 745 nodes for training and 4226 nodes for testing. ###Code # choosing features to assign to a graph, excluding target variable node_features = node_data.drop(columns=["hate"]) ###Output _____no_output_____ ###Markdown Dealing with imbalanced dataBecause the training data exhibit high imbalance, we introduce class weights. ###Code from sklearn.utils.class_weight import compute_class_weight class_weights = compute_class_weight( "balanced", np.unique(train_targets), train_targets[:, 0] ) train_class_weights = dict(zip(np.unique(train_targets), class_weights)) train_class_weights ###Output _____no_output_____ ###Markdown Our data is now ready for machine learning.Node features are stored in the Pandas DataFrame `node_features`.The graph in networkx format is stored in the variable `g_nx`. Specify global parametersHere we specify some parameters that control the type of model we are going to use. For example, we specify the base model type, e.g., GCN, GraphSAGE, etc, as well as model-specific parameters. 
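For GraphSAGE, the `num_samples` list controls how many neighbours are sampled at each hop, and therefore how many nodes each prediction can draw on. With the values chosen below (batch_size=50, num_samples=[20, 10]), each head node is backed by at most 1 + 20 + 20*10 = 221 sampled nodes; a tiny sketch of that arithmetic:
###Code
# Upper bound on the receptive field per head node for the GraphSAGE settings used below
num_samples_sketch = [20, 10]  # neighbours sampled at hop 1 and hop 2
receptive_field = 1 + num_samples_sketch[0] + num_samples_sketch[0] * num_samples_sketch[1]
print(receptive_field)  # 221 nodes per head node, so a batch of 50 touches at most 50 * 221 sampled nodes
###Output
_____no_output_____
###Markdown
The parameter values used in this notebook: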
###Code model_type = "graphsage" # Can be either gcn, gat, or graphsage if model_type == "graphsage": # For GraphSAGE model batch_size = 50 num_samples = [20, 10] epochs = 30 # The number of training epochs elif model_type == "gcn": # For GCN model epochs = 20 # The number of training epochs elif model_type == "gat": # For GAT model layer_sizes = [8, 1] attention_heads = 8 epochs = 20 # The number of training epochs ###Output _____no_output_____ ###Markdown Creating the base graph machine learning model in Keras Now create a `StellarGraph` object from the `NetworkX` graph and the node features and targets. It is `StellarGraph` objects that we use in this library to perform machine learning tasks on. ###Code G = sg.StellarGraph(g_nx, node_features=node_features) ###Output _____no_output_____ ###Markdown To feed data from the graph to the Keras model we need a generator. The generators are specialized to the model and the learning task. For training we map only the training nodes returned from our splitter and the target values. ###Code if model_type == "graphsage": generator = GraphSAGENodeGenerator(G, batch_size, num_samples) train_gen = generator.flow(train_data.index, train_targets, shuffle=True) elif model_type == "gcn": generator = FullBatchNodeGenerator(G, method="gcn", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) elif model_type == "gat": generator = FullBatchNodeGenerator(G, method="gat", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) ###Output _____no_output_____ ###Markdown Next we create the GNN model. We need to specify model-specific parameters based on whether we want to use GCN, GAT, or GraphSAGE. ###Code if model_type == "graphsage": base_model = GraphSAGE( layer_sizes=[32, 32], generator=train_gen, bias=True, dropout=0.5, ) x_inp, x_out = base_model.default_model(flatten_output=True) prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gcn": base_model = GCN( layer_sizes=[32, 16], generator=generator, bias=True, dropout=0.5, activations=["elu", "elu"], ) x_inp, x_out = base_model.build() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gat": base_model = GAT( layer_sizes=layer_sizes, attn_heads=attention_heads, generator=generator, bias=True, in_dropout=0.5, attn_dropout=0.5, activations=["elu", "sigmoid"], normalize=None, ) x_inp, prediction = base_model.build() ###Output _____no_output_____ ###Markdown Create a Keras model Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `base_model` and outputs being the predictions from the softmax layer. ###Code model = Model(inputs=x_inp, outputs=prediction) ###Output _____no_output_____ ###Markdown We compile our Keras model to use the `Adam` optimiser and the binary cross entropy loss. ###Code model.compile( optimizer=optimizers.Adam(lr=0.005), loss=losses.binary_crossentropy, metrics=["acc"], ) model ###Output _____no_output_____ ###Markdown Train the model, keeping track of its loss and accuracy on the training set, and its performance on the test set during the training. We don't use the test set during training but only for measuring the trained model's generalization performance. ###Code test_gen = generator.flow(test_data.index, test_targets) ###Output _____no_output_____ ###Markdown Now we can train the model by calling the `fit_generator` method. 
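If you want to guard against overfitting over the 30 training epochs, a Keras `EarlyStopping` callback can be passed to `fit_generator`. This is optional and not used below; the sketch just shows how such a callback would be configured.
###Code
from tensorflow.keras.callbacks import EarlyStopping

# Optional: stop training when the validation loss stops improving
# (pass callbacks=[early_stop] to fit_generator below to enable it)
early_stop = EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)
###Output
_____no_output_____
###Markdown
The training call as used in this notebook: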
###Code class_weight = None if model_type == "graphsage": class_weight = train_class_weights history = model.fit_generator( train_gen, epochs=epochs, validation_data=test_gen, verbose=0, shuffle=False, class_weight=class_weight, ) plot_history(history) ###Output _____no_output_____ ###Markdown Model Evaluation Now we have trained the model, let's evaluate it on the test set.We are going to consider 4 evaluation metrics calculated on the test set: Accuracy, Area Under the ROC curve (AU-ROC), the ROC curve, and the confusion table. Accuracy ###Code test_metrics = model.evaluate_generator(test_gen) print("\nTest Set Metrics:") for name, val in zip(model.metrics_names, test_metrics): print("\t{}: {:0.4f}".format(name, val)) ###Output Test Set Metrics: loss: 0.3424 acc: 0.8874 ###Markdown AU-ROCLet's use the trained GNN model to make a prediction for each node in the graph.Then, select only the predictions for the nodes in the test set and calculate the AU-ROC as another performance metric in addition to the accuracy shown above. ###Code all_nodes = node_data.index all_gen = generator.flow(all_nodes) all_predictions = model.predict_generator(all_gen).squeeze()[..., np.newaxis] all_predictions.shape all_predictions_df = pd.DataFrame(all_predictions, index=node_data.index) ###Output _____no_output_____ ###Markdown Let's extract the predictions for the test data only. ###Code test_preds = all_predictions_df.loc[test_data.index, :] test_preds.shape ###Output _____no_output_____ ###Markdown The predictions are the probability of the true class that in this case is the probability of a user being hateful. ###Code test_preds.head() test_predictions = test_preds.values test_predictions_class = ((test_predictions > 0.5) * 1).flatten() test_df = pd.DataFrame( { "Predicted_score": test_predictions.flatten(), "Predicted_class": test_predictions_class, "True": test_targets[:, 0], } ) roc_auc = metrics.roc_auc_score(test_df["True"].values, test_df["Predicted_score"].values) print("The AUC on test set:\n") print(roc_auc) ###Output The AUC on test set: 0.8845494008100929 ###Markdown Confusion table ###Code pd.crosstab(test_df["True"], test_df["Predicted_class"]) ###Output _____no_output_____ ###Markdown ROC curve ###Code fpr, tpr, thresholds = metrics.roc_curve( test_df["True"], test_df["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Visualisation of node embeddingsEvaluate node embeddings as activations of the output of one of the graph convolutional or aggregation layers in the Keras model, and visualise them, coloring nodes by their subject label.You can find the index of the layer of interest by calling `model.layers`. 
First, create a Keras model for calculating the embeddings ###Code model.layers if model_type == "graphsage": # For GraphSAGE, we are going to use the output activations # of the second GraphSAGE layer as the node embeddings # x_inp, prediction emb_model = Model(inputs=x_inp, outputs=model.layers[-4].output) emb = emb_model.predict_generator(generator=all_gen,) elif model_type == "gcn": # For GCN, we are going to use the output activations of # the second GCN layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict_generator(generator=all_gen) elif model_type == "gat": # For GAT, we are going to use the output activations of the # first Graph Attention layer as the node embeddings emb_model = Model(inputs=x_inp, outputs=model.layers[6].output) emb = emb_model.predict_generator(generator=all_gen) emb.shape emb = emb.squeeze() if model_type == "graphsage": emb_all_df = pd.DataFrame(emb, index=node_data.index) elif model_type == "gcn" or model_type == "gat": emb_all_df = pd.DataFrame(emb, index=G.nodes()) ###Output _____no_output_____ ###Markdown Select the embeddings for the test set. We are only going to visualise the test set embeddings. ###Code emb_test = emb_all_df.loc[test_data.index, :] ###Output _____no_output_____ ###Markdown Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their subject label ###Code X = emb_test y = test_targets X.shape transform = TSNE # or use PCA trans = transform(n_components=2) emb_transformed = pd.DataFrame(trans.fit_transform(X), index=test_data.index) emb_transformed["label"] = y alpha = 0.7 fig, ax = plt.subplots(figsize=(14, 8,)) ax.scatter( emb_transformed[0], emb_transformed[1], c=emb_transformed["label"].astype("category"), cmap="jet", alpha=alpha, ) ax.set(xlabel="$X_1$", ylabel="$X_2$") plt.title( "{} visualization of embeddings for tweeter dataset".format(transform.__name__), fontsize=24, ) plt.show() ###Output _____no_output_____ ###Markdown The node embeddings shown above indicate that the majority of hateful users tend to cluster together. However, some normal users are also in the same neighbourhood and these will be difficult to distinguish from hateful ones. Similarly, there is a small number of hateful users dispersed among normal users and these will also be difficult classify correctly. Predictions using Logistic RegressionFinally, we train a Logistic Regression model on the same train and test data but this time ignoring the graph structure and focusing entirely on the node features. The variables `train_data`, `test_data`, `train_targets`, and `test_targets`, hold the data we need to train the Logistic Regression classifier. 
###Code lr = LogisticRegressionCV( cv=5, class_weight=class_weight, max_iter=10000 ) # Let's use the default parameters lr.fit(train_data, train_targets.ravel()) ###Output _____no_output_____ ###Markdown We can now use the trained model to predict the test data ###Code test_preds_lr = lr.predict_proba(test_data) test_preds_lr.shape ###Output _____no_output_____ ###Markdown Accuracy ###Code lr.score(test_data, test_targets) ###Output _____no_output_____ ###Markdown Calculate AU-ROC metric ###Code test_predictions_class_lr = ((test_preds_lr[:, 1] > 0.5) * 1).flatten() test_df_lr = pd.DataFrame( { "Predicted_score": test_preds_lr[:, 1].flatten(), "Predicted_class": test_predictions_class_lr, "True": test_targets[:, 0], } ) roc_auc_lr = metrics.roc_auc_score( test_df_lr["True"].values, test_df_lr["Predicted_score"].values ) print("The AUC on test set:\n") print(roc_auc_lr) ###Output The AUC on test set: 0.8090002175324247 ###Markdown The confusion table ###Code pd.crosstab(test_df_lr["True"], test_df_lr["Predicted_class"]) ###Output _____no_output_____ ###Markdown The ROC curve ###Code fpr_lr, tpr_lr, thresholds_lr = metrics.roc_curve( test_df_lr["True"], test_df_lr["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr_lr, tpr_lr, color="darkorange", lw=lw, label="LR ROC curve (area = %0.2f)" % roc_auc_lr, ) plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Let's have a closer look at the True Positive Rate for the GNN and Logistic Regression models at 2% False Positive Rate. ###Code print( "At 2% FPR, GNN TPR={:.3f}, LR TPR={:.3f}".format( np.interp(0.02, fpr, tpr), np.interp(0.02, fpr_lr, tpr_lr) ) ) ###Output At 2% FPR, GNN TPR=0.378, LR TPR=0.253 ###Markdown Detection of Twitter users who use hateful lexicon with graph machine learning Run the latest release of this notebook: We consider the use-case of identifying hateful users on Twitter motivated by the work in [1] and using the dataset also published in [1]. Classification is based on a graph based on users' retweets and attributes as related to their account activity, and the content of tweets.We pose identifying hateful users as a binary classification problem. We demonstrate the advantage of connected vs unconnected data in a semi-supervised setting with few training examples.For connected data, we use Graph Neural Network methods, GCN [2], GAT [3], and GraphSAGE [4] as implemented in the `stellargraph` library. We pose the problem of identifying hateful tweeter users as node attribute inference in graphs.**References**1. "Like Sheep Among Wolves": Characterizing Hateful Users on Twitter. M. H. Ribeiro, P. H. Calais, Y. A. Santos, V. A. F. Almeida, and W. Meira Jr. arXiv preprint arXiv:1801.00317 (2017).2. Semi-Supervised Classification with Graph Convolutional Networks. T. Kipf, M. Welling. ICLR 2017. arXiv:1609.02907 3. Graph Attention Networks. P. Veličković et al. ICLR 20184. Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec arXiv:1706.02216 [cs.SI], 2017. 
###Code # install StellarGraph if running on Google Colab import sys if 'google.colab' in sys.modules: %pip install -q stellargraph[demos]==1.3.0b # verify that we're using the correct version of StellarGraph for this notebook import stellargraph as sg try: sg.utils.validate_notebook_version("1.3.0b") except AttributeError: raise ValueError( f"This notebook requires StellarGraph version 1.3.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>." ) from None import networkx as nx import pandas as pd import numpy as np import seaborn as sns import itertools import os from sklearn.decomposition import PCA from sklearn.manifold import TSNE from sklearn.linear_model import LogisticRegressionCV import stellargraph as sg from stellargraph.mapper import GraphSAGENodeGenerator, FullBatchNodeGenerator from stellargraph.layer import GraphSAGE, GCN, GAT from stellargraph import globalvar from tensorflow.keras import layers, optimizers, losses, metrics, Model, models from sklearn import preprocessing, feature_extraction from sklearn.model_selection import train_test_split from sklearn import metrics import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline ###Output _____no_output_____ ###Markdown Loading the data **Downloading the dataset:**The dataset for this demo was published in [1] and it is freely available to download from Kaggle [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter/home).The following is the description of the datasets:>This dataset contains a network of 100k users, out of which ~5k were annotated as hateful or>not. For each user, several content-related, network-related and activity related features>were provided. Additional files of hateful lexicon can be found [here]( https://github.com/manoelhortaribeiro/HatefulUsersTwitter/tree/master/data/extra)Download the dataset and then set the `data_dir` variable to point to the download location. ###Code data_dir = os.path.expanduser("~/data/hateful-twitter-users") ###Output _____no_output_____ ###Markdown First load and prepare the node featuresEach node in the graph is associated with a large number of features (also referred to as attributes). The list of features is given [here](https://www.kaggle.com/manoelribeiro/hateful-users-on-twitter). We repeated here for convenience.hate :("hateful"|"normal"|"other") if user was annotated as hateful, normal, or not annotated. (is_50|is_50_2) :bool whether user was deleted up to 12/12/17 or 14/01/18. (is_63|is_63_2) :bool whether user was suspended up to 12/12/17 or 14/01/18. (hate|normal)_neigh :bool is the user on the neighborhood of a (hateful|normal) user? [c_] (statuses|follower|followees|favorites)_count :int number of (tweets|follower|followees|favorites) a user has. [c_] listed_count:int number of lists a user is in. [c_] (betweenness|eigenvector|in_degree|outdegree) :float centrality measurements for each user in the retweet graph. [c_] *_empath :float occurrences of empath categories in the users latest 200 tweets. [c_] *_glove :float glove vector calculated for users latest 200 tweets. [c_] (sentiment|subjectivity) :float average sentiment and subjectivity of users tweets. [c_] (time_diff|time_diff_median) :float average and median time difference between tweets. [c_] (tweet|retweet|quote) number :float percentage of direct tweets, retweets and quotes of an user. 
[c_] (number urls|number hashtags|baddies|mentions) :float number of bad words|mentions|urls|hashtags per tweet in average. [c_] status length :float average status length. hashtags :string all hashtags employed by the user separated by spaces. **Notice** that c_ are attributes calculated for the 1-neighborhood of a user in the retweet network (averaged out). First, we are going to load the user features and prepare them for machine learning. ###Code users_feat = pd.read_csv(os.path.join(data_dir, "users_neighborhood_anon.csv")) users_feat.head() ###Output _____no_output_____ ###Markdown Let's have a look at the distribution of hateful, normal (not hateful), and other (unknown) users in the dataset ###Code print("Initial hateful/normal users distribution") print(users_feat.shape) print(users_feat.hate.value_counts()) ###Output Initial hateful/normal users distribution (100386, 1039) other 95415 normal 4427 hateful 544 Name: hate, dtype: int64 ###Markdown There is a clear imbalance on the number of users tagged as hateful vs normal and unknown. Data cleaning and preprocessing The dataset as given includes a large number of graph related features that are manually extracted. Since we are going to employ modern graph neural networks methods for classification, we are going to drop these manually engineered features. The power of Graph Neural Networks stems from their ability to learn useful graph-related features eliminating the need for manual feature engineering. ###Code def data_cleaning(feat): feat = feat.drop(columns=["hate_neigh", "normal_neigh"]) # Convert target values in hate column from strings to integers (0,1,2) feat["hate"] = np.where( feat["hate"] == "hateful", 1, np.where(feat["hate"] == "normal", 0, 2) ) # missing information number_of_missing = feat.isnull().sum() number_of_missing[number_of_missing != 0] # Replace NA with 0 feat.fillna(0, inplace=True) # droping info about suspension and deletion as it is should not be use din the predictive model feat.drop(feat.columns[feat.columns.str.contains("is_")], axis=1, inplace=True) # drop glove features feat.drop(feat.columns[feat.columns.str.contains("_glove")], axis=1, inplace=True) # drop c_ features feat.drop(feat.columns[feat.columns.str.contains("c_")], axis=1, inplace=True) # drop sentiment features for now feat.drop(feat.columns[feat.columns.str.contains("sentiment")], axis=1, inplace=True) # drop hashtag feature feat.drop(["hashtags"], axis=1, inplace=True) # Drop centrality based measures feat.drop( columns=["betweenness", "eigenvector", "in_degree", "out_degree"], inplace=True ) feat.drop(columns=["created_at"], inplace=True) return feat node_data = data_cleaning(users_feat) ###Output _____no_output_____ ###Markdown Of the original **1037** node features, we are keeping only **204** that are based on a user's attributes and tweet lexicon. We have removed any manually engineered graph features since the graph neural network algorithms we are going to use will automatically determine the best features to use during training. ###Code node_data.shape node_data.head() ###Output _____no_output_____ ###Markdown The continous features in our dataset have distributions with very long tails. We apply normalization to correct for this. 
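###Markdown The next cell applies scikit-learn's Yeo-Johnson `PowerTransformer` to the real feature matrix. As a quick, self-contained illustration of what such a transform does to a heavy-tailed variable, here is a small sketch on synthetic data (the numbers below are made up for illustration and are not part of the dataset): ###Code
# Minimal illustration of a power transform on a synthetic heavy-tailed feature.
# The synthetic "follower counts" are made up; the dataset itself is transformed in the next cell.
import numpy as np
from scipy.stats import skew
from sklearn.preprocessing import PowerTransformer

rng = np.random.RandomState(0)
synthetic_followers = rng.lognormal(mean=6.0, sigma=2.0, size=10000).reshape(-1, 1)

pt_demo = PowerTransformer(method="yeo-johnson", standardize=True)
synthetic_transformed = pt_demo.fit_transform(synthetic_followers)

print("skewness before transform: {:.2f}".format(skew(synthetic_followers.ravel())))
print("skewness after transform:  {:.2f}".format(skew(synthetic_transformed.ravel())))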
###Code # Ignore the first two columns because those are user_id and hate (the target variable) df_values = node_data.iloc[:, 2:].values pt = preprocessing.PowerTransformer(method="yeo-johnson", standardize=True) df_values_log = pt.fit_transform(df_values) ###Output _____no_output_____ ###Markdown Let's have a look at one of the normalized features before and after the power transform was applied.The feature we are going to look at is a user's number of followers. ###Code sns_rc = {"lines.linewidth": 3, "figure.figsize": (12, 6)} sns.set_context("paper", rc=sns_rc) sns.set_style("whitegrid", {"axes.grid": False}) sns.kdeplot(df_values[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers, before Power Transform", fontsize=18) sns.kdeplot(df_values_log[:, 1]) s = plt.ylabel("Density", fontsize=18) s = plt.xlabel("Feature value", fontsize=18) s = plt.title("Number of followers after Power Transform", fontsize=18) ###Output _____no_output_____ ###Markdown Feature normalization looks like it is doing the right thing as the raw features have long tails that are eliminated after applying the power transform. So let us use the normalized features from now on. ###Code node_data.iloc[:, 2:] = df_values_log # Set the dataframe index to be the same as the user_id and drop the user_id columns node_data.index = node_data.index.map(str) node_data.drop(columns=["user_id"], inplace=True) ###Output _____no_output_____ ###Markdown Node features are now ready for machine learning. ###Code node_data.head() ###Output _____no_output_____ ###Markdown Next load the graphNow that we have the node features prepared for machine learning, let us load the retweet graph. ###Code g_nx = nx.read_edgelist(path=os.path.expanduser(os.path.join(data_dir, "users.edges"))) g_nx.number_of_nodes(), g_nx.number_of_edges() ###Output _____no_output_____ ###Markdown The graph has just over 100k nodes and approximately 2.2m edges.We aim to train a graph neural network model that will predict the "hate"attribute on the nodes.For computation convenience, we have mapped the target labels **normal**, **hateful**, and **other** to the numeric values **0**, **1**, and **2** respectively. ###Code print(set(node_data["hate"])) ###Output {0, 1, 2} ###Markdown Splitting the data For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to split our data into training and test sets.The total number of annotated nodes is very small when compared to the total number of nodes in the graph. We are only going to use 15% of the annotated nodes for training and the remaining 85% of nodes for testing.First, we are going to select the subset of nodes that are annotated as hateful or normal. These will be the nodes that have 'hate' values that are either 0 or 1. ###Code # choose the nodes annotated with normal or hateful classes annotated_users = node_data[node_data["hate"] != 2] annotated_users.head() annotated_users.shape annotated_user_features = annotated_users.drop(columns=["hate"]) annotated_user_targets = annotated_users[["hate"]] ###Output _____no_output_____ ###Markdown There are 4971 annoted nodes out of a possible, approximately, 100k nodes. 
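###Markdown The split in the next cell is a plain random split of the annotated nodes. Because hateful users make up only a small fraction of them, one alternative worth knowing about (not used in this notebook) is a stratified split, which keeps the hateful/normal ratio roughly the same in the train and test sets. A minimal sketch, assuming the `annotated_user_features` and `annotated_user_targets` variables defined above: ###Code
# Sketch only: a stratified variant of the split performed in the next cell.
# Assumes annotated_user_features / annotated_user_targets from the cells above.
from sklearn.model_selection import train_test_split

strat_train_X, strat_test_X, strat_train_y, strat_test_y = train_test_split(
    annotated_user_features,
    annotated_user_targets,
    test_size=0.85,
    random_state=101,
    stratify=annotated_user_targets["hate"],  # preserve the 0/1 class proportions
)
print(strat_train_y["hate"].value_counts(normalize=True))
print(strat_test_y["hate"].value_counts(normalize=True))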
###Code print(annotated_user_targets.hate.value_counts()) # split the data train_data, test_data, train_targets, test_targets = train_test_split( annotated_user_features, annotated_user_targets, test_size=0.85, random_state=101 ) train_targets = train_targets.values test_targets = test_targets.values print("Sizes and class distributions for train/test data") print("Shape train_data {}".format(train_data.shape)) print("Shape test_data {}".format(test_data.shape)) print( "Train data number of 0s {} and 1s {}".format( np.sum(train_targets == 0), np.sum(train_targets == 1) ) ) print( "Test data number of 0s {} and 1s {}".format( np.sum(test_targets == 0), np.sum(test_targets == 1) ) ) train_targets.shape, test_targets.shape train_data.shape, test_data.shape ###Output _____no_output_____ ###Markdown We are going to use 745 nodes for training and 4226 nodes for testing. ###Code # choosing features to assign to a graph, excluding target variable node_features = node_data.drop(columns=["hate"]) ###Output _____no_output_____ ###Markdown Dealing with imbalanced dataBecause the training data exhibit high imbalance, we introduce class weights. ###Code from sklearn.utils.class_weight import compute_class_weight class_weights = compute_class_weight( "balanced", np.unique(train_targets), train_targets[:, 0] ) train_class_weights = dict(zip(np.unique(train_targets), class_weights)) train_class_weights ###Output _____no_output_____ ###Markdown Our data is now ready for machine learning.Node features are stored in the Pandas DataFrame `node_features`.The graph in networkx format is stored in the variable `g_nx`. Specify global parametersHere we specify some parameters that control the type of model we are going to use. For example, we specify the base model type, e.g., GCN, GraphSAGE, etc, as well as model-specific parameters. ###Code model_type = "graphsage" # Can be either gcn, gat, or graphsage if model_type == "graphsage": # For GraphSAGE model batch_size = 50 num_samples = [20, 10] epochs = 30 # The number of training epochs elif model_type == "gcn": # For GCN model epochs = 20 # The number of training epochs elif model_type == "gat": # For GAT model layer_sizes = [8, 1] attention_heads = 8 epochs = 20 # The number of training epochs ###Output _____no_output_____ ###Markdown Creating the base graph machine learning model in Keras Now create a `StellarGraph` object from the `NetworkX` graph and the node features and targets. It is `StellarGraph` objects that we use in this library to perform machine learning tasks on. ###Code G = sg.StellarGraph.from_networkx(g_nx, node_features=node_features) ###Output _____no_output_____ ###Markdown To feed data from the graph to the Keras model we need a generator. The generators are specialized to the model and the learning task. For training we map only the training nodes returned from our splitter and the target values. ###Code if model_type == "graphsage": generator = GraphSAGENodeGenerator(G, batch_size, num_samples) train_gen = generator.flow(train_data.index, train_targets, shuffle=True) elif model_type == "gcn": generator = FullBatchNodeGenerator(G, method="gcn", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) elif model_type == "gat": generator = FullBatchNodeGenerator(G, method="gat", sparse=True) train_gen = generator.flow(train_data.index, train_targets,) ###Output _____no_output_____ ###Markdown Next we create the GNN model. We need to specify model-specific parameters based on whether we want to use GCN, GAT, or GraphSAGE. 
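###Markdown One portability note before building the model: recent scikit-learn releases (1.0 and later) require `compute_class_weight` to be called with `classes` and `y` as keyword arguments, so the positional call in the class-weight cell above raises a `TypeError` in those versions. A keyword-argument sketch that computes the same weights, using the same `train_targets` as above: ###Code
# Equivalent class-weight computation using keyword arguments,
# needed for scikit-learn >= 1.0 where extra positional arguments are rejected.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

class_weights = compute_class_weight(
    class_weight="balanced",
    classes=np.unique(train_targets),
    y=train_targets[:, 0],
)
train_class_weights = dict(zip(np.unique(train_targets), class_weights))
train_class_weights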
###Code if model_type == "graphsage": base_model = GraphSAGE( layer_sizes=[32, 32], generator=generator, bias=True, dropout=0.5, ) x_inp, x_out = base_model.in_out_tensors() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gcn": base_model = GCN( layer_sizes=[32, 16], generator=generator, bias=True, dropout=0.5, activations=["elu", "elu"], ) x_inp, x_out = base_model.in_out_tensors() prediction = layers.Dense(units=1, activation="sigmoid")(x_out) elif model_type == "gat": base_model = GAT( layer_sizes=layer_sizes, attn_heads=attention_heads, generator=generator, bias=True, in_dropout=0.5, attn_dropout=0.5, activations=["elu", "sigmoid"], normalize=None, ) x_inp, prediction = base_model.in_out_tensors() ###Output _____no_output_____ ###Markdown Create a Keras model Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `base_model` and outputs being the predictions from the softmax layer. ###Code model = Model(inputs=x_inp, outputs=prediction) ###Output _____no_output_____ ###Markdown We compile our Keras model to use the `Adam` optimiser and the binary cross entropy loss. ###Code model.compile( optimizer=optimizers.Adam(lr=0.005), loss=losses.binary_crossentropy, metrics=["acc"], ) model ###Output _____no_output_____ ###Markdown Train the model, keeping track of its loss and accuracy on the training set, and its performance on the test set during the training. We don't use the test set during training but only for measuring the trained model's generalization performance. ###Code test_gen = generator.flow(test_data.index, test_targets) ###Output _____no_output_____ ###Markdown Now we can train the model by calling the `fit` method. ###Code class_weight = None if model_type == "graphsage": class_weight = train_class_weights history = model.fit( train_gen, epochs=epochs, validation_data=test_gen, verbose=0, shuffle=False, class_weight=class_weight, ) sg.utils.plot_history(history) ###Output _____no_output_____ ###Markdown Model Evaluation Now we have trained the model, let's evaluate it on the test set.We are going to consider 4 evaluation metrics calculated on the test set: Accuracy, Area Under the ROC curve (AU-ROC), the ROC curve, and the confusion table. Accuracy ###Code test_metrics = model.evaluate(test_gen) print("\nTest Set Metrics:") for name, val in zip(model.metrics_names, test_metrics): print("\t{}: {:0.4f}".format(name, val)) ###Output Test Set Metrics: loss: 0.3424 acc: 0.8874 ###Markdown AU-ROCLet's use the trained GNN model to make a prediction for each node in the graph.Then, select only the predictions for the nodes in the test set and calculate the AU-ROC as another performance metric in addition to the accuracy shown above. ###Code all_nodes = node_data.index all_gen = generator.flow(all_nodes) all_predictions = model.predict(all_gen).squeeze()[..., np.newaxis] all_predictions.shape all_predictions_df = pd.DataFrame(all_predictions, index=node_data.index) ###Output _____no_output_____ ###Markdown Let's extract the predictions for the test data only. ###Code test_preds = all_predictions_df.loc[test_data.index, :] test_preds.shape ###Output _____no_output_____ ###Markdown The predictions are the probability of the true class that in this case is the probability of a user being hateful. 
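###Markdown The cells that follow turn these probabilities into hard classes with a fixed 0.5 threshold. With classes as imbalanced as these, the operating threshold is itself a design choice; the sketch below (purely illustrative, using the `test_preds` and `test_targets` defined above) picks the threshold that maximises F1 on the test scores. In practice one would tune this on a validation split; the notebook itself keeps the 0.5 threshold. ###Code
# Illustrative only: find the probability threshold that maximises F1 on the test scores.
import numpy as np
from sklearn.metrics import precision_recall_curve

scores = test_preds.values.ravel()
y_true = test_targets[:, 0]

precision, recall, pr_thresholds = precision_recall_curve(y_true, scores)
f1 = 2 * precision * recall / (precision + recall + 1e-12)
best_idx = np.argmax(f1[:-1])  # the last precision/recall pair has no matching threshold
print("Best threshold: {:.3f} (F1 = {:.3f})".format(pr_thresholds[best_idx], f1[best_idx]))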
###Code
test_preds.head()

test_predictions = test_preds.values
test_predictions_class = ((test_predictions > 0.5) * 1).flatten()

test_df = pd.DataFrame(
    {
        "Predicted_score": test_predictions.flatten(),
        "Predicted_class": test_predictions_class,
        "True": test_targets[:, 0],
    }
)

roc_auc = metrics.roc_auc_score(test_df["True"].values, test_df["Predicted_score"].values)
print("The AUC on test set:\n")
print(roc_auc)
###Output The AUC on test set: 0.8845494008100929 ###Markdown Confusion table ###Code
pd.crosstab(test_df["True"], test_df["Predicted_class"])
###Output _____no_output_____ ###Markdown ROC curve ###Code
fpr, tpr, thresholds = metrics.roc_curve(
    test_df["True"], test_df["Predicted_score"], pos_label=1
)

plt.figure(figsize=(12, 6,))
lw = 2
plt.plot(
    fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc
)
plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate", fontsize=18)
plt.ylabel("True Positive Rate", fontsize=18)
plt.title("Receiver operating characteristic curve", fontsize=18)
plt.legend(loc="lower right")
plt.show()
###Output _____no_output_____ ###Markdown Visualisation of node embeddings
Evaluate node embeddings as activations of the output of one of the graph convolutional or aggregation layers in the Keras model, and visualise them, coloring nodes by their class label (hateful vs normal). You can find the index of the layer of interest by calling `model.layers`. First, create a Keras model for calculating the embeddings ###Code
model.layers

if model_type == "graphsage":
    # For GraphSAGE, we are going to use the output activations
    # of the second GraphSAGE layer as the node embeddings
    emb_model = Model(inputs=x_inp, outputs=model.layers[-4].output)
    # the node generator is passed as the first positional argument of predict
    emb = emb_model.predict(all_gen)
elif model_type == "gcn":
    # For GCN, we are going to use the output activations of
    # the second GCN layer as the node embeddings
    emb_model = Model(inputs=x_inp, outputs=model.layers[6].output)
    emb = emb_model.predict(all_gen)
elif model_type == "gat":
    # For GAT, we are going to use the output activations of the
    # first Graph Attention layer as the node embeddings
    emb_model = Model(inputs=x_inp, outputs=model.layers[6].output)
    emb = emb_model.predict(all_gen)

emb.shape
emb = emb.squeeze()

if model_type == "graphsage":
    emb_all_df = pd.DataFrame(emb, index=node_data.index)
elif model_type == "gcn" or model_type == "gat":
    emb_all_df = pd.DataFrame(emb, index=G.nodes())
###Output _____no_output_____ ###Markdown Select the embeddings for the test set. We are only going to visualise the test set embeddings.
###Code emb_test = emb_all_df.loc[test_data.index, :] ###Output _____no_output_____ ###Markdown Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their subject label ###Code X = emb_test y = test_targets X.shape transform = TSNE # or use PCA trans = transform(n_components=2) emb_transformed = pd.DataFrame(trans.fit_transform(X), index=test_data.index) emb_transformed["label"] = y alpha = 0.7 fig, ax = plt.subplots(figsize=(14, 8,)) ax.scatter( emb_transformed[0], emb_transformed[1], c=emb_transformed["label"].astype("category"), cmap="jet", alpha=alpha, ) ax.set(xlabel="$X_1$", ylabel="$X_2$") plt.title( "{} visualization of embeddings for tweeter dataset".format(transform.__name__), fontsize=24, ) plt.show() ###Output _____no_output_____ ###Markdown The node embeddings shown above indicate that the majority of hateful users tend to cluster together. However, some normal users are also in the same neighbourhood and these will be difficult to distinguish from hateful ones. Similarly, there is a small number of hateful users dispersed among normal users and these will also be difficult classify correctly. Predictions using Logistic RegressionFinally, we train a Logistic Regression model on the same train and test data but this time ignoring the graph structure and focusing entirely on the node features. The variables `train_data`, `test_data`, `train_targets`, and `test_targets`, hold the data we need to train the Logistic Regression classifier. ###Code lr = LogisticRegressionCV( cv=5, class_weight=class_weight, max_iter=10000 ) # Let's use the default parameters lr.fit(train_data, train_targets.ravel()) ###Output _____no_output_____ ###Markdown We can now use the trained model to predict the test data ###Code test_preds_lr = lr.predict_proba(test_data) test_preds_lr.shape ###Output _____no_output_____ ###Markdown Accuracy ###Code lr.score(test_data, test_targets) ###Output _____no_output_____ ###Markdown Calculate AU-ROC metric ###Code test_predictions_class_lr = ((test_preds_lr[:, 1] > 0.5) * 1).flatten() test_df_lr = pd.DataFrame( { "Predicted_score": test_preds_lr[:, 1].flatten(), "Predicted_class": test_predictions_class_lr, "True": test_targets[:, 0], } ) roc_auc_lr = metrics.roc_auc_score( test_df_lr["True"].values, test_df_lr["Predicted_score"].values ) print("The AUC on test set:\n") print(roc_auc_lr) ###Output The AUC on test set: 0.8090002175324247 ###Markdown The confusion table ###Code pd.crosstab(test_df_lr["True"], test_df_lr["Predicted_class"]) ###Output _____no_output_____ ###Markdown The ROC curve ###Code fpr_lr, tpr_lr, thresholds_lr = metrics.roc_curve( test_df_lr["True"], test_df_lr["Predicted_score"], pos_label=1 ) plt.figure(figsize=(12, 6,)) lw = 2 plt.plot( fpr_lr, tpr_lr, color="darkorange", lw=lw, label="LR ROC curve (area = %0.2f)" % roc_auc_lr, ) plt.plot( fpr, tpr, color="darkblue", lw=lw, label="GNN ROC curve (area = %0.2f)" % roc_auc ) plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel("False Positive Rate", fontsize=18) plt.ylabel("True Positive Rate", fontsize=18) plt.title("Receiver operating characteristic curve", fontsize=18) plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Let's have a closer look at the True Positive Rate for the GNN and Logistic Regression models at 2% False Positive Rate. 
###Code print( "At 2% FPR, GNN TPR={:.3f}, LR TPR={:.3f}".format( np.interp(0.02, fpr, tpr), np.interp(0.02, fpr_lr, tpr_lr) ) ) ###Output At 2% FPR, GNN TPR=0.378, LR TPR=0.253
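###Markdown As a small extension of the comparison above, the same interpolation can be tabulated at a few low false positive rates (assumes the `fpr`/`tpr` and `fpr_lr`/`tpr_lr` arrays computed earlier): ###Code
# Compare the two models at several low false-positive-rate operating points.
# Assumes fpr/tpr (GNN) and fpr_lr/tpr_lr (logistic regression) from the cells above.
import numpy as np
import pandas as pd

operating_fprs = [0.01, 0.02, 0.05, 0.10]
comparison = pd.DataFrame(
    {
        "FPR": operating_fprs,
        "GNN TPR": [np.interp(x, fpr, tpr) for x in operating_fprs],
        "LR TPR": [np.interp(x, fpr_lr, tpr_lr) for x in operating_fprs],
    }
)
print(comparison)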
lab1/lab1_diagonal.ipynb
###Markdown Images reading&resizing ###Code img_l = np.asanyarray(Image.open("hangerL-small.png").convert("RGB")).astype(np.int) img_r = np.asanyarray(Image.open("hangerR-small.png").convert("RGB")).astype(np.int) #img_l = img_l[:-10] #img_r = img_r[10:] img_l.shape img_r.shape ###Output _____no_output_____ ###Markdown Loss functions definition ###Code def g_loss_l1(a, b, par=0): return abs(a) + abs(b) def g_loss_l2(a, b, par=0): return sqrt(abs(a)**2 + abs(b)**2) def g_loss_min_l1(a, b, par): return min(par, abs(a) + abs(b)) def g_loss_min_l2(a, b, par): return min(par, sqrt(abs(a)**2 + abs(b)**2)) def f_loss_l1(a): return np.sum(np.abs(a), axis=-1) def f_loss_l2(a): return np.sqrt(np.sum(a**2, axis=-1)) ###Output _____no_output_____ ###Markdown Params definition ###Code kg_max=30 kv_max=40 kv_half = kv_max//2 alpha=40 g_loss = [g_loss_l1, g_loss_l2, g_loss_min_l1, g_loss_min_l2][3] f_loss = [f_loss_l1, f_loss_l2][1] b = 20 ###Output _____no_output_____ ###Markdown Precomputing node2node losses ###Code height = img_r.shape[0] length = img_r.shape[1] g = np.zeros((kg_max*kv_max, kg_max*kv_max), dtype=np.float32) for k1 in range(kg_max*kv_max): for k2 in range(kg_max*kv_max): kg1 = k1 % kg_max kv1 = k1 // kg_max kg2 = k2 % kg_max kv2 = k2 // kg_max g[k1,k2] = g_loss(kg1 - kg2,kv1 - kv2, b) g = alpha*g kg1 kv1 ###Output _____no_output_____ ###Markdown Minimal path finding and recording ###Code start = timer() img_shift = np.ones((height, length)) for i in range(img_shift.shape[0]): #start = timer() f = np.inf*np.ones((length, kg_max*kv_max), dtype=np.float32) for k in range(1, kg_max*kv_max): kg1 = k % kg_max kv1 = k // kg_max - kv_half if i+kv1 < height and i+kv1 >= 0: f[:length-kg1, k] = f_loss(img_r[i,:length-kg1] - img_l[i+kv1,kg1:]) #end = timer() #print('\n\nconstruct graph:',timedelta(seconds=end-start)) #start = timer() pass_to_prev = np.zeros(f.shape, dtype=np.int) for p in range(1, length): ta = f[p-1, :] + g ind = np.argmin(ta, axis=1) f[p, :] += np.min(ta, axis=1) pass_to_prev[p, :] = ind #end = timer() #print('\n\nfind path:',timedelta(seconds=end-start)) #start = timer() line_shift = np.ones(length, dtype=np.int) line_shift[-1] = np.argmin(f[-1,:]) for p in reversed(range(length-1)): line_shift[p] = pass_to_prev[p+1, line_shift[p+1]] img_shift[i] = line_shift #end = timer() #print('\n\nrecover path:',timedelta(seconds=end-start)) print(i, end=" ") #print("\n"+20*"=") end = timer() print('\n\n\nTime per image',timedelta(seconds=end-start)) ###Output 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 Time per image 0:02:52.350299 ###Markdown output image transforming and shifts decoding ###Code img_shift2 = np.zeros((height, length, 3), dtype=np.int) img_shift2[:,:,0] = img_shift % kg_max # horizontal img_shift2[:,:,1] = img_shift // kg_max - kv_half # vertical np.save("img_shift.npy", img_shift2) img_shift3 = img_shift2.copy() img_shift3 = np.linalg.norm(img_shift3, axis=-1) img_shift3 = 255*img_shift3/img_shift3.max() #img_shift2 
= 255*img_shift2/img_shift2.max() #img_shift2[:,:,0] = 255*img_shift2[:,:,0]/img_shift2[:,:,0].max() #img_shift2[:,:,1] = 255*img_shift2[:,:,1]/img_shift2[:,:,1].max() img_shift2[:,:,0].max() ###Output _____no_output_____ ###Markdown Horizontal shift (normed at 255) ###Code Image.fromarray((255*img_shift2[:,:,0]/img_shift2[:,:,0].max()).astype(np.uint8)).resize((800,600), Image.BICUBIC) ###Output _____no_output_____ ###Markdown Vertical shift (normed at 255) ###Code Image.fromarray((255*img_shift2[:,:,1]/img_shift2[:,:,1].max()).astype(np.uint8)).resize((800,600), Image.BICUBIC) ###Output _____no_output_____ ###Markdown Color shift image (normed at 255) ###Code Image.fromarray(img_shift2.astype(np.uint8)).resize((800,600), Image.BICUBIC) ###Output _____no_output_____ ###Markdown Norm maps (normed at 255) ###Code Image.fromarray(img_shift3.astype(np.uint8)).resize((800,600), Image.BICUBIC) ###Output _____no_output_____ ###Markdown Original image ###Code Image.fromarray(img_l.astype(np.uint8)).resize((800,600)) import matplotlib.pyplot as plt plt.hist(img_shift2[:,:,1].reshape(-1)) ###Output _____no_output_____
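###Markdown The scanline optimisation above is a dynamic program: at every pixel it combines a unary matching cost `f` with the pairwise cost table `g` and then backtracks through `pass_to_prev`. The cell below is a tiny self-contained sanity check of the same recurrence on a made-up 3-node chain with 2 labels, small enough to verify by hand: ###Code
# Tiny sanity check of the dynamic-programming recurrence used above:
# minimise unary + pairwise costs along a short chain, then backtrack.
# All numbers are made up for illustration.
import numpy as np

unary = np.array([[1.0, 5.0],   # costs of labels 0/1 at position 0
                  [4.0, 1.0],   # position 1
                  [0.0, 3.0]])  # position 2
pairwise = np.array([[0.0, 2.0],
                     [2.0, 0.0]])  # cost of changing label between neighbours

n_pos, n_lab = unary.shape
f = unary.copy()
back = np.zeros((n_pos, n_lab), dtype=int)
for p in range(1, n_pos):
    total = f[p - 1][:, None] + pairwise   # total[i, j]: arrive from label i, take label j
    back[p] = np.argmin(total, axis=0)
    f[p] += np.min(total, axis=0)

labels = np.zeros(n_pos, dtype=int)
labels[-1] = np.argmin(f[-1])
for p in range(n_pos - 2, -1, -1):
    labels[p] = back[p + 1, labels[p + 1]]

print("optimal labels:", labels, "minimal cost:", f[-1].min())  # expect [0 0 0] with cost 5.0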
aulas/aula06_01_preprocessamento.ipynb
###Markdown Preprocessing
The goal of preprocessing the data is to **improve** the performance of ML models. A model trained on preprocessed data should therefore perform at least as well as a model trained on raw data. Why preprocess the data? Real-world data comes
* incomplete ```person.occupation=''```
* noisy ```person.salary=-10; person.age=999``` these points are also called *outliers*: they lie far from the overall distribution of the data and usually indicate some error in how the data was collected or recorded
* inconsistent ```personA.grade=B; personB.grade=8``` this can happen when integrating different databases
And ML models are sensitive to all of this. Moreover, since the learning problem is an optimization problem, we need to apply transformations to the data that make it easier for the optimizers to converge. The two main data preprocessing steps are
* cleaning: filling in missing values, removing *outliers*, resolving inconsistencies
* transformation: adapting the data so it can be used by ML models; this includes normalization, aggregation, reduction and *feature* extraction
The first step, cleaning, requires some knowledge about the nature of the data (for example, knowing that age and date of birth are equivalent *features*). Depending on the nature of the data, it is advisable to talk to domain experts. This step looks simple but is usually laborious. It should not be underestimated :)
Here, we will use an already clean dataset and focus on the task of **transforming the data** so that it can be properly interpreted by the models. Our dataset
We will use the *Boston Housing Prices* dataset, organized by the *Machine Learning* group at the *University of California Irvine* and available for download directly through the `sklearn` package. Importing the packages we will use: ###Code
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.preprocessing import OneHotEncoder, StandardScaler, MinMaxScaler
###Output _____no_output_____ ###Markdown We use `sklearn`'s `load_boston` function to load the data and look at the variable descriptions, and then use the `pandas` package to make handling the tabular data easier. ###Code
data = load_boston()
print(data.keys())
print(data.DESCR)

df = pd.DataFrame(data.data, columns=data.feature_names)
target = data.target
df.head()
###Output _____no_output_____ ###Markdown A basic data preprocessing pipeline should include
* handling of categorical variables
* normalization/scaling
* *feature* selection
Handling categorical variables
Categorical variables are discretized variables that represent qualitative attributes (for example, race) or intervals of quantitative attributes (for example, age group). When including these variables in an ML model, it is important to avoid introducing order relations that do not exist. For example, if an age-group variable takes 0 for [0-10], 1 for [11-20], 2 for [21-30], and so on, it is reasonable for relations such as $0 < 1 < 2$ to hold. But for a qualitative attribute such as race, an integer encoding would wrongly suggest an ordering between the categories, which makes no sense. One way to avoid introducing such spurious orderings is to turn each category into its own binary variable.
This procedure is called **one-hot encoding**. In our dataset, the following variables are treated as categorical:
* CHAS: a binary variable; it needs no treatment
* RAD: a discretized variable with 9 possible values; we can one-hot encode it
###Code
df.CHAS.unique()
df.RAD.unique()
###Output _____no_output_____ ###Markdown One-hot encoding can be done with `pandas`: ###Code
# one-hot encoding with pandas
rad_onehot = pd.get_dummies(df.RAD, prefix='RAD')
df = pd.concat([df, rad_onehot], axis=1)
df.head()
###Output _____no_output_____ ###Markdown Normalization/scaling
This is needed to standardize the values of the *features* when they span very different ranges (for example, age and salary have different orders of magnitude). Keeping all *features* in similar value ranges speeds up model convergence and, in some cases, improves the final performance. The two main ways of normalizing the data are:
Min-max scaling
$\begin{align}x_{scaled} = \frac{x - x_{min}}{x_{max} - x_{min}}\end{align}$
Standard scaling
$\begin{align}x_{scaled} = \frac{x - \mu}{\sigma}\end{align}$
Note that normalization is applied to each *feature* individually, i.e. to each column of the *feature* matrix $X$. `sklearn` provides functions for both cases: ###Code
scaler = MinMaxScaler()
dff = df.copy()
dff[dff.columns] = scaler.fit_transform(df.values)
dff.describe()

scaler = StandardScaler()
dff = df.copy()
dff[dff.columns] = scaler.fit_transform(df.values)
dff.describe()
###Output _____no_output_____ ###Markdown Feature selection
Given a dataset of *shape* $M \times N$ (M samples, N features), a *rule of thumb* for an adequate dataset size is $M \geq 10N$. Our dataset has shape 506x13, which is adequate, so we do not need to select *features*. But what if we had few samples or many *features*? In those cases, we need to **extract** or **select** the *features* that carry the most relevant information. One way to do this is to evaluate the correlations between *features* and the *target* and discard the *features* that are least correlated with the *target*. `sklearn`'s `SelectKBest` method does exactly that. ###Code
fs = SelectKBest(score_func=f_regression, k=10)
X_selected = fs.fit_transform(dff, target)

cols = fs.get_support()
names = dff.columns.values[cols]
scores = fs.scores_[cols]
names_scores = list(zip(names, scores))
names_scores
###Output _____no_output_____
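###Markdown As a closing illustration (not part of the original pipeline), the three steps above can also be chained with scikit-learn's `ColumnTransformer` and `Pipeline`. The sketch below assumes the raw `df` and `target` as first loaded from `load_boston`, before the one-hot columns were appended, and reuses the transformers imported at the top of the notebook: ###Code
# Sketch: chaining categorical encoding, scaling and feature selection.
# Assumes the raw df/target as first loaded (before the RAD_* columns were added).
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline

categorical_cols = ["RAD"]
numeric_cols = [c for c in df.columns if c not in categorical_cols]

preprocess = ColumnTransformer(
    transformers=[
        ("onehot", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
        ("scale", StandardScaler(), numeric_cols),
    ]
)
pipeline = Pipeline(
    steps=[
        ("preprocess", preprocess),
        ("select", SelectKBest(score_func=f_regression, k=10)),
    ]
)
X_ready = pipeline.fit_transform(df, target)
X_ready.shape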
draft_notebooks/test_data_analysis.ipynb
###Markdown Test data analysis ###Code # df = pd.read_pickle('data/test_data_385_articles.bz2', compression='bz2') # len(df['page_id'].unique()) df = pd.read_csv('data/new_test_385_data.csv') len(df['page_id'].unique()) df_1000 = pd.read_excel(r'data/1000.xlsx') len(df_1000['page_id'].unique()) df['category'] = df.apply(lambda x: 'Politics' if x.page_id == 40034240 or x.page_id == 27242 else x.category, axis=1) df['category'] = df.apply(lambda x: 'Language' if x.page_id == 7212 else x.category, axis=1) page_ids_385 = df['page_id'] page_ids_1000 = df_1000['page_id'] df_385 = df_1000[df_1000['page_id'].isin(page_ids_385)] df_385.shape columns = list(df_385.columns) data = [] data.append([40034240, 'PEOPLE Party', 'Politics']) data.append([27242, 'Politics of Samoa', 'Politics']) data.append([7212, 'Creed', 'Language']) df_3 = pd.DataFrame(data=data, columns=columns) final_df_385 = pd.concat([df_385, df_3]) final_df_385.shape final_df_385.to_csv(r'data/new_385.csv', index=False) df.to_pickle(path='data/new_test_data_385_articles.bz2', compression='bz2') ###Output _____no_output_____
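###Markdown A small consistency check that could be added here (illustrative, using the frames built above): the rebuilt 385-article table should cover exactly the `page_id`s present in the test dataframe. ###Code
# Illustrative check: final_df_385 should contain the same page_ids as df.
test_ids = set(df["page_id"].unique())
rebuilt_ids = set(final_df_385["page_id"].unique())

print("missing from final_df_385:", len(test_ids - rebuilt_ids))
print("unexpected extras in final_df_385:", len(rebuilt_ids - test_ids))
print("total page_ids in final_df_385:", len(rebuilt_ids))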
_notebooks/2020-10-18-Q-Learning.ipynb
###Markdown "Reinforcement Learning - Epsilon Greedy"> "Reinforcement Learning Balancing Cartpole with Epsilon Greedy"- author: Bhargav Lad- toc: true - badges: true- comments: true- image: images/q.jpg- categories: [ jupyter,Reinforcement-Learning,CartPole,q-learning,epsilon-greedy] ###Code import sys import random import time import gym import numpy as np from IPython import display from base64 import b64decode env = gym.make('CartPole-v1') ###Output _____no_output_____ ###Markdown discretizing continuous valueAs the state of the environment has continous values we need to discretize them inorder to work with it. Here we set different bin size for each variable.The `get_discret_state` function will take in the continous values of the state and convert it to discret values based on our bining. ###Code bin_sz = [2,2,11,5] discreeting = [] for i in range(env.observation_space.high.shape[0]): bins = np.linspace(env.observation_space.low[i],env.observation_space.high[i],bin_sz[i]) bins = np.sort(np.append(bins, [0])) discreeting.append(bins) def get_discret_state(state): dis = [] for i,n in enumerate(state): dis.append(np.digitize(n,discreeting[i])) # discritize based on bins return tuple([x for x in dis]) ###Output _____no_output_____ ###Markdown Initialize Q tableWe initialize the Q table with Random values ###Code q_table = np.random.uniform(low=0,high=1,size=tuple([x.shape[0] for x in discreeting]+[env.action_space.n])) ###Output _____no_output_____ ###Markdown ParametersThese are the parameter values we set for our policy ###Code num_episodes = 4*60000 # total episodes steps_per_episode = 500 discount = 0.97 # discount lr =1e-3 # learning rate min_lr = 1e-4 max_lr = 1e-2 lr_decay = 0.6 # learning rate decay factor explore_rate = 0.1 # exploration rate max_explore = 1.0 min_explore = 0.1 decay = 0.03 # decay factor for exploration rate ###Output _____no_output_____ ###Markdown Epsilon Greedy Policy ![](images/equation.png) ###Code rewards =[] for episode in range(num_episodes): state = get_discret_state(env.reset()) done=False curr_reward=0 for step in range(steps_per_episode): # print(state) explore_rate_threshold = np.random.uniform(0,1) # if greater than rate then exploit if explore_rate_threshold > explore_rate: action = np.argmax(q_table[state]) else: action = env.action_space.sample() # step through with action new_state, reward, done, info = env.step(action) new_state = get_discret_state(new_state) # Update q table state_action = tuple(list(state)+[action]) # print(new_state) # print(q_table[new_state]) q_table[state_action] = (1-lr)*q_table[state_action] + lr*(reward + discount * np.max(q_table[new_state])) curr_reward+=reward state = new_state if done: break # update explore_rate and learning rate for next episode explore_rate = min_explore + (max_explore-min_explore)*np.exp(-decay*episode) lr = max_lr + (max_lr -min_lr)*np.exp(-lr_decay*episode) print(f"Episode {episode} avg: {np.array(curr_reward).mean()}") print("Done") np.save("q_table_weights",q_table) ###Output Episode 0 avg: 10.0 Episode 1 avg: 14.0 Episode 2 avg: 28.0 Episode 3 avg: 42.0 Episode 4 avg: 13.0 Episode 5 avg: 12.0 Episode 6 avg: 15.0 Episode 7 avg: 31.0 Episode 8 avg: 61.0 Episode 9 avg: 21.0 Episode 10 avg: 20.0 Episode 11 avg: 17.0 Episode 12 avg: 15.0 Episode 13 avg: 20.0 Episode 14 avg: 31.0 Episode 15 avg: 27.0 Episode 16 avg: 13.0 Episode 17 avg: 15.0 Episode 18 avg: 18.0 Episode 19 avg: 27.0 Episode 20 avg: 13.0 Episode 21 avg: 13.0 Episode 22 avg: 21.0 Episode 23 avg: 10.0 Episode 24 avg: 23.0 Episode 
25 avg: 9.0 Episode 26 avg: 23.0 Episode 27 avg: 10.0 Episode 28 avg: 13.0 Episode 29 avg: 16.0 Episode 30 avg: 10.0 Episode 31 avg: 19.0 Episode 32 avg: 10.0 Episode 33 avg: 10.0 Episode 34 avg: 20.0 Episode 35 avg: 14.0 Episode 36 avg: 44.0 Episode 37 avg: 18.0 Episode 38 avg: 14.0 Episode 39 avg: 18.0 Episode 40 avg: 13.0 Episode 41 avg: 16.0 Episode 42 avg: 13.0 Episode 43 avg: 37.0 Episode 44 avg: 12.0 Episode 45 avg: 10.0 Episode 46 avg: 12.0 Episode 47 avg: 17.0 Episode 48 avg: 9.0 Episode 49 avg: 9.0 Episode 50 avg: 13.0 Episode 51 avg: 17.0 Episode 52 avg: 29.0 Episode 53 avg: 10.0 Episode 54 avg: 20.0 Episode 55 avg: 14.0 Episode 56 avg: 12.0 Episode 57 avg: 12.0 Episode 58 avg: 11.0 Episode 59 avg: 21.0 Episode 60 avg: 9.0 Episode 61 avg: 11.0 Episode 62 avg: 15.0 Episode 63 avg: 14.0 Episode 64 avg: 19.0 Episode 65 avg: 13.0 Episode 66 avg: 20.0 Episode 67 avg: 11.0 Episode 68 avg: 9.0 Episode 69 avg: 12.0 Episode 70 avg: 12.0 Episode 71 avg: 12.0 Episode 72 avg: 10.0 Episode 73 avg: 17.0 Episode 74 avg: 10.0 Episode 75 avg: 12.0 Episode 76 avg: 15.0 Episode 77 avg: 15.0 Episode 78 avg: 11.0 Episode 79 avg: 11.0 Episode 80 avg: 10.0 Episode 81 avg: 16.0 Episode 82 avg: 16.0 Episode 83 avg: 10.0 Episode 84 avg: 15.0 Episode 85 avg: 31.0 Episode 86 avg: 17.0 Episode 87 avg: 11.0 Episode 88 avg: 11.0 Episode 89 avg: 10.0 Episode 90 avg: 41.0 Episode 91 avg: 13.0 Episode 92 avg: 14.0 Episode 93 avg: 12.0 Episode 94 avg: 11.0 Episode 95 avg: 10.0 Episode 96 avg: 9.0 Episode 97 avg: 10.0 Episode 98 avg: 11.0 Episode 99 avg: 13.0 Episode 100 avg: 11.0 Episode 101 avg: 10.0 Episode 102 avg: 12.0 Episode 103 avg: 10.0 Episode 104 avg: 11.0 Episode 105 avg: 11.0 Episode 106 avg: 19.0 Episode 107 avg: 31.0 Episode 108 avg: 10.0 Episode 109 avg: 10.0 Episode 110 avg: 11.0 Episode 111 avg: 11.0 Episode 112 avg: 8.0 Episode 113 avg: 26.0 Episode 114 avg: 10.0 Episode 115 avg: 10.0 Episode 116 avg: 10.0 Episode 117 avg: 167.0 Episode 118 avg: 26.0 Episode 119 avg: 9.0 Episode 120 avg: 13.0 Episode 121 avg: 10.0 Episode 122 avg: 9.0 Episode 123 avg: 11.0 Episode 124 avg: 9.0 Episode 125 avg: 10.0 Episode 126 avg: 10.0 Episode 127 avg: 12.0 Episode 128 avg: 13.0 Episode 129 avg: 14.0 Episode 130 avg: 22.0 Episode 131 avg: 11.0 Episode 132 avg: 15.0 Episode 133 avg: 40.0 Episode 134 avg: 20.0 Episode 135 avg: 9.0 Episode 136 avg: 9.0 Episode 137 avg: 9.0 Episode 138 avg: 11.0 Episode 139 avg: 11.0 Episode 140 avg: 11.0 Episode 141 avg: 19.0 Episode 142 avg: 9.0 Episode 143 avg: 13.0 Episode 144 avg: 13.0 Episode 145 avg: 15.0 Episode 146 avg: 19.0 Episode 147 avg: 18.0 Episode 148 avg: 10.0 Episode 149 avg: 19.0 Episode 150 avg: 11.0 Episode 151 avg: 11.0 Episode 152 avg: 9.0 Episode 153 avg: 9.0 Episode 154 avg: 11.0 Episode 155 avg: 18.0 Episode 156 avg: 10.0 Episode 157 avg: 11.0 Episode 158 avg: 10.0 Episode 159 avg: 17.0 Episode 160 avg: 10.0 Episode 161 avg: 12.0 Episode 162 avg: 155.0 Episode 163 avg: 9.0 Episode 164 avg: 12.0 Episode 165 avg: 11.0 Episode 166 avg: 12.0 Episode 167 avg: 13.0 Episode 168 avg: 10.0 Episode 169 avg: 14.0 Episode 170 avg: 19.0 Episode 171 avg: 23.0 Episode 172 avg: 11.0 Episode 173 avg: 10.0 Episode 174 avg: 11.0 Episode 175 avg: 12.0 Episode 176 avg: 10.0 Episode 177 avg: 12.0 Episode 178 avg: 11.0 Episode 179 avg: 10.0 Episode 180 avg: 11.0 Episode 181 avg: 16.0 Episode 182 avg: 29.0 Episode 183 avg: 11.0 Episode 184 avg: 11.0 Episode 185 avg: 12.0 Episode 186 avg: 9.0 Episode 187 avg: 13.0 Episode 188 avg: 8.0 Episode 189 avg: 12.0 Episode 190 avg: 9.0 
Episode 191 avg: 15.0 Episode 192 avg: 8.0 Episode 193 avg: 9.0 Episode 194 avg: 12.0 Episode 195 avg: 14.0 Episode 196 avg: 12.0 Episode 197 avg: 10.0 Episode 198 avg: 10.0 Episode 199 avg: 15.0 Episode 200 avg: 10.0 Episode 201 avg: 42.0 Episode 202 avg: 14.0 Episode 203 avg: 14.0 Episode 204 avg: 10.0 Episode 205 avg: 11.0 Episode 206 avg: 9.0 Episode 207 avg: 11.0 Episode 208 avg: 11.0 Episode 209 avg: 38.0 Episode 210 avg: 10.0 Episode 211 avg: 11.0 Episode 212 avg: 10.0 Episode 213 avg: 12.0 Episode 214 avg: 13.0 Episode 215 avg: 11.0 Episode 216 avg: 10.0 Episode 217 avg: 10.0 Episode 218 avg: 11.0 Episode 219 avg: 10.0 Episode 220 avg: 10.0 Episode 221 avg: 10.0 Episode 222 avg: 13.0 Episode 223 avg: 9.0 Episode 224 avg: 14.0 Episode 225 avg: 8.0 Episode 226 avg: 22.0 Episode 227 avg: 10.0 Episode 228 avg: 27.0 Episode 229 avg: 13.0 Episode 230 avg: 11.0 Episode 231 avg: 11.0 Episode 232 avg: 11.0 Episode 233 avg: 19.0 Episode 234 avg: 67.0 Episode 235 avg: 10.0 Episode 236 avg: 20.0 Episode 237 avg: 11.0 Episode 238 avg: 15.0 Episode 239 avg: 10.0 Episode 240 avg: 11.0 Episode 241 avg: 11.0 Episode 242 avg: 11.0 Episode 243 avg: 9.0 Episode 244 avg: 10.0 Episode 245 avg: 19.0 Episode 246 avg: 8.0 Episode 247 avg: 8.0 Episode 248 avg: 10.0 Episode 249 avg: 12.0 Episode 250 avg: 10.0 Episode 251 avg: 17.0 Episode 252 avg: 18.0 Episode 253 avg: 10.0 Episode 254 avg: 13.0 Episode 255 avg: 10.0 Episode 256 avg: 14.0 Episode 257 avg: 33.0 Episode 258 avg: 13.0 Episode 259 avg: 13.0 Episode 260 avg: 8.0 Episode 261 avg: 12.0 Episode 262 avg: 11.0 Episode 263 avg: 9.0 Episode 264 avg: 21.0 Episode 265 avg: 9.0 Episode 266 avg: 10.0 Episode 267 avg: 14.0 Episode 268 avg: 15.0 Episode 269 avg: 11.0 Episode 270 avg: 9.0 Episode 271 avg: 10.0 Episode 272 avg: 11.0 Episode 273 avg: 15.0 Episode 274 avg: 17.0 Episode 275 avg: 14.0 Episode 276 avg: 8.0 Episode 277 avg: 21.0 Episode 278 avg: 37.0 Episode 279 avg: 11.0 Episode 280 avg: 175.0 Episode 281 avg: 13.0 Episode 282 avg: 16.0 Episode 283 avg: 10.0 Episode 284 avg: 10.0 Episode 285 avg: 10.0 Episode 286 avg: 15.0 Episode 287 avg: 12.0 Episode 288 avg: 10.0 Episode 289 avg: 40.0 Episode 290 avg: 12.0 Episode 291 avg: 15.0 Episode 292 avg: 11.0 Episode 293 avg: 8.0 Episode 294 avg: 10.0 Episode 295 avg: 11.0 Episode 296 avg: 16.0 Episode 297 avg: 13.0 Episode 298 avg: 18.0 Episode 299 avg: 10.0 Episode 300 avg: 14.0 Episode 301 avg: 12.0 Episode 302 avg: 13.0 Episode 303 avg: 18.0 Episode 304 avg: 16.0 Episode 305 avg: 40.0 Episode 306 avg: 10.0 Episode 307 avg: 8.0 Episode 308 avg: 19.0 Episode 309 avg: 14.0 Episode 310 avg: 10.0 Episode 311 avg: 33.0 Episode 312 avg: 16.0 Episode 313 avg: 10.0 Episode 314 avg: 38.0 Episode 315 avg: 9.0 Episode 316 avg: 11.0 Episode 317 avg: 16.0 Episode 318 avg: 11.0 Episode 319 avg: 40.0 Episode 320 avg: 11.0 Episode 321 avg: 11.0 Episode 322 avg: 18.0 Episode 323 avg: 12.0 Episode 324 avg: 14.0 Episode 325 avg: 10.0 Episode 326 avg: 11.0 Episode 327 avg: 11.0 Episode 328 avg: 14.0 Episode 329 avg: 9.0 Episode 330 avg: 44.0 Episode 331 avg: 10.0 Episode 332 avg: 10.0 Episode 333 avg: 17.0 Episode 334 avg: 19.0 Episode 335 avg: 17.0 Episode 336 avg: 10.0 Episode 337 avg: 11.0 Episode 338 avg: 10.0 Episode 339 avg: 11.0 Episode 340 avg: 9.0 Episode 341 avg: 9.0 Episode 342 avg: 17.0 Episode 343 avg: 10.0 Episode 344 avg: 18.0 Episode 345 avg: 21.0 Episode 346 avg: 9.0 Episode 347 avg: 9.0 Episode 348 avg: 15.0 Episode 349 avg: 9.0 Episode 350 avg: 27.0 Episode 351 avg: 14.0 Episode 352 avg: 16.0 Episode 353 
avg: 9.0 Episode 354 avg: 11.0 Episode 355 avg: 10.0 Episode 356 avg: 9.0 Episode 357 avg: 9.0 Episode 358 avg: 22.0 Episode 359 avg: 15.0 Episode 360 avg: 15.0 Episode 361 avg: 8.0 Episode 362 avg: 11.0 Episode 363 avg: 18.0 Episode 364 avg: 10.0 Episode 365 avg: 13.0 Episode 366 avg: 10.0 Episode 367 avg: 9.0 Episode 368 avg: 9.0 Episode 369 avg: 15.0 Episode 370 avg: 11.0 Episode 371 avg: 25.0 Episode 372 avg: 24.0 Episode 373 avg: 8.0 Episode 374 avg: 20.0 Episode 375 avg: 11.0 Episode 376 avg: 10.0 Episode 377 avg: 10.0 Episode 378 avg: 9.0 Episode 379 avg: 9.0 Episode 380 avg: 8.0 Episode 381 avg: 10.0 Episode 382 avg: 10.0 Episode 383 avg: 8.0 Episode 384 avg: 37.0 Episode 385 avg: 18.0 Episode 386 avg: 12.0 Episode 387 avg: 35.0 Episode 388 avg: 11.0 Episode 389 avg: 10.0 Episode 390 avg: 10.0 Episode 391 avg: 10.0 Episode 392 avg: 31.0 Episode 393 avg: 21.0 Episode 394 avg: 13.0 Episode 395 avg: 14.0 Episode 396 avg: 23.0 Episode 397 avg: 11.0 Episode 398 avg: 9.0 Episode 399 avg: 200.0 Episode 400 avg: 12.0 Episode 401 avg: 11.0 ###Markdown Testing with final learned q table values ###Code render = True q_table = np.load('./q_table_weights_final.npy') final_reward=[] while True: done = False state = get_discret_state(env.reset()) cum_reward=0 for step in range(steps_per_episode): if render : env.render() action = np.argmax(q_table[state]) new_state,reward,done,_ = env.step(action) new_state = get_discret_state(new_state) cum_reward+=reward if done: if render : env.render() if cum_reward>=200: print("Goal Reached!!",cum_reward) final_reward.append(1) else: print("Failed !! ",cum_reward) final_reward.append(0) break state = new_state env.close() if cum_reward >=200: break ![alt text]("https://github.com/isbhargav/portfolio/blob/master/images/hill_climbing.gif") ###Output fish: Unknown command: https://github.com/isbhargav/portfolio/blob/master/images/hill_climbing.gif fish: "https://github.com/isbhargav/portfolio/blob/master/images/hill_climbing.gif" ^ in command substitution fish: Unknown error while evaluating command substitution
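###Markdown For reference, the epsilon-greedy rule used in the training loop above can be written as a small standalone helper (illustrative only; the training loop already implements the same logic inline): ###Code
# Illustrative helper: the epsilon-greedy action selection used in the training loop above.
import numpy as np

def epsilon_greedy_action(q_values, epsilon, rng=np.random):
    """With probability epsilon pick a random action, otherwise the greedy one."""
    if rng.uniform(0.0, 1.0) < epsilon:
        return rng.randint(len(q_values))   # explore
    return int(np.argmax(q_values))         # exploit

# usage with the Q-table and discretised state defined above (hypothetical call):
# action = epsilon_greedy_action(q_table[state], explore_rate)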
reuter_LSTM.ipynb
###Markdown ###Code import tensorflow as tf (x_train, y_train),(x_test, y_test) = tf.keras.datasets.reuters.load_data(num_words=10000) x_train.shape, y_train.shape, x_test.shape, y_test.shape print(y_train[50], x_train[50]) len(x_train[50]), len(x_train[400]), len(x_train[200]) pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500) len(pad_x_train[50]) import numpy as np np.unique(y_train).shape, np.unique(y_train) ###Output _____no_output_____ ###Markdown make model ###Code model = tf.keras.models.Sequential() model.add(tf.keras.layers.Embedding(input_length=500, input_dim=10000, output_dim=24)) # input layer model.add(tf.keras.layers.LSTM(24, return_sequences=True, activation='tanh')) model.add(tf.keras.layers.LSTM(12, activation='tanh')) # model.add(tf.keras.layers.Flatten()) # hidden layer model.add(tf.keras.layers.Dense(46, activation='softmax')) # output layer model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc']) # gadget # hist = model.fit(pad_x_train, y_train, epochs=5, validation_split=0.3, batch_size=128) hist = model.fit(pad_x_train, y_train, epochs=100, validation_split=0.3, batch_size=256) ###Output _____no_output_____ ###Markdown Evaluation ###Code # 학습 시켰던 데이터 model.evaluate(pad_x_train, y_train) # loss: 2.4050 - acc: 0.3517 pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500) pad_x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=500) def pad_make(x_data): pad_x = tf.keras.preprocessing.sequence.pad_sequences(x_data, maxlen=500) return pad_x pad_make_x = pad_make(x_test) model.evaluate(pad_make_x, y_test) model.evaluate(pad_x_test, y_test) import matplotlib.pyplot as plt plt.plot(hist.history['loss']) plt.plot(hist.history['val_loss'],'r-') plt.show() plt.plot(hist.history['acc']) plt.plot(hist.history['val_acc'],'r-') plt.show() from sklearn.metrics import classification_report y_train_pred = model.predict(pad_x_train) y_train_pred[0] import numpy as np y_pred = np.argmax(y_train_pred, axis=1) y_pred.shape len(y_train) print(classification_report(y_train, y_pred)) y_test_pred = model.predict(pad_x_test) y_pred = np.argmax(y_test_pred, axis=1) print(classification_report(y_test, y_pred)) ###Output _____no_output_____ ###Markdown ###Code import tensorflow as tf (x_train,y_train),(x_test , y_test) = tf.keras.datasets.reuters.load_data(num_words=10000) x_train.shape , y_train.shape , x_test.shape , y_test.shape print(y_train[50] , x_train[50]) len(x_train[50]),len(x_train[100]),len(x_train[500]),len(x_train[1000]) pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train,maxlen=500) len(pad_x_train) import numpy as np np.unique(y_train) , len(np.unique(y_train)) ###Output _____no_output_____ ###Markdown make model ###Code model = tf.keras.models.Sequential() model.add( tf.keras.layers.Embedding(input_length=500,input_dim=10000, output_dim=24) ) # output_dim , 차원의 숫자를 뜻하며 임의로 숫자 설정할수 있음 model.add( tf.keras.layers.LSTM( 24 , return_sequences=True, activation='tanh')) # 차원의 숫자를 넣으면 됨 model.add( tf.keras.layers.LSTM( 12 , activation='tanh')) # 차원의 숫자를 넣으면 됨 , 임의의 숫자를 넣으면 됨 # model.add( tf.keras.layers.Flatten()) model.add( tf.keras.layers.Dense(46,activation='softmax')) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy' , metrics=['acc']) # hist = model.fit( pad_x_train , y_train , epochs=5 , validation_split=0.3 , batch_size=128) hist = model.fit( pad_x_train , y_train , epochs=100 , validation_split=0.3 , batch_size=256) ###Output Epoch 1/100 
25/25 [==============================] - 23s 760ms/step - loss: 3.7071 - acc: 0.3335 - val_loss: 3.4077 - val_acc: 0.3532 Epoch 2/100 25/25 [==============================] - 18s 735ms/step - loss: 3.2209 - acc: 0.3510 - val_loss: 2.9742 - val_acc: 0.3532 Epoch 3/100 25/25 [==============================] - 18s 735ms/step - loss: 2.8068 - acc: 0.3510 - val_loss: 2.6035 - val_acc: 0.3532 Epoch 4/100 25/25 [==============================] - 18s 727ms/step - loss: 2.5506 - acc: 0.3510 - val_loss: 2.4592 - val_acc: 0.3532 Epoch 5/100 25/25 [==============================] - 18s 728ms/step - loss: 2.4648 - acc: 0.3510 - val_loss: 2.4149 - val_acc: 0.3532 Epoch 6/100 25/25 [==============================] - 18s 729ms/step - loss: 2.4369 - acc: 0.3510 - val_loss: 2.3991 - val_acc: 0.3532 Epoch 7/100 25/25 [==============================] - 18s 732ms/step - loss: 2.4260 - acc: 0.3510 - val_loss: 2.3919 - val_acc: 0.3532 Epoch 8/100 25/25 [==============================] - 18s 729ms/step - loss: 2.4206 - acc: 0.3510 - val_loss: 2.3884 - val_acc: 0.3532 Epoch 9/100 25/25 [==============================] - 18s 732ms/step - loss: 2.4177 - acc: 0.3510 - val_loss: 2.3862 - val_acc: 0.3532 Epoch 10/100 25/25 [==============================] - 18s 732ms/step - loss: 2.4155 - acc: 0.3510 - val_loss: 2.3850 - val_acc: 0.3532 Epoch 11/100 25/25 [==============================] - 18s 734ms/step - loss: 2.4143 - acc: 0.3510 - val_loss: 2.3841 - val_acc: 0.3532 Epoch 12/100 25/25 [==============================] - 18s 734ms/step - loss: 2.4134 - acc: 0.3510 - val_loss: 2.3835 - val_acc: 0.3532 Epoch 13/100 25/25 [==============================] - 18s 736ms/step - loss: 2.4130 - acc: 0.3510 - val_loss: 2.3835 - val_acc: 0.3532 Epoch 14/100 25/25 [==============================] - 18s 731ms/step - loss: 2.4126 - acc: 0.3510 - val_loss: 2.3828 - val_acc: 0.3532 Epoch 15/100 25/25 [==============================] - 18s 737ms/step - loss: 2.4125 - acc: 0.3510 - val_loss: 2.3830 - val_acc: 0.3532 Epoch 16/100 25/25 [==============================] - 19s 744ms/step - loss: 2.4125 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 17/100 25/25 [==============================] - 18s 735ms/step - loss: 2.4121 - acc: 0.3510 - val_loss: 2.3830 - val_acc: 0.3532 Epoch 18/100 25/25 [==============================] - 18s 730ms/step - loss: 2.4121 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 19/100 25/25 [==============================] - 18s 738ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3828 - val_acc: 0.3532 Epoch 20/100 25/25 [==============================] - 18s 733ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 21/100 25/25 [==============================] - 18s 734ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 22/100 25/25 [==============================] - 18s 738ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 23/100 25/25 [==============================] - 18s 735ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 24/100 25/25 [==============================] - 18s 732ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 25/100 25/25 [==============================] - 18s 736ms/step - loss: 2.4113 - acc: 0.3510 - val_loss: 2.3806 - val_acc: 0.3532 Epoch 26/100 25/25 [==============================] - 18s 729ms/step - loss: 2.4105 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 27/100 25/25 [==============================] - 18s 
733ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 28/100 25/25 [==============================] - 19s 745ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 29/100 25/25 [==============================] - 19s 745ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 30/100 25/25 [==============================] - 18s 737ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 31/100 25/25 [==============================] - 18s 734ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 32/100 25/25 [==============================] - 18s 738ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 33/100 25/25 [==============================] - 18s 738ms/step - loss: 2.4113 - acc: 0.3510 - val_loss: 2.3818 - val_acc: 0.3532 Epoch 34/100 25/25 [==============================] - 18s 742ms/step - loss: 2.4107 - acc: 0.3510 - val_loss: 2.3804 - val_acc: 0.3532 Epoch 35/100 25/25 [==============================] - 18s 737ms/step - loss: 2.4053 - acc: 0.3510 - val_loss: 2.3666 - val_acc: 0.3532 Epoch 36/100 25/25 [==============================] - 18s 734ms/step - loss: 2.3225 - acc: 0.3582 - val_loss: 2.2018 - val_acc: 0.3614 Epoch 37/100 25/25 [==============================] - 18s 735ms/step - loss: 2.1353 - acc: 0.3566 - val_loss: 2.0785 - val_acc: 0.3570 Epoch 38/100 25/25 [==============================] - 18s 735ms/step - loss: 2.0049 - acc: 0.4023 - val_loss: 2.0026 - val_acc: 0.4675 Epoch 39/100 25/25 [==============================] - 18s 734ms/step - loss: 1.8935 - acc: 0.4974 - val_loss: 1.9382 - val_acc: 0.4946 Epoch 40/100 25/25 [==============================] - 18s 734ms/step - loss: 1.7933 - acc: 0.5352 - val_loss: 1.8713 - val_acc: 0.5202 Epoch 41/100 25/25 [==============================] - 18s 731ms/step - loss: 1.7095 - acc: 0.5564 - val_loss: 1.8449 - val_acc: 0.5206 Epoch 42/100 25/25 [==============================] - 18s 732ms/step - loss: 1.6357 - acc: 0.5740 - val_loss: 1.8130 - val_acc: 0.5236 Epoch 43/100 25/25 [==============================] - 18s 732ms/step - loss: 1.5635 - acc: 0.5794 - val_loss: 1.7959 - val_acc: 0.5377 Epoch 44/100 25/25 [==============================] - 18s 735ms/step - loss: 1.5221 - acc: 0.5903 - val_loss: 1.8081 - val_acc: 0.5117 Epoch 45/100 25/25 [==============================] - 18s 742ms/step - loss: 1.4753 - acc: 0.6003 - val_loss: 1.7944 - val_acc: 0.5232 Epoch 46/100 25/25 [==============================] - 18s 730ms/step - loss: 1.4192 - acc: 0.6227 - val_loss: 1.7823 - val_acc: 0.5351 Epoch 47/100 25/25 [==============================] - 18s 731ms/step - loss: 1.3641 - acc: 0.6464 - val_loss: 1.7770 - val_acc: 0.5458 Epoch 48/100 25/25 [==============================] - 18s 736ms/step - loss: 1.3326 - acc: 0.6553 - val_loss: 1.7690 - val_acc: 0.5581 Epoch 49/100 25/25 [==============================] - 18s 733ms/step - loss: 1.3059 - acc: 0.6641 - val_loss: 1.7661 - val_acc: 0.5566 Epoch 50/100 25/25 [==============================] - 18s 733ms/step - loss: 1.2617 - acc: 0.6736 - val_loss: 1.7727 - val_acc: 0.5521 Epoch 51/100 25/25 [==============================] - 18s 739ms/step - loss: 1.2334 - acc: 0.6830 - val_loss: 1.8205 - val_acc: 0.5362 Epoch 52/100 25/25 [==============================] - 18s 738ms/step - loss: 1.2489 - acc: 0.6609 - val_loss: 1.8212 - val_acc: 0.5388 Epoch 53/100 25/25 [==============================] - 18s 734ms/step - loss: 1.1978 - acc: 0.6903 - 
val_loss: 1.7972 - val_acc: 0.5573 Epoch 54/100 25/25 [==============================] - 18s 734ms/step - loss: 1.1634 - acc: 0.6967 - val_loss: 1.8209 - val_acc: 0.5403 Epoch 55/100 25/25 [==============================] - 18s 738ms/step - loss: 1.1383 - acc: 0.6991 - val_loss: 1.8117 - val_acc: 0.5518 Epoch 56/100 25/25 [==============================] - 18s 739ms/step - loss: 1.1334 - acc: 0.7011 - val_loss: 1.8290 - val_acc: 0.5514 Epoch 57/100 25/25 [==============================] - 18s 730ms/step - loss: 1.1643 - acc: 0.6879 - val_loss: 1.8339 - val_acc: 0.5506 Epoch 58/100 25/25 [==============================] - 18s 734ms/step - loss: 1.1045 - acc: 0.7059 - val_loss: 1.8434 - val_acc: 0.5484 Epoch 59/100 25/25 [==============================] - 18s 730ms/step - loss: 1.0894 - acc: 0.7011 - val_loss: 1.8156 - val_acc: 0.5532 Epoch 60/100 25/25 [==============================] - 18s 733ms/step - loss: 1.0564 - acc: 0.7189 - val_loss: 1.8418 - val_acc: 0.5514 Epoch 61/100 25/25 [==============================] - 18s 733ms/step - loss: 1.0429 - acc: 0.7207 - val_loss: 1.8569 - val_acc: 0.5436 Epoch 62/100 25/25 [==============================] - 18s 727ms/step - loss: 1.0322 - acc: 0.7286 - val_loss: 1.8626 - val_acc: 0.5458 Epoch 63/100 25/25 [==============================] - 18s 727ms/step - loss: 1.0082 - acc: 0.7314 - val_loss: 1.8528 - val_acc: 0.5558 Epoch 64/100 25/25 [==============================] - 18s 731ms/step - loss: 1.0077 - acc: 0.7317 - val_loss: 1.9516 - val_acc: 0.5184 Epoch 65/100 25/25 [==============================] - 18s 733ms/step - loss: 1.0117 - acc: 0.7344 - val_loss: 1.8715 - val_acc: 0.5521 Epoch 66/100 25/25 [==============================] - 18s 728ms/step - loss: 0.9765 - acc: 0.7439 - val_loss: 1.8724 - val_acc: 0.5525 Epoch 67/100 25/25 [==============================] - 18s 729ms/step - loss: 0.9724 - acc: 0.7455 - val_loss: 1.8824 - val_acc: 0.5577 Epoch 68/100 25/25 [==============================] - 18s 735ms/step - loss: 0.9653 - acc: 0.7460 - val_loss: 1.8827 - val_acc: 0.5573 Epoch 69/100 25/25 [==============================] - 18s 735ms/step - loss: 0.9405 - acc: 0.7587 - val_loss: 1.8950 - val_acc: 0.5558 Epoch 70/100 25/25 [==============================] - 18s 731ms/step - loss: 0.9299 - acc: 0.7560 - val_loss: 1.9082 - val_acc: 0.5510 Epoch 71/100 25/25 [==============================] - 18s 731ms/step - loss: 0.9154 - acc: 0.7644 - val_loss: 1.9026 - val_acc: 0.5551 Epoch 72/100 25/25 [==============================] - 18s 732ms/step - loss: 0.8991 - acc: 0.7743 - val_loss: 1.9316 - val_acc: 0.5466 Epoch 73/100 25/25 [==============================] - 18s 732ms/step - loss: 0.8859 - acc: 0.7716 - val_loss: 1.9283 - val_acc: 0.5462 Epoch 74/100 25/25 [==============================] - 18s 730ms/step - loss: 0.8769 - acc: 0.7830 - val_loss: 1.9313 - val_acc: 0.5525 Epoch 75/100 25/25 [==============================] - 18s 736ms/step - loss: 0.8662 - acc: 0.7837 - val_loss: 1.9420 - val_acc: 0.5518 Epoch 76/100 25/25 [==============================] - 18s 737ms/step - loss: 0.8523 - acc: 0.7929 - val_loss: 1.9365 - val_acc: 0.5555 Epoch 77/100 25/25 [==============================] - 18s 735ms/step - loss: 0.8403 - acc: 0.7932 - val_loss: 1.9594 - val_acc: 0.5469 Epoch 78/100 25/25 [==============================] - 18s 732ms/step - loss: 0.8288 - acc: 0.8002 - val_loss: 1.9396 - val_acc: 0.5599 Epoch 79/100 25/25 [==============================] - 18s 731ms/step - loss: 0.8198 - acc: 0.8010 - val_loss: 1.9525 - val_acc: 0.5622 Epoch 
80/100 25/25 [==============================] - 18s 732ms/step - loss: 0.8170 - acc: 0.8004 - val_loss: 1.9627 - val_acc: 0.5551 Epoch 81/100 25/25 [==============================] - 18s 734ms/step - loss: 0.8036 - acc: 0.8024 - val_loss: 1.9593 - val_acc: 0.5618 Epoch 82/100 25/25 [==============================] - 18s 730ms/step - loss: 0.7995 - acc: 0.8056 - val_loss: 1.9736 - val_acc: 0.5577 Epoch 83/100 25/25 [==============================] - 18s 727ms/step - loss: 0.7850 - acc: 0.8141 - val_loss: 1.9767 - val_acc: 0.5562 Epoch 84/100 25/25 [==============================] - 18s 729ms/step - loss: 0.7772 - acc: 0.8144 - val_loss: 1.9993 - val_acc: 0.5510 Epoch 85/100 25/25 [==============================] - 18s 732ms/step - loss: 0.7878 - acc: 0.8042 - val_loss: 1.9886 - val_acc: 0.5584 Epoch 86/100 25/25 [==============================] - 18s 735ms/step - loss: 0.7892 - acc: 0.8071 - val_loss: 1.9991 - val_acc: 0.5525 Epoch 87/100 25/25 [==============================] - 18s 733ms/step - loss: 0.7934 - acc: 0.8024 - val_loss: 1.9898 - val_acc: 0.5495 Epoch 88/100 25/25 [==============================] - 18s 734ms/step - loss: 0.7975 - acc: 0.8012 - val_loss: 1.9630 - val_acc: 0.5792 Epoch 89/100 25/25 [==============================] - 18s 729ms/step - loss: 0.7708 - acc: 0.8099 - val_loss: 1.9608 - val_acc: 0.5703 Epoch 90/100 25/25 [==============================] - 18s 729ms/step - loss: 0.7767 - acc: 0.8055 - val_loss: 1.9455 - val_acc: 0.5807 Epoch 91/100 25/25 [==============================] - 18s 732ms/step - loss: 0.7452 - acc: 0.8185 - val_loss: 2.0236 - val_acc: 0.5573 Epoch 92/100 25/25 [==============================] - 18s 736ms/step - loss: 0.7316 - acc: 0.8220 - val_loss: 1.9620 - val_acc: 0.5774 Epoch 93/100 25/25 [==============================] - 18s 734ms/step - loss: 0.7167 - acc: 0.8295 - val_loss: 1.9868 - val_acc: 0.5766 Epoch 94/100 25/25 [==============================] - 18s 733ms/step - loss: 0.7047 - acc: 0.8306 - val_loss: 2.0087 - val_acc: 0.5647 Epoch 95/100 25/25 [==============================] - 18s 726ms/step - loss: 0.6921 - acc: 0.8360 - val_loss: 1.9860 - val_acc: 0.5759 Epoch 96/100 25/25 [==============================] - 18s 728ms/step - loss: 0.6785 - acc: 0.8379 - val_loss: 2.0026 - val_acc: 0.5696 Epoch 97/100 25/25 [==============================] - 18s 733ms/step - loss: 0.6652 - acc: 0.8378 - val_loss: 1.9960 - val_acc: 0.5744 Epoch 98/100 25/25 [==============================] - 18s 731ms/step - loss: 0.6559 - acc: 0.8518 - val_loss: 2.0038 - val_acc: 0.5740 Epoch 99/100 25/25 [==============================] - 18s 732ms/step - loss: 0.6435 - acc: 0.8546 - val_loss: 2.0010 - val_acc: 0.5733 Epoch 100/100 25/25 [==============================] - 18s 730ms/step - loss: 0.6376 - acc: 0.8564 - val_loss: 2.0140 - val_acc: 0.5755 ###Markdown Evaluation ###Code model.evaluate( pad_x_train, y_train) pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train,maxlen=500) pad_x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test,maxlen=500) def pad_make(x_data): pad_x = tf.keras.preprocessing.sequence.pad_sequences(x_data,maxlen=500) return pad_x pad_make_x = pad_make(x_test) model.evaluate(pad_make_x,y_test) model.evaluate( pad_x_test , y_test) import matplotlib.pyplot as plt plt.plot(hist.history['loss']) plt.plot(hist.history['val_loss']) plt.show() plt.plot(hist.history['acc']) plt.plot(hist.history['val_acc']) from sklearn.metrics import classification_report y_train_pred = model.predict(pad_x_train) y_train_pred[0] import 
numpy as np y_pred = np.argmax(y_train_pred,axis=1) y_pred.shape print(classification_report(y_train,y_pred)) y_test_pred = model.predict(pad_x_test) y_pred = np.argmax(y_test_pred,axis=1) print(classification_report(y_test,y_pred)) ###Output _____no_output_____ ###Markdown ###Code import tensorflow as tf (x_train,y_train),(x_test,y_test) = tf.keras.datasets.reuters.load_data(num_words=10000) x_train.shape,y_train.shape,x_test.shape,y_test.shape print(y_train[50], x_train[50]) pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500) len(pad_x_train[50]) import numpy as np np.unique(y_train).shape, np.unique(y_train) ###Output _____no_output_____ ###Markdown make model ###Code model = tf.keras.models.Sequential() model.add(tf.keras.layers.Embedding(input_length=500, input_dim=10000, output_dim=24)) # input layer # model.add(tf.keras.layers.Flatten()) # hidden model.add(tf.keras.layers.LSTM(24, return_sequences=True, activation='tanh')) # hidden , return_sequences=True, activation='tanh' defalut 값이라 안넣어줘도 됨 model.add(tf.keras.layers.LSTM(12, activation='tanh')) # hidden model.add(tf.keras.layers.Dense(46, activation='softmax')) # output layer model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc']) # gadget # model.fit(pad_x_train, y_train) # hist = model.fit(pad_x_train, y_train, epochs=5, validation_split=0.3, batch_size=128) hist = model.fit(pad_x_train, y_train, epochs=100, validation_split=0.3, batch_size=256) ###Output Epoch 1/100 25/25 [==============================] - 20s 693ms/step - loss: 3.7307 - acc: 0.3302 - val_loss: 3.4548 - val_acc: 0.3532 Epoch 2/100 25/25 [==============================] - 16s 659ms/step - loss: 3.2141 - acc: 0.3510 - val_loss: 2.9482 - val_acc: 0.3532 Epoch 3/100 25/25 [==============================] - 16s 661ms/step - loss: 2.7955 - acc: 0.3510 - val_loss: 2.6137 - val_acc: 0.3532 Epoch 4/100 25/25 [==============================] - 16s 663ms/step - loss: 2.5623 - acc: 0.3510 - val_loss: 2.4733 - val_acc: 0.3532 Epoch 5/100 25/25 [==============================] - 17s 664ms/step - loss: 2.4760 - acc: 0.3510 - val_loss: 2.4252 - val_acc: 0.3532 Epoch 6/100 25/25 [==============================] - 16s 662ms/step - loss: 2.4441 - acc: 0.3510 - val_loss: 2.4057 - val_acc: 0.3532 Epoch 7/100 25/25 [==============================] - 17s 667ms/step - loss: 2.4300 - acc: 0.3510 - val_loss: 2.3962 - val_acc: 0.3532 Epoch 8/100 25/25 [==============================] - 17s 663ms/step - loss: 2.4228 - acc: 0.3510 - val_loss: 2.3911 - val_acc: 0.3532 Epoch 9/100 25/25 [==============================] - 16s 662ms/step - loss: 2.4191 - acc: 0.3510 - val_loss: 2.3884 - val_acc: 0.3532 Epoch 10/100 25/25 [==============================] - 16s 661ms/step - loss: 2.4168 - acc: 0.3510 - val_loss: 2.3869 - val_acc: 0.3532 Epoch 11/100 25/25 [==============================] - 17s 666ms/step - loss: 2.4155 - acc: 0.3510 - val_loss: 2.3855 - val_acc: 0.3532 Epoch 12/100 25/25 [==============================] - 16s 660ms/step - loss: 2.4145 - acc: 0.3510 - val_loss: 2.3850 - val_acc: 0.3532 Epoch 13/100 25/25 [==============================] - 17s 664ms/step - loss: 2.4139 - acc: 0.3510 - val_loss: 2.3847 - val_acc: 0.3532 Epoch 14/100 25/25 [==============================] - 16s 661ms/step - loss: 2.4131 - acc: 0.3510 - val_loss: 2.3839 - val_acc: 0.3532 Epoch 15/100 25/25 [==============================] - 17s 669ms/step - loss: 2.4129 - acc: 0.3510 - val_loss: 2.3835 - val_acc: 0.3532 Epoch 16/100 25/25 
[==============================] - 17s 664ms/step - loss: 2.4129 - acc: 0.3510 - val_loss: 2.3833 - val_acc: 0.3532 Epoch 17/100 25/25 [==============================] - 17s 665ms/step - loss: 2.4125 - acc: 0.3510 - val_loss: 2.3831 - val_acc: 0.3532 Epoch 18/100 25/25 [==============================] - 16s 662ms/step - loss: 2.4121 - acc: 0.3510 - val_loss: 2.3830 - val_acc: 0.3532 Epoch 19/100 25/25 [==============================] - 16s 659ms/step - loss: 2.4121 - acc: 0.3510 - val_loss: 2.3830 - val_acc: 0.3532 Epoch 20/100 25/25 [==============================] - 16s 661ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3828 - val_acc: 0.3532 Epoch 21/100 25/25 [==============================] - 16s 660ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3828 - val_acc: 0.3532 Epoch 22/100 25/25 [==============================] - 17s 668ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 23/100 25/25 [==============================] - 17s 664ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3830 - val_acc: 0.3532 Epoch 24/100 25/25 [==============================] - 16s 662ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 25/100 25/25 [==============================] - 16s 660ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 26/100 25/25 [==============================] - 17s 664ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3827 - val_acc: 0.3532 Epoch 27/100 25/25 [==============================] - 17s 666ms/step - loss: 2.4122 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 28/100 25/25 [==============================] - 16s 662ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 29/100 25/25 [==============================] - 17s 663ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 30/100 25/25 [==============================] - 16s 661ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3829 - val_acc: 0.3532 Epoch 31/100 25/25 [==============================] - 17s 662ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 32/100 25/25 [==============================] - 17s 664ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 33/100 25/25 [==============================] - 17s 668ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 34/100 25/25 [==============================] - 16s 661ms/step - loss: 2.4114 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 35/100 25/25 [==============================] - 17s 663ms/step - loss: 2.4114 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 36/100 25/25 [==============================] - 17s 666ms/step - loss: 2.4114 - acc: 0.3510 - val_loss: 2.3819 - val_acc: 0.3532 Epoch 37/100 25/25 [==============================] - 17s 665ms/step - loss: 2.4088 - acc: 0.3510 - val_loss: 2.3739 - val_acc: 0.3532 Epoch 38/100 25/25 [==============================] - 17s 664ms/step - loss: 2.4108 - acc: 0.3510 - val_loss: 2.3788 - val_acc: 0.3532 Epoch 39/100 25/25 [==============================] - 17s 663ms/step - loss: 2.3196 - acc: 0.3536 - val_loss: 2.2430 - val_acc: 0.3558 Epoch 40/100 25/25 [==============================] - 17s 671ms/step - loss: 2.1767 - acc: 0.3539 - val_loss: 2.1014 - val_acc: 0.3555 Epoch 41/100 25/25 [==============================] - 17s 664ms/step - loss: 2.0438 - acc: 0.3633 - val_loss: 2.0353 - val_acc: 0.3659 Epoch 42/100 25/25 [==============================] - 17s 
663ms/step - loss: 1.9519 - acc: 0.3843 - val_loss: 1.9789 - val_acc: 0.4234 Epoch 43/100 25/25 [==============================] - 17s 665ms/step - loss: 1.8627 - acc: 0.4781 - val_loss: 1.9385 - val_acc: 0.4560 Epoch 44/100 25/25 [==============================] - 17s 669ms/step - loss: 1.7719 - acc: 0.5383 - val_loss: 1.8909 - val_acc: 0.5095 Epoch 45/100 25/25 [==============================] - 17s 670ms/step - loss: 1.6881 - acc: 0.5769 - val_loss: 1.8522 - val_acc: 0.5232 Epoch 46/100 25/25 [==============================] - 17s 670ms/step - loss: 1.6130 - acc: 0.5917 - val_loss: 1.8175 - val_acc: 0.5406 Epoch 47/100 25/25 [==============================] - 17s 675ms/step - loss: 1.5574 - acc: 0.6186 - val_loss: 1.7861 - val_acc: 0.5469 Epoch 48/100 25/25 [==============================] - 17s 672ms/step - loss: 1.4954 - acc: 0.6342 - val_loss: 1.8306 - val_acc: 0.5236 Epoch 49/100 25/25 [==============================] - 17s 674ms/step - loss: 1.4352 - acc: 0.6498 - val_loss: 1.7646 - val_acc: 0.5532 Epoch 50/100 25/25 [==============================] - 17s 673ms/step - loss: 1.3588 - acc: 0.6728 - val_loss: 1.7662 - val_acc: 0.5610 Epoch 51/100 25/25 [==============================] - 17s 672ms/step - loss: 1.3046 - acc: 0.6884 - val_loss: 1.7772 - val_acc: 0.5321 Epoch 52/100 25/25 [==============================] - 17s 665ms/step - loss: 1.3024 - acc: 0.6785 - val_loss: 1.7612 - val_acc: 0.5536 Epoch 53/100 25/25 [==============================] - 17s 665ms/step - loss: 1.2291 - acc: 0.7008 - val_loss: 1.7537 - val_acc: 0.5699 Epoch 54/100 25/25 [==============================] - 17s 667ms/step - loss: 1.1786 - acc: 0.7202 - val_loss: 1.7715 - val_acc: 0.5625 Epoch 55/100 25/25 [==============================] - 17s 671ms/step - loss: 1.1370 - acc: 0.7258 - val_loss: 1.7781 - val_acc: 0.5607 Epoch 56/100 25/25 [==============================] - 17s 663ms/step - loss: 1.1087 - acc: 0.7320 - val_loss: 1.7656 - val_acc: 0.5718 Epoch 57/100 25/25 [==============================] - 17s 663ms/step - loss: 1.0696 - acc: 0.7396 - val_loss: 1.7827 - val_acc: 0.5714 Epoch 58/100 25/25 [==============================] - 17s 669ms/step - loss: 1.0347 - acc: 0.7488 - val_loss: 1.7957 - val_acc: 0.5659 Epoch 59/100 25/25 [==============================] - 17s 667ms/step - loss: 1.0130 - acc: 0.7481 - val_loss: 1.7948 - val_acc: 0.5677 Epoch 60/100 25/25 [==============================] - 17s 669ms/step - loss: 0.9932 - acc: 0.7539 - val_loss: 1.8035 - val_acc: 0.5740 Epoch 61/100 25/25 [==============================] - 17s 666ms/step - loss: 0.9686 - acc: 0.7597 - val_loss: 1.8080 - val_acc: 0.5733 Epoch 62/100 25/25 [==============================] - 17s 665ms/step - loss: 0.9611 - acc: 0.7625 - val_loss: 1.8097 - val_acc: 0.5763 Epoch 63/100 25/25 [==============================] - 17s 666ms/step - loss: 0.9494 - acc: 0.7651 - val_loss: 1.8637 - val_acc: 0.5562 Epoch 64/100 25/25 [==============================] - 17s 666ms/step - loss: 0.9258 - acc: 0.7692 - val_loss: 1.8259 - val_acc: 0.5740 Epoch 65/100 25/25 [==============================] - 17s 668ms/step - loss: 0.9088 - acc: 0.7789 - val_loss: 1.8496 - val_acc: 0.5729 Epoch 66/100 25/25 [==============================] - 17s 666ms/step - loss: 0.8804 - acc: 0.7853 - val_loss: 1.8458 - val_acc: 0.5751 Epoch 67/100 25/25 [==============================] - 17s 665ms/step - loss: 0.8587 - acc: 0.7950 - val_loss: 1.8367 - val_acc: 0.5755 Epoch 68/100 25/25 [==============================] - 17s 665ms/step - loss: 0.8417 - acc: 0.8007 - 
val_loss: 1.8456 - val_acc: 0.5788 Epoch 69/100 25/25 [==============================] - 17s 667ms/step - loss: 0.8257 - acc: 0.8053 - val_loss: 1.8579 - val_acc: 0.5763 Epoch 70/100 25/25 [==============================] - 17s 665ms/step - loss: 0.8080 - acc: 0.8094 - val_loss: 1.8650 - val_acc: 0.5785 Epoch 71/100 25/25 [==============================] - 17s 666ms/step - loss: 0.7904 - acc: 0.8150 - val_loss: 1.8616 - val_acc: 0.5814 Epoch 72/100 25/25 [==============================] - 17s 664ms/step - loss: 0.7766 - acc: 0.8168 - val_loss: 1.8660 - val_acc: 0.5744 Epoch 73/100 25/25 [==============================] - 17s 665ms/step - loss: 0.7645 - acc: 0.8220 - val_loss: 1.8852 - val_acc: 0.5770 Epoch 74/100 25/25 [==============================] - 17s 664ms/step - loss: 0.7555 - acc: 0.8242 - val_loss: 1.8949 - val_acc: 0.5826 Epoch 75/100 25/25 [==============================] - 17s 665ms/step - loss: 0.7489 - acc: 0.8271 - val_loss: 1.8924 - val_acc: 0.5844 Epoch 76/100 25/25 [==============================] - 17s 670ms/step - loss: 0.7427 - acc: 0.8265 - val_loss: 1.9186 - val_acc: 0.5788 Epoch 77/100 25/25 [==============================] - 17s 672ms/step - loss: 0.7330 - acc: 0.8293 - val_loss: 1.9112 - val_acc: 0.5751 Epoch 78/100 25/25 [==============================] - 17s 672ms/step - loss: 0.7196 - acc: 0.8312 - val_loss: 1.9126 - val_acc: 0.5781 Epoch 79/100 25/25 [==============================] - 17s 670ms/step - loss: 0.7074 - acc: 0.8368 - val_loss: 1.9179 - val_acc: 0.5792 Epoch 80/100 25/25 [==============================] - 17s 669ms/step - loss: 0.7047 - acc: 0.8366 - val_loss: 1.9153 - val_acc: 0.5826 Epoch 81/100 25/25 [==============================] - 17s 670ms/step - loss: 0.7046 - acc: 0.8362 - val_loss: 1.9144 - val_acc: 0.5870 Epoch 82/100 25/25 [==============================] - 17s 665ms/step - loss: 0.6938 - acc: 0.8386 - val_loss: 1.9669 - val_acc: 0.5707 Epoch 83/100 25/25 [==============================] - 17s 664ms/step - loss: 0.6891 - acc: 0.8417 - val_loss: 1.9217 - val_acc: 0.5833 Epoch 84/100 25/25 [==============================] - 17s 666ms/step - loss: 0.6844 - acc: 0.8409 - val_loss: 1.9427 - val_acc: 0.5826 Epoch 85/100 25/25 [==============================] - 17s 670ms/step - loss: 0.6740 - acc: 0.8433 - val_loss: 1.9515 - val_acc: 0.5811 Epoch 86/100 25/25 [==============================] - 17s 665ms/step - loss: 0.6543 - acc: 0.8498 - val_loss: 1.9390 - val_acc: 0.5889 Epoch 87/100 25/25 [==============================] - 17s 671ms/step - loss: 0.6489 - acc: 0.8502 - val_loss: 1.9651 - val_acc: 0.5777 Epoch 88/100 25/25 [==============================] - 17s 671ms/step - loss: 0.6387 - acc: 0.8549 - val_loss: 1.9552 - val_acc: 0.5807 Epoch 89/100 25/25 [==============================] - 17s 670ms/step - loss: 0.6276 - acc: 0.8578 - val_loss: 1.9387 - val_acc: 0.5874 Epoch 90/100 25/25 [==============================] - 17s 663ms/step - loss: 0.6199 - acc: 0.8610 - val_loss: 1.9675 - val_acc: 0.5807 Epoch 91/100 25/25 [==============================] - 17s 663ms/step - loss: 0.6118 - acc: 0.8650 - val_loss: 1.9553 - val_acc: 0.5874 Epoch 92/100 25/25 [==============================] - 17s 668ms/step - loss: 0.6017 - acc: 0.8648 - val_loss: 1.9649 - val_acc: 0.5881 Epoch 93/100 25/25 [==============================] - 17s 665ms/step - loss: 0.6033 - acc: 0.8675 - val_loss: 1.9725 - val_acc: 0.5855 Epoch 94/100 25/25 [==============================] - 17s 668ms/step - loss: 0.6525 - acc: 0.8513 - val_loss: 2.0153 - val_acc: 0.5770 Epoch 
95/100 25/25 [==============================] - 17s 668ms/step - loss: 0.6314 - acc: 0.8529 - val_loss: 1.9722 - val_acc: 0.5889 Epoch 96/100 25/25 [==============================] - 17s 667ms/step - loss: 0.6027 - acc: 0.8615 - val_loss: 1.9776 - val_acc: 0.5878 Epoch 97/100 25/25 [==============================] - 17s 668ms/step - loss: 0.5889 - acc: 0.8654 - val_loss: 1.9771 - val_acc: 0.5881 Epoch 98/100 25/25 [==============================] - 17s 670ms/step - loss: 0.5762 - acc: 0.8728 - val_loss: 2.0008 - val_acc: 0.5826 Epoch 99/100 25/25 [==============================] - 17s 666ms/step - loss: 0.5855 - acc: 0.8646 - val_loss: 2.0045 - val_acc: 0.5889 Epoch 100/100 25/25 [==============================] - 17s 664ms/step - loss: 0.5666 - acc: 0.8726 - val_loss: 1.9869 - val_acc: 0.5929 ###Markdown Evaluation ###Code # 학습시켰던 데이터 model.evaluate(pad_x_train,y_train) # - loss: 2.4055 - acc: 0.3517 ###Output 281/281 [==============================] - 17s 61ms/step - loss: 0.9876 - acc: 0.7892 ###Markdown 전처리 ###Code pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500) pad_x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=500) def pad_make(x_data): pad_x = tf.keras.preprocessing.sequence.pad_sequences(x_data, maxlen=500) return pad_x pad_make_x = pad_make(x_test) model.evaluate(pad_make_x, y_test) model.evaluate(pad_x_test, y_test) import matplotlib.pyplot as plt plt.plot(hist.history['loss']) plt.plot(hist.history['val_loss']) plt.show() plt.plot(hist.history['acc']) plt.plot(hist.history['val_acc']) plt.show() from sklearn.metrics import classification_report y_train_predict = model.predict(pad_x_train) y_train_predict[0] import numpy as np y_pred = np.argmax(y_train_predict, axis=1) y_pred.shape len(y_train) print(classification_report(y_train, y_pred)) y_test_pred = model.predict(pad_x_test) y_pred = np.argmax(y_test_pred, axis=1) print(classification_report(y_test, y_pred)) ###Output _____no_output_____ ###Markdown ###Code import tensorflow as tf (x_train, y_train), (x_test, y_test) = tf.keras.datasets.reuters.load_data(num_words=10000) x_train.shape, y_train.shape, x_test.shape, y_test.shape, print(y_train[5], x_train[50]) pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500) len(pad_x_train[50]) import numpy as np np.unique(y_train) len(np.unique(y_train)) ###Output _____no_output_____ ###Markdown make model ###Code model = tf.keras.Sequential() model.add(tf.keras.layers.Embedding(input_length=500, input_dim=10000, output_dim=42)) # input layer model.add(tf.keras.layers.LSTM(24, return_sequences=True, activation='tanh')) model.add(tf.keras.layers.LSTM(12, activation='tanh')) model.add(tf.keras.layers.Flatten()) # hidden layer model.add(tf.keras.layers.Dense(46, activation='softmax')) # output layer model.compile(optimizer='adam', loss='sparse_categorical_crossentropy') #gadget hist = model.fit(pad_x_train, y_train, epochs=100, validation_split=0.3, batch_size=256) ###Output _____no_output_____ ###Markdown evaluation ###Code hist.history.keys() # 학습 시켰던 데이터 model.evaluate(pad_x_train, y_train) pad_x_train= tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500) pad_x_test= tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=500) def pad_make(x_data): pad_x= tf.keras.preprocessing.sequence.pad_sequences(x_data, maxlen=500) return pad_x pad_make_x = pad_make(x_test) model.evaluate(pad_make_x, y_test) pad_make(x_test) model.evaluate(pad_x_test, y_test) import matplotlib.pyplot as plt 
hist.history['loss']
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.show()
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.show()
###Output
 _____no_output_____
###Markdown
Reuters LSTM: an LSTM remembers the positions furthest back in the sequence.
Args: arguments, i.e. the parameters.
Datasets
###Code
import tensorflow as tf
###Output
 _____no_output_____
###Markdown
* Tuple of Numpy arrays: (x_train, y_train), (x_test, y_test)
Result: ((8982,), (8982,), (2246,), (2246,))
###Code
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.reuters.load_data(num_words=10000)
x_train.shape, y_train.shape, x_test.shape, y_test.shape
###Output
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/datasets/reuters.py:143: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
  x_train, y_train = np.array(xs[:idx]), np.array(labels[:idx])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/datasets/reuters.py:144: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
  x_test, y_test = np.array(xs[idx:]), np.array(labels[idx:])
###Markdown
* Check what is stored at index 50 (an arbitrary sample).
Result: 4 [1, 1479, 1197, ...]
The text is encoded as numbers, one number per word.
###Code
print(y_train[50], x_train[50])
###Output
4 [1, 1479, 1197, 71, 8, 25, 1479, 1197, 640, 71, 304, 471, 80, 9, 1379, 1901, 4530, 6797, 79, 5, 8144, 71, 175, 80, 58, 4, 1279, 5, 63, 32, 20, 5, 4, 326, 175, 80, 335, 7, 10, 845, 31, 4, 221, 9, 108, 259, 1479, 1197, 640, 8, 16, 600, 69, 68, 11, 15, 6, 8144, 21, 397, 321, 6, 438, 1761, 3072, 79, 5, 8144, 1040, 894, 1051, 617, 80, 4, 617, 80, 23, 1051, 172, 3814, 3206, 8144, 175, 79, 9, 1379, 6, 264, 395, 3814, 3206, 79, 1479, 1197, 9, 25, 323, 8, 4, 8144, 80, 23, 381, 43, 42, 205, 50, 77, 33, 909, 9, 3509, 22, 216, 6, 216, 17, 12]
###Markdown
* Before feeding the inputs, check whether the data sizes (the number of columns) are the same.
Result: (118, 90, 212)
The sizes are all different -> force them to the same length -> pad with 0, a value that does not affect the numeric computation.
###Code
len(x_train[50]), len(x_train[400]), len(x_train[200])
pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500)
len(pad_x_train[50])
# pad_x_train[50]
import numpy as np
# len(np.unique(y_train))
np.unique(y_train).shape
###Output
 _____no_output_____
###Markdown
Make model
###Code
model = tf.keras.models.Sequential()
###Output
 _____no_output_____
###Markdown
* Input layer
Every input (column) size, i.e. every data size, must be the same.
```
Embedding(
    input_length=500,   # input data size = 500
    input_dim=10000,    # sets the size of the vocabulary (the basis of the dimensionality)
    output_dim=24,      # turns each word into a 24-dimensional vector (so related words end up related)
)
```
###Code
model.add(tf.keras.layers.Embedding(input_length=500, input_dim=10000, output_dim=24))
###Output
 _____no_output_____
###Markdown
* Hidden layer
Flattening the data would be done with
```
model.add(tf.keras.layers.Flatten())
```
tf.keras.layers.LSTM(units, activation='tanh', return_sequences=False)
Partway through a layer like this the network forgets which word was fed in first. The LSTM was created so that, even as time passes, it keeps remembering the earliest inputs, on the idea that this should give better performance (it is used here in place of Flatten).
When applying it:
1. The first LSTM must match the output size of the Embedding layer right in front of it (the part that connects to the embedding).
2. Because the layers are fully connected, the unit count can differ from the second LSTM onward.
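To make the shape bookkeeping concrete, here is a minimal, self-contained sketch (not one of this notebook's cells; the variable name `sketch` and the explicit `build()` call are only for illustration) that stacks the same layers and prints each layer's output shape:
```
# Sketch only: trace how the tensor shapes flow through the stacked LSTMs,
# assuming the same settings used above (vocabulary 10000, padded length 500).
import tensorflow as tf

sketch = tf.keras.models.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=24, input_length=500),  # -> (None, 500, 24)
    tf.keras.layers.LSTM(24, return_sequences=True, activation='tanh'),           # keeps every time step -> (None, 500, 24)
    tf.keras.layers.LSTM(12, activation='tanh'),                                  # returns only the last step -> (None, 12)
    tf.keras.layers.Dense(46, activation='softmax'),                              # one probability per topic -> (None, 46)
])
sketch.build(input_shape=(None, 500))
sketch.summary()  # the printed output shapes match the comments above
```
The shapes show what return_sequences=True buys on the first LSTM: it passes the full (500, 24) sequence on to the second LSTM, which then collapses it into a single 12-dimensional vector for the classifier.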
###Code
model.add(tf.keras.layers.LSTM(24, return_sequences=True, activation='tanh'))
model.add(tf.keras.layers.LSTM(12, activation='tanh'))
###Output
 _____no_output_____
###Markdown
* Output layer
np.unique(y_train) gives the number of unique labels; result: (46,)
The Dense layer takes the number of unique classes.
Because there are 3 or more classes, softmax is used as the activation.
###Code
model.add(tf.keras.layers.Dense(46, activation='softmax'))
###Output
 _____no_output_____
###Markdown
* Gadget
optimizer = adam: sets the rule that lets the model learn well
loss = sparse_categorical_crossentropy: takes integer labels (use this to skip one-hot encoding; it also tells the model this is a classification task)
###Code
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])
###Output
 _____no_output_____
###Markdown
Train
Before: one epoch took 55 s and the loss was 2.5907
model.fit(pad_x_train, y_train)
With one more LSTM layer in the hidden stack: each of the five epochs took 26 s and the loss was 2.4120 (at least 30 epochs should really be run)
model.fit(pad_x_train, y_train, epochs=5, validation_split=0.3, batch_size=128)
###Code
hist = model.fit(pad_x_train, y_train, epochs=5, validation_split=0.3, batch_size=128)
###Output
Epoch 1/5
50/50 [==============================] - 26s 524ms/step - loss: 2.4129 - acc: 0.3510 - val_loss: 2.3819 - val_acc: 0.3532
Epoch 2/5
50/50 [==============================] - 26s 522ms/step - loss: 2.4125 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532
Epoch 3/5
50/50 [==============================] - 26s 524ms/step - loss: 2.4125 - acc: 0.3510 - val_loss: 2.3818 - val_acc: 0.3532
Epoch 4/5
50/50 [==============================] - 26s 521ms/step - loss: 2.4125 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532
Epoch 5/5
50/50 [==============================] - 26s 522ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532
###Markdown
Evaluation
* Build keywords from x and pair them with the classes in y.
1. Data the model was trained on
Before: loss: 2.4045
With one more LSTM, 5 epochs: loss: 2.4023
###Code
model.evaluate(pad_x_train, y_train)
###Output
281/281 [==============================] - 17s 62ms/step - loss: 2.4023 - acc: 0.3517
###Markdown
2. Preprocessing the data the model has not been trained on
###Code
pad_x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=500)
model.evaluate(pad_x_test, y_test)
###Output
71/71 [==============================] - 4s 63ms/step - loss: 2.4147 - acc: 0.3620
###Markdown
* Comparing 1. and 2.
1. 17s 61ms/step - loss: 2.4023
2. 4s 61ms/step - loss: 2.4147
Since the two losses are similar, we can see that training went well.
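As a quick way to reproduce that train-versus-test check in one place, a small helper along these lines could be used (a sketch, not part of the original notebook; the function name report_gap and the verbose=0 flag are my own):
```
# Sketch: evaluate on both splits and print the gap that the note above reads off by hand.
# Assumes the model was compiled with metrics=['acc'], so evaluate() returns (loss, acc).
def report_gap(model, pad_x_train, y_train, pad_x_test, y_test):
    train_loss, train_acc = model.evaluate(pad_x_train, y_train, verbose=0)
    test_loss, test_acc = model.evaluate(pad_x_test, y_test, verbose=0)
    print(f"train: loss {train_loss:.4f}  acc {train_acc:.4f}")
    print(f"test : loss {test_loss:.4f}  acc {test_acc:.4f}")
    print(f"loss gap: {abs(test_loss - train_loss):.4f}")

# usage, with the variables already defined in this notebook:
# report_gap(model, pad_x_train, y_train, pad_x_test, y_test)
```
A small gap between the two losses is exactly what the comparison above is checking; a much larger test loss would point to overfitting.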
Plot ###Code import matplotlib.pyplot as plt plt.plot(hist.history['loss']) plt.plot(hist.history['val_loss']) plt.show() plt.plot(hist.history['acc']) plt.plot(hist.history['val_acc']) plt.show() ###Output _____no_output_____ ###Markdown ###Code import tensorflow as tf (x_train, y_train),(x_test, y_test) = tf.keras.datasets.reuters.load_data(num_words=10000) x_train.shape, y_train.shape, x_test.shape, y_test.shape print(y_train[50], x_train[50]) len(x_train[50]),len(x_train[400]),len(x_train[200]) pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500) len(pad_x_train[50]) import numpy as np np.unique(y_train).shape ###Output _____no_output_____ ###Markdown make model ###Code model = tf.keras.models.Sequential() model.add(tf.keras.layers.Embedding(input_dim=10000, output_dim=24, input_length=500)) # input layer model.add(tf.keras.layers.LSTM(24, return_sequences=True, activation='tanh')) model.add(tf.keras.layers.LSTM(12, activation='tanh')) #model.add(tf.keras.layers.Flatten()) # hidden layer model.add(tf.keras.layers.Dense(46, activation='softmax')) # output layer model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc']) # gadget #hist = model.fit(pad_x_train, y_train, epochs=5, validation_split=0.3, batch_size=128) hist = model.fit(pad_x_train, y_train, epochs=100, validation_split=0.3, batch_size=256) ###Output Epoch 1/100 25/25 [==============================] - 6s 242ms/step - loss: 2.4123 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 2/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 3/100 25/25 [==============================] - 6s 229ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 4/100 25/25 [==============================] - 6s 241ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 5/100 25/25 [==============================] - 6s 228ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 6/100 25/25 [==============================] - 6s 229ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 7/100 25/25 [==============================] - 6s 226ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 8/100 25/25 [==============================] - 6s 227ms/step - loss: 2.4114 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 9/100 25/25 [==============================] - 6s 226ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 10/100 25/25 [==============================] - 6s 227ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 11/100 25/25 [==============================] - 6s 227ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 12/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4114 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 13/100 25/25 [==============================] - 6s 227ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 14/100 25/25 [==============================] - 6s 229ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 15/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 16/100 25/25 [==============================] - 6s 228ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 17/100 25/25 
[==============================] - 6s 229ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 18/100 25/25 [==============================] - 6s 226ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 19/100 25/25 [==============================] - 6s 226ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 20/100 25/25 [==============================] - 6s 228ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 21/100 25/25 [==============================] - 6s 227ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 22/100 25/25 [==============================] - 6s 226ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 23/100 25/25 [==============================] - 6s 225ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 24/100 25/25 [==============================] - 6s 226ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 25/100 25/25 [==============================] - 6s 229ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 26/100 25/25 [==============================] - 6s 228ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 27/100 25/25 [==============================] - 6s 226ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 28/100 25/25 [==============================] - 6s 227ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 29/100 25/25 [==============================] - 6s 227ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 30/100 25/25 [==============================] - 6s 227ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 31/100 25/25 [==============================] - 6s 227ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 32/100 25/25 [==============================] - 6s 226ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 33/100 25/25 [==============================] - 6s 227ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 34/100 25/25 [==============================] - 6s 226ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 35/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3828 - val_acc: 0.3532 Epoch 36/100 25/25 [==============================] - 6s 236ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 37/100 25/25 [==============================] - 6s 228ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 38/100 25/25 [==============================] - 6s 229ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 39/100 25/25 [==============================] - 6s 228ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 40/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 41/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3828 - val_acc: 0.3532 Epoch 42/100 25/25 [==============================] - 6s 231ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 43/100 25/25 [==============================] - 6s 237ms/step - loss: 2.4117 - acc: 
0.3510 - val_loss: 2.3827 - val_acc: 0.3532 Epoch 44/100 25/25 [==============================] - 6s 232ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 45/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 46/100 25/25 [==============================] - 6s 249ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 47/100 25/25 [==============================] - 6s 237ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 48/100 25/25 [==============================] - 6s 236ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 49/100 25/25 [==============================] - 6s 239ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3828 - val_acc: 0.3532 Epoch 50/100 25/25 [==============================] - 6s 232ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 51/100 25/25 [==============================] - 6s 229ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 52/100 25/25 [==============================] - 6s 232ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3827 - val_acc: 0.3532 Epoch 53/100 25/25 [==============================] - 6s 233ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 54/100 25/25 [==============================] - 6s 236ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 55/100 25/25 [==============================] - 6s 233ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3828 - val_acc: 0.3532 Epoch 56/100 25/25 [==============================] - 6s 240ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 57/100 25/25 [==============================] - 6s 237ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 58/100 25/25 [==============================] - 6s 232ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 59/100 25/25 [==============================] - 6s 236ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 60/100 25/25 [==============================] - 6s 234ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 61/100 25/25 [==============================] - 6s 232ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3827 - val_acc: 0.3532 Epoch 62/100 25/25 [==============================] - 6s 229ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 63/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 64/100 25/25 [==============================] - 6s 237ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 65/100 25/25 [==============================] - 6s 234ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3827 - val_acc: 0.3532 Epoch 66/100 25/25 [==============================] - 6s 229ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 67/100 25/25 [==============================] - 6s 235ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 68/100 25/25 [==============================] - 6s 229ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 69/100 25/25 [==============================] - 6s 231ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 70/100 25/25 
[==============================] - 6s 237ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 71/100 25/25 [==============================] - 6s 232ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 72/100 25/25 [==============================] - 6s 229ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 73/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 74/100 25/25 [==============================] - 6s 234ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 75/100 25/25 [==============================] - 6s 231ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 76/100 25/25 [==============================] - 6s 229ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3828 - val_acc: 0.3532 Epoch 77/100 25/25 [==============================] - 6s 244ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 78/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 79/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 80/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 81/100 25/25 [==============================] - 6s 232ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 82/100 25/25 [==============================] - 6s 229ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 83/100 25/25 [==============================] - 6s 233ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 84/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 85/100 25/25 [==============================] - 6s 232ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 86/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 87/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 88/100 25/25 [==============================] - 6s 236ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 89/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 90/100 25/25 [==============================] - 6s 231ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 91/100 25/25 [==============================] - 6s 231ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 92/100 25/25 [==============================] - 6s 229ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 93/100 25/25 [==============================] - 6s 228ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 94/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 95/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 96/100 25/25 [==============================] - 6s 234ms/step - loss: 2.4119 - acc: 
0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 97/100 25/25 [==============================] - 6s 234ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 98/100 25/25 [==============================] - 6s 235ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 99/100 25/25 [==============================] - 6s 233ms/step - loss: 2.4123 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 100/100 25/25 [==============================] - 6s 230ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 ###Markdown Evaluation ###Code # 학습 시켰던 데이터 model.evaluate(pad_x_train, y_train) ###Output 281/281 [==============================] - 13s 46ms/step - loss: 2.4024 - acc: 0.3517 ###Markdown TEST 전처리 및 function만들기 ###Code pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500) pad_x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=500) def pad_make(x_data): pad_x = tf.keras.preprocessing.sequence.pad_sequences(x_data, maxlen=500) return pad_x pad_make_x = pad_make(x_test) model.evaluate(pad_make_x, y_test) model.evaluate(pad_x_test, y_test) import matplotlib.pyplot as plt plt.plot(hist.history['loss']) plt.plot(hist.history['val_loss']) plt.show() plt.plot(hist.history['acc']) plt.plot(hist.history['val_acc'],'r-') plt.show() from sklearn.metrics import classification_report y_train_pred = model.predict(pad_x_train) y_train_pred[0] import numpy as np y_pred= np.argmax(y_train_pred, axis=1) y_pred.shape len(y_train) print(classification_report(y_train, y_pred)) y_test_pred = model.predict(pad_x_test) y_pred = np.argmax(y_test_pred, axis=1) print(classification_report(y_test, y_pred)) ###Output _____no_output_____ ###Markdown Make Model ###Code model = tf.keras.models.Sequential() model.add(tf.keras.layers.Embedding(input_length=500, input_dim=10000, output_dim=24)) model.add(tf.keras.layers.LSTM(24, return_sequences=True, activation='tanh')) model.add(tf.keras.layers.LSTM(12, activation='tanh')) # model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(46, activation='softmax')) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc']) hist = model.fit(pad_x_train, y_train, epochs=100, validation_split=0.3, batch_size=256) # hist = model.fit(pad_x_train, y_train, epochs=5, validation_split=0.3, batch_size=128) ###Output Epoch 1/100 25/25 [==============================] - 22s 752ms/step - loss: 3.7479 - acc: 0.3057 - val_loss: 3.5032 - val_acc: 0.3532 Epoch 2/100 25/25 [==============================] - 18s 710ms/step - loss: 3.2212 - acc: 0.3510 - val_loss: 2.9667 - val_acc: 0.3532 Epoch 3/100 25/25 [==============================] - 18s 714ms/step - loss: 2.8237 - acc: 0.3510 - val_loss: 2.6428 - val_acc: 0.3532 Epoch 4/100 25/25 [==============================] - 18s 714ms/step - loss: 2.5842 - acc: 0.3510 - val_loss: 2.4855 - val_acc: 0.3532 Epoch 5/100 25/25 [==============================] - 18s 713ms/step - loss: 2.4826 - acc: 0.3510 - val_loss: 2.4252 - val_acc: 0.3532 Epoch 6/100 25/25 [==============================] - 18s 715ms/step - loss: 2.4445 - acc: 0.3510 - val_loss: 2.4033 - val_acc: 0.3532 Epoch 7/100 25/25 [==============================] - 18s 718ms/step - loss: 2.4293 - acc: 0.3510 - val_loss: 2.3929 - val_acc: 0.3532 Epoch 8/100 25/25 [==============================] - 18s 715ms/step - loss: 2.4225 - acc: 0.3510 - val_loss: 2.3881 - val_acc: 0.3532 Epoch 9/100 25/25 [==============================] - 18s 
714ms/step - loss: 2.4187 - acc: 0.3510 - val_loss: 2.3861 - val_acc: 0.3532 Epoch 10/100 25/25 [==============================] - 18s 714ms/step - loss: 2.4164 - acc: 0.3510 - val_loss: 2.3846 - val_acc: 0.3532 Epoch 11/100 25/25 [==============================] - 18s 713ms/step - loss: 2.4150 - acc: 0.3510 - val_loss: 2.3833 - val_acc: 0.3532 Epoch 12/100 25/25 [==============================] - 18s 715ms/step - loss: 2.4140 - acc: 0.3510 - val_loss: 2.3832 - val_acc: 0.3532 Epoch 13/100 25/25 [==============================] - 18s 712ms/step - loss: 2.4135 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 14/100 25/25 [==============================] - 18s 717ms/step - loss: 2.4130 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 15/100 25/25 [==============================] - 18s 715ms/step - loss: 2.4127 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 16/100 25/25 [==============================] - 18s 712ms/step - loss: 2.4124 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 17/100 25/25 [==============================] - 18s 711ms/step - loss: 2.4122 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 18/100 25/25 [==============================] - 18s 716ms/step - loss: 2.4121 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 19/100 25/25 [==============================] - 18s 718ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 20/100 25/25 [==============================] - 18s 734ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 21/100 25/25 [==============================] - 18s 730ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 22/100 25/25 [==============================] - 18s 721ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 23/100 25/25 [==============================] - 18s 726ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 24/100 25/25 [==============================] - 18s 729ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 25/100 25/25 [==============================] - 18s 722ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 26/100 25/25 [==============================] - 18s 720ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 27/100 25/25 [==============================] - 18s 722ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 28/100 25/25 [==============================] - 18s 722ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3817 - val_acc: 0.3532 Epoch 29/100 25/25 [==============================] - 18s 736ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 30/100 25/25 [==============================] - 18s 722ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 31/100 25/25 [==============================] - 18s 719ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 32/100 25/25 [==============================] - 18s 721ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 33/100 25/25 [==============================] - 18s 728ms/step - loss: 2.4114 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 34/100 25/25 [==============================] - 18s 730ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 35/100 25/25 [==============================] - 18s 732ms/step - loss: 2.4115 - acc: 0.3510 - 
val_loss: 2.3823 - val_acc: 0.3532 Epoch 36/100 25/25 [==============================] - 18s 738ms/step - loss: 2.4113 - acc: 0.3510 - val_loss: 2.3817 - val_acc: 0.3532 Epoch 37/100 25/25 [==============================] - 18s 730ms/step - loss: 2.4091 - acc: 0.3510 - val_loss: 2.3732 - val_acc: 0.3532 Epoch 38/100 25/25 [==============================] - 18s 722ms/step - loss: 2.3612 - acc: 0.3510 - val_loss: 2.3282 - val_acc: 0.3532 Epoch 39/100 25/25 [==============================] - 18s 722ms/step - loss: 2.2827 - acc: 0.3510 - val_loss: 2.1956 - val_acc: 0.3532 Epoch 40/100 25/25 [==============================] - 18s 720ms/step - loss: 2.2130 - acc: 0.3510 - val_loss: 2.2277 - val_acc: 0.3532 Epoch 41/100 25/25 [==============================] - 18s 718ms/step - loss: 2.2020 - acc: 0.3510 - val_loss: 2.1446 - val_acc: 0.3532 Epoch 42/100 25/25 [==============================] - 18s 724ms/step - loss: 2.1268 - acc: 0.3510 - val_loss: 2.0898 - val_acc: 0.3532 Epoch 43/100 25/25 [==============================] - 18s 719ms/step - loss: 2.0622 - acc: 0.3510 - val_loss: 2.0505 - val_acc: 0.3532 Epoch 44/100 25/25 [==============================] - 18s 716ms/step - loss: 2.0155 - acc: 0.3803 - val_loss: 2.0230 - val_acc: 0.3788 Epoch 45/100 25/25 [==============================] - 18s 720ms/step - loss: 1.9963 - acc: 0.3940 - val_loss: 2.0211 - val_acc: 0.3781 Epoch 46/100 25/25 [==============================] - 18s 721ms/step - loss: 1.9671 - acc: 0.3926 - val_loss: 1.9971 - val_acc: 0.3918 Epoch 47/100 25/25 [==============================] - 18s 716ms/step - loss: 1.9323 - acc: 0.4031 - val_loss: 1.9813 - val_acc: 0.3915 Epoch 48/100 25/25 [==============================] - 18s 715ms/step - loss: 1.9144 - acc: 0.4040 - val_loss: 1.9799 - val_acc: 0.3915 Epoch 49/100 25/25 [==============================] - 18s 720ms/step - loss: 1.8999 - acc: 0.4035 - val_loss: 1.9695 - val_acc: 0.3929 Epoch 50/100 25/25 [==============================] - 18s 723ms/step - loss: 1.8832 - acc: 0.4050 - val_loss: 1.9677 - val_acc: 0.3933 Epoch 51/100 25/25 [==============================] - 18s 718ms/step - loss: 1.8632 - acc: 0.4048 - val_loss: 1.9364 - val_acc: 0.3941 Epoch 52/100 25/25 [==============================] - 18s 724ms/step - loss: 1.8258 - acc: 0.4046 - val_loss: 1.9351 - val_acc: 0.3959 Epoch 53/100 25/25 [==============================] - 18s 731ms/step - loss: 1.7837 - acc: 0.4107 - val_loss: 1.9214 - val_acc: 0.4030 Epoch 54/100 25/25 [==============================] - 18s 734ms/step - loss: 1.7548 - acc: 0.4158 - val_loss: 1.9618 - val_acc: 0.3955 Epoch 55/100 25/25 [==============================] - 18s 721ms/step - loss: 1.7298 - acc: 0.4191 - val_loss: 1.9135 - val_acc: 0.4037 Epoch 56/100 25/25 [==============================] - 18s 719ms/step - loss: 1.6924 - acc: 0.4285 - val_loss: 1.9043 - val_acc: 0.4078 Epoch 57/100 25/25 [==============================] - 18s 721ms/step - loss: 1.6778 - acc: 0.4291 - val_loss: 1.9210 - val_acc: 0.3993 Epoch 58/100 25/25 [==============================] - 18s 713ms/step - loss: 1.6568 - acc: 0.4357 - val_loss: 1.9031 - val_acc: 0.4048 Epoch 59/100 25/25 [==============================] - 18s 717ms/step - loss: 1.6176 - acc: 0.4458 - val_loss: 1.8997 - val_acc: 0.4085 Epoch 60/100 25/25 [==============================] - 18s 719ms/step - loss: 1.6040 - acc: 0.4541 - val_loss: 1.9091 - val_acc: 0.4104 Epoch 61/100 25/25 [==============================] - 18s 720ms/step - loss: 1.5799 - acc: 0.4586 - val_loss: 1.9074 - val_acc: 0.4111 Epoch 
62/100 25/25 [==============================] - 18s 719ms/step - loss: 1.5517 - acc: 0.4695 - val_loss: 1.9046 - val_acc: 0.4197 Epoch 63/100 25/25 [==============================] - 18s 718ms/step - loss: 1.5253 - acc: 0.4792 - val_loss: 1.9144 - val_acc: 0.4130 Epoch 64/100 25/25 [==============================] - 18s 714ms/step - loss: 1.5125 - acc: 0.4796 - val_loss: 1.9055 - val_acc: 0.4204 Epoch 65/100 25/25 [==============================] - 18s 714ms/step - loss: 1.4956 - acc: 0.4808 - val_loss: 1.9364 - val_acc: 0.4104 Epoch 66/100 25/25 [==============================] - 18s 718ms/step - loss: 1.5004 - acc: 0.4762 - val_loss: 1.9205 - val_acc: 0.4193 Epoch 67/100 25/25 [==============================] - 18s 720ms/step - loss: 1.4674 - acc: 0.4869 - val_loss: 1.9152 - val_acc: 0.4156 Epoch 68/100 25/25 [==============================] - 18s 713ms/step - loss: 1.4696 - acc: 0.4845 - val_loss: 1.9438 - val_acc: 0.4089 Epoch 69/100 25/25 [==============================] - 18s 716ms/step - loss: 1.4466 - acc: 0.4885 - val_loss: 1.9301 - val_acc: 0.4167 Epoch 70/100 25/25 [==============================] - 18s 720ms/step - loss: 1.4222 - acc: 0.4953 - val_loss: 1.9413 - val_acc: 0.4171 Epoch 71/100 25/25 [==============================] - 18s 730ms/step - loss: 1.4031 - acc: 0.5015 - val_loss: 1.9453 - val_acc: 0.4160 Epoch 72/100 25/25 [==============================] - 18s 732ms/step - loss: 1.3913 - acc: 0.5025 - val_loss: 1.9652 - val_acc: 0.4126 Epoch 73/100 25/25 [==============================] - 18s 733ms/step - loss: 1.3743 - acc: 0.5076 - val_loss: 1.9656 - val_acc: 0.4167 Epoch 74/100 25/25 [==============================] - 18s 731ms/step - loss: 1.3698 - acc: 0.5060 - val_loss: 1.9539 - val_acc: 0.4163 Epoch 75/100 25/25 [==============================] - 18s 721ms/step - loss: 1.3641 - acc: 0.5099 - val_loss: 1.9622 - val_acc: 0.4171 Epoch 76/100 25/25 [==============================] - 18s 717ms/step - loss: 1.3867 - acc: 0.4991 - val_loss: 1.9543 - val_acc: 0.4193 Epoch 77/100 25/25 [==============================] - 18s 721ms/step - loss: 1.3579 - acc: 0.5099 - val_loss: 1.9713 - val_acc: 0.4156 Epoch 78/100 25/25 [==============================] - 18s 728ms/step - loss: 1.3547 - acc: 0.5101 - val_loss: 1.9766 - val_acc: 0.4193 Epoch 79/100 25/25 [==============================] - 18s 727ms/step - loss: 1.3270 - acc: 0.5204 - val_loss: 1.9746 - val_acc: 0.4193 Epoch 80/100 25/25 [==============================] - 18s 732ms/step - loss: 1.3145 - acc: 0.5257 - val_loss: 1.9682 - val_acc: 0.4263 Epoch 81/100 25/25 [==============================] - 18s 723ms/step - loss: 1.3036 - acc: 0.5336 - val_loss: 1.9810 - val_acc: 0.4197 Epoch 82/100 25/25 [==============================] - 18s 723ms/step - loss: 1.2914 - acc: 0.5403 - val_loss: 1.9810 - val_acc: 0.4241 Epoch 83/100 25/25 [==============================] - 18s 728ms/step - loss: 1.3314 - acc: 0.5165 - val_loss: 1.9978 - val_acc: 0.4171 Epoch 84/100 25/25 [==============================] - 18s 727ms/step - loss: 1.2899 - acc: 0.5418 - val_loss: 1.9834 - val_acc: 0.4278 Epoch 85/100 25/25 [==============================] - 18s 726ms/step - loss: 1.2672 - acc: 0.5542 - val_loss: 1.9789 - val_acc: 0.4282 Epoch 86/100 25/25 [==============================] - 18s 732ms/step - loss: 1.2491 - acc: 0.5650 - val_loss: 2.0017 - val_acc: 0.4278 Epoch 87/100 25/25 [==============================] - 18s 730ms/step - loss: 1.2462 - acc: 0.5664 - val_loss: 1.9928 - val_acc: 0.4293 Epoch 88/100 25/25 
[==============================] - 18s 726ms/step - loss: 1.2272 - acc: 0.5737 - val_loss: 1.9974 - val_acc: 0.4278 Epoch 89/100 25/25 [==============================] - 18s 731ms/step - loss: 1.2364 - acc: 0.5639 - val_loss: 2.0167 - val_acc: 0.4204 Epoch 90/100 25/25 [==============================] - 18s 728ms/step - loss: 1.2149 - acc: 0.5755 - val_loss: 2.0101 - val_acc: 0.4215 Epoch 91/100 25/25 [==============================] - 18s 727ms/step - loss: 1.1995 - acc: 0.5818 - val_loss: 2.0138 - val_acc: 0.4260 Epoch 92/100 25/25 [==============================] - 18s 729ms/step - loss: 1.1940 - acc: 0.5812 - val_loss: 2.0216 - val_acc: 0.4241 Epoch 93/100 25/25 [==============================] - 18s 727ms/step - loss: 1.1946 - acc: 0.5771 - val_loss: 2.0263 - val_acc: 0.4263 Epoch 94/100 25/25 [==============================] - 18s 724ms/step - loss: 1.1794 - acc: 0.5822 - val_loss: 2.0400 - val_acc: 0.4245 Epoch 95/100 25/25 [==============================] - 18s 719ms/step - loss: 1.1677 - acc: 0.5847 - val_loss: 2.0384 - val_acc: 0.4256 Epoch 96/100 25/25 [==============================] - 18s 724ms/step - loss: 1.1630 - acc: 0.5868 - val_loss: 2.0458 - val_acc: 0.4263 Epoch 97/100 25/25 [==============================] - 18s 723ms/step - loss: 1.1504 - acc: 0.5880 - val_loss: 2.0503 - val_acc: 0.4230 Epoch 98/100 25/25 [==============================] - 18s 721ms/step - loss: 1.1404 - acc: 0.5933 - val_loss: 2.0504 - val_acc: 0.4226 Epoch 99/100 25/25 [==============================] - 18s 721ms/step - loss: 1.1285 - acc: 0.5954 - val_loss: 2.0449 - val_acc: 0.4275 Epoch 100/100 25/25 [==============================] - 18s 720ms/step - loss: 1.1357 - acc: 0.5906 - val_loss: 2.0691 - val_acc: 0.4263 ###Markdown Evaluation ###Code model.evaluate(pad_x_train, y_train) pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500) pad_x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=500) def pad_make(x_data): pad_x = tf.keras.preprocessing.sequence.pad_sequences(x_data, maxlen=500) return pad_x pad_make_x = pad_make(x_test) model.evaluate(pad_make_x, y_test) model.evaluate(pad_x_test, y_test) import matplotlib.pyplot as plt plt.plot(hist.history['loss']) plt.plot(hist.history['val_loss']) plt.show() plt.plot(hist.history['acc']) plt.plot(hist.history['val_acc']) plt.show() from sklearn.metrics import classification_report y_train_pred = model.predict(pad_x_train) y_pred = np.argmax(y_train_pred, axis=1) y_pred.shape print(classification_report(y_train, y_pred)) ###Output _____no_output_____ ###Markdown test ###Code y_test_pred = model.predict(pad_x_test) y_pred2 = np.argmax(y_test_pred, axis=1) print(classification_report(y_test, y_pred2)) ###Output _____no_output_____ ###Markdown Make model ###Code model = tf.keras.models.Sequential() model.add(tf.keras.layers.Embedding(input_length=500, input_dim=10000, output_dim=24)) # input layer model.add(tf.keras.layers.LSTM(24, return_sequences=True, activation='tanh')) # LSTM을 하나 더 쓸 때 model.add(tf.keras.layers.LSTM(12, activation='tanh')) # model.add(tf.keras.layers.Flatten()) # hidden layer model.add(tf.keras.layers.Dense(46, activation='softmax')) # output layer model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc']) # gadget # hist = model.fit(pad_x_train, y_train, epochs=5, validation_split=0.3, batch_size=128) hist = model.fit(pad_x_train, y_train, epochs=100, validation_split=0.3, batch_size=256) ###Output Epoch 1/100 25/25 [==============================] - 20s 
658ms/step - loss: 3.7195 - acc: 0.2704 - val_loss: 3.4674 - val_acc: 0.0479 Epoch 2/100 25/25 [==============================] - 16s 627ms/step - loss: 3.2134 - acc: 0.2591 - val_loss: 2.9466 - val_acc: 0.3532 Epoch 3/100 25/25 [==============================] - 16s 638ms/step - loss: 2.7723 - acc: 0.3510 - val_loss: 2.5808 - val_acc: 0.3532 Epoch 4/100 25/25 [==============================] - 16s 629ms/step - loss: 2.5263 - acc: 0.3510 - val_loss: 2.4439 - val_acc: 0.3532 Epoch 5/100 25/25 [==============================] - 16s 624ms/step - loss: 2.4537 - acc: 0.3510 - val_loss: 2.4077 - val_acc: 0.3532 Epoch 6/100 25/25 [==============================] - 16s 635ms/step - loss: 2.4323 - acc: 0.3510 - val_loss: 2.3950 - val_acc: 0.3532 Epoch 7/100 25/25 [==============================] - 16s 638ms/step - loss: 2.4233 - acc: 0.3510 - val_loss: 2.3891 - val_acc: 0.3532 Epoch 8/100 25/25 [==============================] - 16s 642ms/step - loss: 2.4190 - acc: 0.3510 - val_loss: 2.3863 - val_acc: 0.3532 Epoch 9/100 25/25 [==============================] - 16s 642ms/step - loss: 2.4164 - acc: 0.3510 - val_loss: 2.3847 - val_acc: 0.3532 Epoch 10/100 25/25 [==============================] - 16s 639ms/step - loss: 2.4149 - acc: 0.3510 - val_loss: 2.3839 - val_acc: 0.3532 Epoch 11/100 25/25 [==============================] - 16s 636ms/step - loss: 2.4139 - acc: 0.3510 - val_loss: 2.3833 - val_acc: 0.3532 Epoch 12/100 25/25 [==============================] - 16s 636ms/step - loss: 2.4133 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 13/100 25/25 [==============================] - 16s 649ms/step - loss: 2.4129 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 14/100 25/25 [==============================] - 16s 635ms/step - loss: 2.4126 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 15/100 25/25 [==============================] - 16s 641ms/step - loss: 2.4122 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 16/100 25/25 [==============================] - 16s 644ms/step - loss: 2.4123 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 17/100 25/25 [==============================] - 16s 641ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 18/100 25/25 [==============================] - 16s 636ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 19/100 25/25 [==============================] - 16s 633ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 20/100 25/25 [==============================] - 16s 644ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 21/100 25/25 [==============================] - 16s 634ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3819 - val_acc: 0.3532 Epoch 22/100 25/25 [==============================] - 16s 632ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 23/100 25/25 [==============================] - 16s 635ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 24/100 25/25 [==============================] - 16s 641ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3819 - val_acc: 0.3532 Epoch 25/100 25/25 [==============================] - 16s 642ms/step - loss: 2.4029 - acc: 0.3510 - val_loss: 2.4550 - val_acc: 0.3532 Epoch 26/100 25/25 [==============================] - 16s 651ms/step - loss: 2.4105 - acc: 0.3510 - val_loss: 2.3819 - val_acc: 0.3532 Epoch 27/100 25/25 [==============================] - 16s 641ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 
2.3822 - val_acc: 0.3532 Epoch 28/100 25/25 [==============================] - 16s 635ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 29/100 25/25 [==============================] - 16s 639ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 30/100 25/25 [==============================] - 16s 642ms/step - loss: 2.4114 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 31/100 25/25 [==============================] - 16s 644ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 32/100 25/25 [==============================] - 16s 641ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 33/100 25/25 [==============================] - 16s 648ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 34/100 25/25 [==============================] - 16s 643ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 35/100 25/25 [==============================] - 16s 641ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 36/100 25/25 [==============================] - 16s 640ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 37/100 25/25 [==============================] - 16s 644ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 38/100 25/25 [==============================] - 16s 647ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 39/100 25/25 [==============================] - 16s 633ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 40/100 25/25 [==============================] - 16s 636ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 41/100 25/25 [==============================] - 16s 639ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 42/100 25/25 [==============================] - 16s 640ms/step - loss: 2.4122 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 43/100 25/25 [==============================] - 16s 643ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 44/100 25/25 [==============================] - 16s 638ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 45/100 25/25 [==============================] - 16s 638ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 46/100 25/25 [==============================] - 16s 636ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 47/100 25/25 [==============================] - 16s 643ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 48/100 25/25 [==============================] - 16s 638ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 49/100 25/25 [==============================] - 16s 638ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 50/100 25/25 [==============================] - 16s 641ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 51/100 25/25 [==============================] - 16s 637ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 52/100 25/25 [==============================] - 16s 634ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 53/100 25/25 [==============================] - 16s 642ms/step - loss: 2.4104 - acc: 0.3510 - val_loss: 2.3803 - val_acc: 0.3532 Epoch 54/100 25/25 
[==============================] - 16s 645ms/step - loss: 2.4060 - acc: 0.3510 - val_loss: 2.3665 - val_acc: 0.3532 Epoch 55/100 25/25 [==============================] - 16s 639ms/step - loss: 2.3382 - acc: 0.3755 - val_loss: 2.2555 - val_acc: 0.4764 Epoch 56/100 25/25 [==============================] - 16s 648ms/step - loss: 2.2553 - acc: 0.3689 - val_loss: 2.1948 - val_acc: 0.4553 Epoch 57/100 25/25 [==============================] - 16s 644ms/step - loss: 2.1991 - acc: 0.4641 - val_loss: 2.1491 - val_acc: 0.4842 Epoch 58/100 25/25 [==============================] - 16s 644ms/step - loss: 2.1441 - acc: 0.5096 - val_loss: 2.0970 - val_acc: 0.5239 Epoch 59/100 25/25 [==============================] - 16s 644ms/step - loss: 2.0953 - acc: 0.5254 - val_loss: 2.0584 - val_acc: 0.5288 Epoch 60/100 25/25 [==============================] - 16s 639ms/step - loss: 2.0709 - acc: 0.5160 - val_loss: 2.0558 - val_acc: 0.5006 Epoch 61/100 25/25 [==============================] - 16s 642ms/step - loss: 2.0219 - acc: 0.5340 - val_loss: 2.0104 - val_acc: 0.5276 Epoch 62/100 25/25 [==============================] - 16s 641ms/step - loss: 1.9885 - acc: 0.5398 - val_loss: 1.9924 - val_acc: 0.5187 Epoch 63/100 25/25 [==============================] - 16s 632ms/step - loss: 1.9596 - acc: 0.5397 - val_loss: 1.9764 - val_acc: 0.5165 Epoch 64/100 25/25 [==============================] - 16s 631ms/step - loss: 1.9213 - acc: 0.5418 - val_loss: 1.9706 - val_acc: 0.5132 Epoch 65/100 25/25 [==============================] - 16s 629ms/step - loss: 1.8728 - acc: 0.5379 - val_loss: 1.9073 - val_acc: 0.4994 Epoch 66/100 25/25 [==============================] - 16s 643ms/step - loss: 1.7927 - acc: 0.5683 - val_loss: 1.8431 - val_acc: 0.5299 Epoch 67/100 25/25 [==============================] - 16s 640ms/step - loss: 1.7446 - acc: 0.5780 - val_loss: 1.8360 - val_acc: 0.5273 Epoch 68/100 25/25 [==============================] - 16s 642ms/step - loss: 1.6942 - acc: 0.5825 - val_loss: 1.8078 - val_acc: 0.5351 Epoch 69/100 25/25 [==============================] - 16s 636ms/step - loss: 1.6616 - acc: 0.5858 - val_loss: 1.8114 - val_acc: 0.5310 Epoch 70/100 25/25 [==============================] - 16s 647ms/step - loss: 1.6356 - acc: 0.5925 - val_loss: 1.7646 - val_acc: 0.5432 Epoch 71/100 25/25 [==============================] - 16s 652ms/step - loss: 1.5982 - acc: 0.6039 - val_loss: 1.7583 - val_acc: 0.5518 Epoch 72/100 25/25 [==============================] - 16s 648ms/step - loss: 1.5679 - acc: 0.6122 - val_loss: 1.7503 - val_acc: 0.5570 Epoch 73/100 25/25 [==============================] - 16s 653ms/step - loss: 1.5409 - acc: 0.6176 - val_loss: 1.7591 - val_acc: 0.5518 Epoch 74/100 25/25 [==============================] - 16s 652ms/step - loss: 1.5137 - acc: 0.6218 - val_loss: 1.7479 - val_acc: 0.5555 Epoch 75/100 25/25 [==============================] - 16s 646ms/step - loss: 1.4868 - acc: 0.6268 - val_loss: 1.7797 - val_acc: 0.5466 Epoch 76/100 25/25 [==============================] - 16s 643ms/step - loss: 1.4703 - acc: 0.6267 - val_loss: 1.7516 - val_acc: 0.5525 Epoch 77/100 25/25 [==============================] - 16s 649ms/step - loss: 1.4460 - acc: 0.6275 - val_loss: 1.7809 - val_acc: 0.5391 Epoch 78/100 25/25 [==============================] - 16s 648ms/step - loss: 1.4209 - acc: 0.6310 - val_loss: 1.7754 - val_acc: 0.5484 Epoch 79/100 25/25 [==============================] - 16s 652ms/step - loss: 1.4056 - acc: 0.6289 - val_loss: 1.7807 - val_acc: 0.5451 Epoch 80/100 25/25 [==============================] - 16s 
648ms/step - loss: 1.3871 - acc: 0.6350 - val_loss: 1.7988 - val_acc: 0.5406 Epoch 81/100 25/25 [==============================] - 16s 650ms/step - loss: 1.3677 - acc: 0.6361 - val_loss: 1.7908 - val_acc: 0.5488 Epoch 82/100 25/25 [==============================] - 16s 651ms/step - loss: 1.3643 - acc: 0.6350 - val_loss: 1.8382 - val_acc: 0.5317 Epoch 83/100 25/25 [==============================] - 16s 643ms/step - loss: 1.3414 - acc: 0.6434 - val_loss: 1.7929 - val_acc: 0.5492 Epoch 84/100 25/25 [==============================] - 16s 646ms/step - loss: 1.3253 - acc: 0.6472 - val_loss: 1.8259 - val_acc: 0.5440 Epoch 85/100 25/25 [==============================] - 16s 654ms/step - loss: 1.3071 - acc: 0.6485 - val_loss: 1.8041 - val_acc: 0.5462 Epoch 86/100 25/25 [==============================] - 16s 644ms/step - loss: 1.2946 - acc: 0.6626 - val_loss: 1.8500 - val_acc: 0.5310 Epoch 87/100 25/25 [==============================] - 16s 644ms/step - loss: 1.2779 - acc: 0.6706 - val_loss: 1.8331 - val_acc: 0.5347 Epoch 88/100 25/25 [==============================] - 16s 640ms/step - loss: 1.2590 - acc: 0.6841 - val_loss: 1.8651 - val_acc: 0.5291 Epoch 89/100 25/25 [==============================] - 16s 639ms/step - loss: 1.2441 - acc: 0.6909 - val_loss: 1.8525 - val_acc: 0.5358 Epoch 90/100 25/25 [==============================] - 16s 645ms/step - loss: 1.2360 - acc: 0.6811 - val_loss: 1.8554 - val_acc: 0.5380 Epoch 91/100 25/25 [==============================] - 16s 634ms/step - loss: 1.2183 - acc: 0.6847 - val_loss: 1.8912 - val_acc: 0.5295 Epoch 92/100 25/25 [==============================] - 16s 640ms/step - loss: 1.2373 - acc: 0.6703 - val_loss: 1.8739 - val_acc: 0.5336 Epoch 93/100 25/25 [==============================] - 16s 643ms/step - loss: 1.1995 - acc: 0.6854 - val_loss: 1.9008 - val_acc: 0.5302 Epoch 94/100 25/25 [==============================] - 16s 636ms/step - loss: 1.1730 - acc: 0.6878 - val_loss: 1.8738 - val_acc: 0.5362 Epoch 95/100 25/25 [==============================] - 16s 640ms/step - loss: 1.1558 - acc: 0.6911 - val_loss: 1.9222 - val_acc: 0.5147 Epoch 96/100 25/25 [==============================] - 16s 638ms/step - loss: 1.1452 - acc: 0.6917 - val_loss: 1.9129 - val_acc: 0.5280 Epoch 97/100 25/25 [==============================] - 16s 640ms/step - loss: 1.1313 - acc: 0.6954 - val_loss: 1.8829 - val_acc: 0.5384 Epoch 98/100 25/25 [==============================] - 16s 641ms/step - loss: 1.1093 - acc: 0.6975 - val_loss: 1.9224 - val_acc: 0.5250 Epoch 99/100 25/25 [==============================] - 16s 640ms/step - loss: 1.0904 - acc: 0.7040 - val_loss: 1.9198 - val_acc: 0.5310 Epoch 100/100 25/25 [==============================] - 16s 641ms/step - loss: 1.0828 - acc: 0.7040 - val_loss: 1.9060 - val_acc: 0.5380 ###Markdown Evaluation ###Code # 학습시켰던 데이터 model.evaluate(pad_x_train, y_train) ###Output 281/281 [==============================] - 16s 56ms/step - loss: 1.3192 - acc: 0.6544 ###Markdown x_test 데이터 전처리 ###Code pad_x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=500) def pad_make(x_data): pad_x = tf.keras.preprocessing.sequence.pad_sequences(x_data, maxlen=500) return pad_x pad_make_x = pad_make(x_test) model.evaluate(pad_make_x, y_test) model.evaluate(pad_x_test, y_test) ###Output 71/71 [==============================] - 4s 57ms/step - loss: 1.9766 - acc: 0.5419 ###Markdown train과 test의 acc 유사하기 때문에 학습이 잘 됨을 볼 수 있음 ###Code import matplotlib.pyplot as plt plt.plot(hist.history['loss']) plt.plot(hist.history['val_loss']) plt.show() 
plt.plot(hist.history['acc']) plt.plot(hist.history['val_acc']) plt.show() from sklearn.metrics import classification_report y_train_pred = model.predict(pad_x_train) y_train_pred[0] y_pred = np.argmax(y_train_pred, axis=1) y_pred.shape len(y_train) print(classification_report(y_train, y_pred)) y_test_pred = model.predict(pad_x_test) y_pred = np.argmax(y_test_pred, axis=1) print(classification_report(y_test, y_pred)) ###Output precision recall f1-score support 0 0.00 0.00 0.00 12 1 0.17 0.53 0.26 105 2 0.00 0.00 0.00 20 3 0.93 0.89 0.91 813 4 0.70 0.72 0.71 474 5 0.00 0.00 0.00 5 6 0.00 0.00 0.00 14 7 0.00 0.00 0.00 3 8 0.00 0.00 0.00 38 9 0.00 0.00 0.00 25 10 0.07 0.37 0.11 30 11 0.00 0.00 0.00 83 12 0.00 0.00 0.00 13 13 0.00 0.00 0.00 37 14 0.00 0.00 0.00 2 15 0.00 0.00 0.00 9 16 0.11 0.22 0.15 99 17 0.00 0.00 0.00 12 18 0.00 0.00 0.00 20 19 0.23 0.47 0.31 133 20 0.33 0.04 0.08 70 21 0.00 0.00 0.00 27 22 0.00 0.00 0.00 7 23 0.00 0.00 0.00 12 24 0.00 0.00 0.00 19 25 0.00 0.00 0.00 31 26 0.00 0.00 0.00 8 27 0.00 0.00 0.00 4 28 0.00 0.00 0.00 10 29 0.00 0.00 0.00 4 30 0.00 0.00 0.00 12 31 0.00 0.00 0.00 13 32 0.00 0.00 0.00 10 33 0.00 0.00 0.00 5 34 0.00 0.00 0.00 7 35 0.00 0.00 0.00 6 36 0.00 0.00 0.00 11 37 0.00 0.00 0.00 2 38 0.00 0.00 0.00 3 39 0.00 0.00 0.00 5 40 0.00 0.00 0.00 10 41 0.00 0.00 0.00 8 42 0.00 0.00 0.00 3 43 0.00 0.00 0.00 6 44 0.00 0.00 0.00 5 45 0.00 0.00 0.00 1 accuracy 0.54 2246 macro avg 0.06 0.07 0.05 2246 weighted avg 0.52 0.54 0.52 2246 ###Markdown 비슷한 부분끼리 0임을 확인 -> words=10000 제한 때문에 0이 발생 ###Code ###Output _____no_output_____ ###Markdown ###Code import tensorflow as tf (x_train, y_train),(x_test,y_test) = tf.keras.datasets.reuters.load_data(num_words=10000) x_train.shape, y_train.shape,x_test.shape,y_test.shape print(y_train[50], x_train[50]) len(x_train[50]), len(x_train[400]), len(x_train[200]) pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500) len(pad_x_train[50]) import numpy as np np.unique(y_train).shape, np.unique(y_train) ###Output _____no_output_____ ###Markdown make model ###Code model = tf.keras.models.Sequential() model.add(tf.keras.layers.Embedding(input_length=500, input_dim=10000, output_dim=24)) # input layer model.add(tf.keras.layers.LSTM(24, return_sequences=True, activation='tanh')) # hidden layer model.add(tf.keras.layers.LSTM(12, activation='tanh')) # hidden layer # model.add(tf.keras.layers.Flatten()) # hidden layer model.add(tf.keras.layers.Dense(46, activation='softmax')) # output layer model. 
compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc']) # gadget # sparse는 y까지 데이터를 카테고리화 #hist = model.fit(pad_x_train, y_train, epochs=5, validation_split=0.3, batch_size=128) hist = model.fit(pad_x_train, y_train, epochs=100, validation_split=0.3, batch_size=256) ###Output Epoch 1/100 25/25 [==============================] - 23s 774ms/step - loss: 3.7366 - acc: 0.3250 - val_loss: 3.4795 - val_acc: 0.3532 Epoch 2/100 25/25 [==============================] - 19s 744ms/step - loss: 3.1803 - acc: 0.3510 - val_loss: 2.8684 - val_acc: 0.3532 Epoch 3/100 25/25 [==============================] - 18s 737ms/step - loss: 2.7208 - acc: 0.3510 - val_loss: 2.5480 - val_acc: 0.3532 Epoch 4/100 25/25 [==============================] - 18s 741ms/step - loss: 2.5203 - acc: 0.3510 - val_loss: 2.4429 - val_acc: 0.3532 Epoch 5/100 25/25 [==============================] - 18s 742ms/step - loss: 2.4571 - acc: 0.3510 - val_loss: 2.4109 - val_acc: 0.3532 Epoch 6/100 25/25 [==============================] - 19s 747ms/step - loss: 2.4352 - acc: 0.3510 - val_loss: 2.3984 - val_acc: 0.3532 Epoch 7/100 25/25 [==============================] - 19s 751ms/step - loss: 2.4257 - acc: 0.3510 - val_loss: 2.3920 - val_acc: 0.3532 Epoch 8/100 25/25 [==============================] - 19s 752ms/step - loss: 2.4209 - acc: 0.3510 - val_loss: 2.3879 - val_acc: 0.3532 Epoch 9/100 25/25 [==============================] - 19s 749ms/step - loss: 2.4177 - acc: 0.3510 - val_loss: 2.3856 - val_acc: 0.3532 Epoch 10/100 25/25 [==============================] - 19s 744ms/step - loss: 2.4142 - acc: 0.3510 - val_loss: 2.3871 - val_acc: 0.3532 Epoch 11/100 25/25 [==============================] - 18s 737ms/step - loss: 2.4131 - acc: 0.3510 - val_loss: 2.3772 - val_acc: 0.3532 Epoch 12/100 25/25 [==============================] - 18s 742ms/step - loss: 2.3308 - acc: 0.3510 - val_loss: 2.2559 - val_acc: 0.3532 Epoch 13/100 25/25 [==============================] - 19s 747ms/step - loss: 2.2084 - acc: 0.3526 - val_loss: 2.1485 - val_acc: 0.3532 Epoch 14/100 25/25 [==============================] - 19s 745ms/step - loss: 2.0994 - acc: 0.3514 - val_loss: 2.0698 - val_acc: 0.3544 Epoch 15/100 25/25 [==============================] - 19s 757ms/step - loss: 2.0212 - acc: 0.3587 - val_loss: 2.0341 - val_acc: 0.3733 Epoch 16/100 25/25 [==============================] - 19s 752ms/step - loss: 1.9574 - acc: 0.3808 - val_loss: 1.9973 - val_acc: 0.3848 Epoch 17/100 25/25 [==============================] - 19s 746ms/step - loss: 1.8989 - acc: 0.4032 - val_loss: 1.9978 - val_acc: 0.3874 Epoch 18/100 25/25 [==============================] - 18s 739ms/step - loss: 1.8481 - acc: 0.4142 - val_loss: 1.9746 - val_acc: 0.3933 Epoch 19/100 25/25 [==============================] - 19s 747ms/step - loss: 1.8027 - acc: 0.4215 - val_loss: 1.9754 - val_acc: 0.3948 Epoch 20/100 25/25 [==============================] - 19s 746ms/step - loss: 1.7762 - acc: 0.4237 - val_loss: 1.9695 - val_acc: 0.3996 Epoch 21/100 25/25 [==============================] - 19s 745ms/step - loss: 1.7432 - acc: 0.4368 - val_loss: 1.9826 - val_acc: 0.3974 Epoch 22/100 25/25 [==============================] - 19s 751ms/step - loss: 1.7034 - acc: 0.4439 - val_loss: 1.9706 - val_acc: 0.3952 Epoch 23/100 25/25 [==============================] - 19s 749ms/step - loss: 1.6565 - acc: 0.4536 - val_loss: 1.9813 - val_acc: 0.3952 Epoch 24/100 25/25 [==============================] - 19s 750ms/step - loss: 1.6221 - acc: 0.4619 - val_loss: 2.0095 - val_acc: 0.3981 Epoch 25/100 25/25 
[==============================] - 18s 731ms/step - loss: 1.5893 - acc: 0.4727 - val_loss: 2.0272 - val_acc: 0.3981 Epoch 26/100 25/25 [==============================] - 18s 741ms/step - loss: 1.5577 - acc: 0.4842 - val_loss: 2.0227 - val_acc: 0.4004 Epoch 27/100 25/25 [==============================] - 19s 754ms/step - loss: 1.5340 - acc: 0.4880 - val_loss: 2.0401 - val_acc: 0.3985 Epoch 28/100 25/25 [==============================] - 19s 753ms/step - loss: 1.5139 - acc: 0.4894 - val_loss: 2.0327 - val_acc: 0.4004 Epoch 29/100 25/25 [==============================] - 18s 738ms/step - loss: 1.4954 - acc: 0.4985 - val_loss: 2.0792 - val_acc: 0.3974 Epoch 30/100 25/25 [==============================] - 18s 740ms/step - loss: 1.4783 - acc: 0.4983 - val_loss: 2.0667 - val_acc: 0.3985 Epoch 31/100 25/25 [==============================] - 18s 741ms/step - loss: 1.4505 - acc: 0.5069 - val_loss: 2.0742 - val_acc: 0.4052 Epoch 32/100 25/25 [==============================] - 18s 741ms/step - loss: 1.4478 - acc: 0.5055 - val_loss: 2.0840 - val_acc: 0.4004 Epoch 33/100 25/25 [==============================] - 18s 741ms/step - loss: 1.4216 - acc: 0.5133 - val_loss: 2.1160 - val_acc: 0.4019 Epoch 34/100 25/25 [==============================] - 18s 742ms/step - loss: 1.4054 - acc: 0.5198 - val_loss: 2.1225 - val_acc: 0.4011 Epoch 35/100 25/25 [==============================] - 19s 753ms/step - loss: 1.3910 - acc: 0.5239 - val_loss: 2.1216 - val_acc: 0.4063 Epoch 36/100 25/25 [==============================] - 19s 745ms/step - loss: 1.3943 - acc: 0.5166 - val_loss: 2.1394 - val_acc: 0.4004 Epoch 37/100 25/25 [==============================] - 18s 738ms/step - loss: 1.3805 - acc: 0.5258 - val_loss: 2.1626 - val_acc: 0.4026 Epoch 38/100 25/25 [==============================] - 18s 740ms/step - loss: 1.3627 - acc: 0.5257 - val_loss: 2.1463 - val_acc: 0.3985 Epoch 39/100 25/25 [==============================] - 18s 737ms/step - loss: 1.3322 - acc: 0.5341 - val_loss: 2.1663 - val_acc: 0.4048 Epoch 40/100 25/25 [==============================] - 18s 741ms/step - loss: 1.3233 - acc: 0.5371 - val_loss: 2.1654 - val_acc: 0.4045 Epoch 41/100 25/25 [==============================] - 18s 739ms/step - loss: 1.3018 - acc: 0.5437 - val_loss: 2.1606 - val_acc: 0.4089 Epoch 42/100 25/25 [==============================] - 18s 739ms/step - loss: 1.2919 - acc: 0.5464 - val_loss: 2.1792 - val_acc: 0.4089 Epoch 43/100 25/25 [==============================] - 18s 731ms/step - loss: 1.2732 - acc: 0.5556 - val_loss: 2.2062 - val_acc: 0.4085 Epoch 44/100 25/25 [==============================] - 18s 741ms/step - loss: 1.2690 - acc: 0.5548 - val_loss: 2.2083 - val_acc: 0.4111 Epoch 45/100 25/25 [==============================] - 18s 736ms/step - loss: 1.2549 - acc: 0.5564 - val_loss: 2.2032 - val_acc: 0.4122 Epoch 46/100 25/25 [==============================] - 19s 746ms/step - loss: 1.2388 - acc: 0.5616 - val_loss: 2.2042 - val_acc: 0.4134 Epoch 47/100 25/25 [==============================] - 19s 744ms/step - loss: 1.2225 - acc: 0.5707 - val_loss: 2.2223 - val_acc: 0.4111 Epoch 48/100 25/25 [==============================] - 19s 752ms/step - loss: 1.2119 - acc: 0.5740 - val_loss: 2.2382 - val_acc: 0.4137 Epoch 49/100 25/25 [==============================] - 19s 744ms/step - loss: 1.2190 - acc: 0.5672 - val_loss: 2.2256 - val_acc: 0.4111 Epoch 50/100 25/25 [==============================] - 18s 732ms/step - loss: 1.2469 - acc: 0.5530 - val_loss: 2.2470 - val_acc: 0.4122 Epoch 51/100 25/25 [==============================] - 18s 
729ms/step - loss: 1.2022 - acc: 0.5653 - val_loss: 2.2597 - val_acc: 0.4115 Epoch 52/100 25/25 [==============================] - 18s 735ms/step - loss: 1.1719 - acc: 0.5820 - val_loss: 2.2480 - val_acc: 0.4130 Epoch 53/100 25/25 [==============================] - 18s 738ms/step - loss: 1.1519 - acc: 0.5911 - val_loss: 2.2681 - val_acc: 0.4130 Epoch 54/100 25/25 [==============================] - 18s 735ms/step - loss: 1.1356 - acc: 0.5985 - val_loss: 2.2658 - val_acc: 0.4134 Epoch 55/100 25/25 [==============================] - 18s 742ms/step - loss: 1.1276 - acc: 0.6028 - val_loss: 2.2693 - val_acc: 0.4119 Epoch 56/100 25/25 [==============================] - 18s 732ms/step - loss: 1.1170 - acc: 0.6039 - val_loss: 2.2718 - val_acc: 0.4186 Epoch 57/100 25/25 [==============================] - 18s 727ms/step - loss: 1.1273 - acc: 0.5977 - val_loss: 2.3145 - val_acc: 0.4082 Epoch 58/100 25/25 [==============================] - 18s 735ms/step - loss: 1.1150 - acc: 0.6017 - val_loss: 2.2854 - val_acc: 0.4171 Epoch 59/100 25/25 [==============================] - 18s 743ms/step - loss: 1.0920 - acc: 0.6106 - val_loss: 2.2983 - val_acc: 0.4167 Epoch 60/100 25/25 [==============================] - 18s 742ms/step - loss: 1.0721 - acc: 0.6208 - val_loss: 2.3080 - val_acc: 0.4137 Epoch 61/100 25/25 [==============================] - 18s 734ms/step - loss: 1.0591 - acc: 0.6268 - val_loss: 2.3134 - val_acc: 0.4163 Epoch 62/100 25/25 [==============================] - 19s 743ms/step - loss: 1.0622 - acc: 0.6237 - val_loss: 2.3269 - val_acc: 0.4182 Epoch 63/100 25/25 [==============================] - 19s 745ms/step - loss: 1.0444 - acc: 0.6296 - val_loss: 2.3203 - val_acc: 0.4171 Epoch 64/100 25/25 [==============================] - 19s 749ms/step - loss: 1.0339 - acc: 0.6345 - val_loss: 2.3400 - val_acc: 0.4163 Epoch 65/100 25/25 [==============================] - 18s 726ms/step - loss: 1.0173 - acc: 0.6437 - val_loss: 2.3527 - val_acc: 0.4152 Epoch 66/100 25/25 [==============================] - 19s 750ms/step - loss: 1.0080 - acc: 0.6418 - val_loss: 2.3482 - val_acc: 0.4186 Epoch 67/100 25/25 [==============================] - 19s 744ms/step - loss: 0.9952 - acc: 0.6493 - val_loss: 2.3518 - val_acc: 0.4163 Epoch 68/100 25/25 [==============================] - 19s 746ms/step - loss: 0.9862 - acc: 0.6501 - val_loss: 2.3618 - val_acc: 0.4156 Epoch 69/100 25/25 [==============================] - 18s 733ms/step - loss: 0.9772 - acc: 0.6539 - val_loss: 2.3412 - val_acc: 0.4174 Epoch 70/100 25/25 [==============================] - 18s 739ms/step - loss: 0.9839 - acc: 0.6494 - val_loss: 2.3480 - val_acc: 0.4178 Epoch 71/100 25/25 [==============================] - 18s 735ms/step - loss: 0.9797 - acc: 0.6504 - val_loss: 2.3769 - val_acc: 0.4148 Epoch 72/100 25/25 [==============================] - 19s 750ms/step - loss: 0.9784 - acc: 0.6498 - val_loss: 2.3582 - val_acc: 0.4212 Epoch 73/100 25/25 [==============================] - 19s 750ms/step - loss: 0.9659 - acc: 0.6528 - val_loss: 2.3571 - val_acc: 0.4245 Epoch 74/100 25/25 [==============================] - 19s 751ms/step - loss: 0.9597 - acc: 0.6555 - val_loss: 2.3807 - val_acc: 0.4223 Epoch 75/100 25/25 [==============================] - 19s 755ms/step - loss: 0.9509 - acc: 0.6561 - val_loss: 2.3867 - val_acc: 0.4200 Epoch 76/100 25/25 [==============================] - 18s 735ms/step - loss: 0.9396 - acc: 0.6603 - val_loss: 2.3989 - val_acc: 0.4174 Epoch 77/100 25/25 [==============================] - 18s 733ms/step - loss: 0.9399 - acc: 0.6587 - 
val_loss: 2.3478 - val_acc: 0.4271 Epoch 78/100 25/25 [==============================] - 18s 741ms/step - loss: 0.9265 - acc: 0.6658 - val_loss: 2.3876 - val_acc: 0.4260 Epoch 79/100 25/25 [==============================] - 19s 754ms/step - loss: 0.9109 - acc: 0.6715 - val_loss: 2.3793 - val_acc: 0.4282 Epoch 80/100 25/25 [==============================] - 19s 759ms/step - loss: 0.8995 - acc: 0.6754 - val_loss: 2.4562 - val_acc: 0.4148 Epoch 81/100 25/25 [==============================] - 19s 750ms/step - loss: 0.9052 - acc: 0.6730 - val_loss: 2.4305 - val_acc: 0.4200 Epoch 82/100 25/25 [==============================] - 18s 741ms/step - loss: 0.9080 - acc: 0.6731 - val_loss: 2.3791 - val_acc: 0.4308 Epoch 83/100 25/25 [==============================] - 18s 744ms/step - loss: 0.8993 - acc: 0.6746 - val_loss: 2.4070 - val_acc: 0.4249 Epoch 84/100 25/25 [==============================] - 18s 738ms/step - loss: 0.8818 - acc: 0.6817 - val_loss: 2.4243 - val_acc: 0.4234 Epoch 85/100 25/25 [==============================] - 19s 747ms/step - loss: 0.8672 - acc: 0.6859 - val_loss: 2.4078 - val_acc: 0.4312 Epoch 86/100 25/25 [==============================] - 19s 754ms/step - loss: 0.8576 - acc: 0.6895 - val_loss: 2.4118 - val_acc: 0.4323 Epoch 87/100 25/25 [==============================] - 19s 748ms/step - loss: 0.8520 - acc: 0.6909 - val_loss: 2.4273 - val_acc: 0.4271 Epoch 88/100 25/25 [==============================] - 19s 746ms/step - loss: 0.8465 - acc: 0.6946 - val_loss: 2.4359 - val_acc: 0.4289 Epoch 89/100 25/25 [==============================] - 18s 738ms/step - loss: 0.8439 - acc: 0.6944 - val_loss: 2.4444 - val_acc: 0.4275 Epoch 90/100 25/25 [==============================] - 18s 735ms/step - loss: 0.8366 - acc: 0.6957 - val_loss: 2.4353 - val_acc: 0.4297 Epoch 91/100 25/25 [==============================] - 19s 744ms/step - loss: 0.8364 - acc: 0.6967 - val_loss: 2.4390 - val_acc: 0.4334 Epoch 92/100 25/25 [==============================] - 19s 744ms/step - loss: 0.8379 - acc: 0.6943 - val_loss: 2.4737 - val_acc: 0.4267 Epoch 93/100 25/25 [==============================] - 19s 745ms/step - loss: 0.8428 - acc: 0.6892 - val_loss: 2.4346 - val_acc: 0.4360 Epoch 94/100 25/25 [==============================] - 18s 742ms/step - loss: 0.8341 - acc: 0.6878 - val_loss: 2.4982 - val_acc: 0.4249 Epoch 95/100 25/25 [==============================] - 19s 745ms/step - loss: 0.8275 - acc: 0.6943 - val_loss: 2.4671 - val_acc: 0.4289 Epoch 96/100 25/25 [==============================] - 18s 736ms/step - loss: 0.8222 - acc: 0.6973 - val_loss: 2.4893 - val_acc: 0.4252 Epoch 97/100 25/25 [==============================] - 18s 729ms/step - loss: 0.8170 - acc: 0.6981 - val_loss: 2.4721 - val_acc: 0.4308 Epoch 98/100 25/25 [==============================] - 19s 745ms/step - loss: 0.8110 - acc: 0.6991 - val_loss: 2.5120 - val_acc: 0.4252 Epoch 99/100 25/25 [==============================] - 19s 751ms/step - loss: 0.8183 - acc: 0.6959 - val_loss: 2.5533 - val_acc: 0.4204 Epoch 100/100 25/25 [==============================] - 19s 752ms/step - loss: 0.8205 - acc: 0.6929 - val_loss: 2.5484 - val_acc: 0.4200 ###Markdown Evaluation ###Code # 학습 시켰던 데이터 model.evaluate(pad_x_train, y_train) # loss: 2.4030 - acc: 0.3517 # x_test 전처리 pad_x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=500) # pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500) # pad_x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=500) # 이러한 전처리를 function 화 시키기 def pad_make(x_data): 
pad_x = tf.keras.preprocessing.sequence.pad_sequences(x_data, maxlen=500) return pad_x pad_make_x = pad_make(x_test) # 학습 시키지 않은 데이터 ( 함수 사용한 변수이용 ) model.evaluate(pad_make_x, y_test) # 학습 시키지 않은 데이터 model.evaluate(pad_x_test, y_test) # loss: 2.4158 - acc: 0.3620 # 과적합인가? 확실하게 판단 --> plot 그리기 import matplotlib.pyplot as plt plt.plot(hist.history['loss']) plt.plot(hist.history['val_loss'],'r-') plt.show() plt.plot(hist.history['acc']) plt.plot(hist.history['val_acc'],'r-') plt.show() # 'r-' 넣기 전 --> acc가 정상작동하지 않아서 이상한 결과값이 나옴 from sklearn.metrics import classification_report y_train_pred = model.predict(pad_x_train) y_train_pred[0] import numpy as np y_pred = np.argmax(y_train_pred,axis=1) y_pred.shape len(y_train) print(classification_report(y_train, y_pred)) y_test_pred = model.predict(pad_x_test) y_pred = np.argmax(y_test_pred, axis=1) print(classification_report(y_test, y_pred)) ###Output precision recall f1-score support 0 0.09 0.17 0.12 12 1 0.38 0.19 0.25 105 2 0.03 0.05 0.04 20 3 0.63 0.92 0.75 813 4 0.11 0.00 0.01 474 5 0.00 0.00 0.00 5 6 0.08 0.07 0.08 14 7 1.00 0.33 0.50 3 8 0.37 0.18 0.25 38 9 0.16 0.20 0.18 25 10 0.00 0.00 0.00 30 11 0.18 0.35 0.23 83 12 0.25 0.08 0.12 13 13 0.03 0.05 0.04 37 14 0.00 0.00 0.00 2 15 0.00 0.00 0.00 9 16 0.21 0.26 0.24 99 17 0.00 0.00 0.00 12 18 0.11 0.10 0.11 20 19 0.34 0.40 0.37 133 20 0.10 0.27 0.15 70 21 0.18 0.11 0.14 27 22 0.00 0.00 0.00 7 23 0.00 0.00 0.00 12 24 0.12 0.05 0.07 19 25 0.62 0.16 0.26 31 26 0.00 0.00 0.00 8 27 0.20 0.25 0.22 4 28 0.00 0.00 0.00 10 29 0.00 0.00 0.00 4 30 0.00 0.00 0.00 12 31 0.17 0.08 0.11 13 32 0.00 0.00 0.00 10 33 0.00 0.00 0.00 5 34 0.00 0.00 0.00 7 35 0.00 0.00 0.00 6 36 0.25 0.09 0.13 11 37 0.00 0.00 0.00 2 38 0.00 0.00 0.00 3 39 0.00 0.00 0.00 5 40 0.15 0.20 0.17 10 41 0.00 0.00 0.00 8 42 0.00 0.00 0.00 3 43 0.00 0.00 0.00 6 44 0.00 0.00 0.00 5 45 0.00 0.00 0.00 1 accuracy 0.42 2246 macro avg 0.13 0.10 0.10 2246 weighted avg 0.34 0.42 0.35 2246 ###Markdown Service ###Code # 문장 입력 # --> 문장을 숫자로( 사전을 기준으로 ) --> [, , , , ...] (스칼라(딕셔너리)) --> pad_sequence model.predict() ###Output _____no_output_____ ###Markdown ###Code import tensorflow as tf (x_train,y_train),(x_test,y_test) = tf.keras.datasets.reuters.load_data(num_words=10000) x_train.shape,y_train.shape,x_test.shape,y_test.shape print(y_train[50], x_train[50]) len(x_train[50]), len(x_train[400]), len(x_train[200]), len(x_train[600]) pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500) len(pad_x_train[50]) import numpy as np len(np.unique(y_train)), np.unique(y_train).shape ###Output _____no_output_____ ###Markdown make model ###Code model = tf.keras.models.Sequential() model.add(tf.keras.layers.Embedding(input_length=500, input_dim=10000, output_dim=24)) # input layer # 벡터 방식으로 변환시켜줘야 한다. 이 때 input_dim은 사전의 사이즈, output_dim은 차원을 지정 model.add(tf.keras.layers.LSTM(24, return_sequences=True, activation='tanh')) model.add(tf.keras.layers.LSTM(12, activation='tanh')) # model.add(tf.keras.layers.Flatten()) # hidden layer # dense는 1차원으로 들어와야 한다. 그래서 1차원으로 변형시켜줘야 하는데 이런 역할을 하는 것이 Flatten이다. model.add(tf.keras.layers.Dense(46, activation='softmax')) # output layer model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc']) #gadget # hist = model.fit(pad_x_train, y_train, epochs=5, validation_split=0.3, batch_size=128) # epochs는 default가 1이다. 
따라서 아무것도 안쓰면 1번만 실행 hist = model.fit(pad_x_train, y_train, epochs=100, validation_split=0.3, batch_size=256) ###Output Epoch 1/100 25/25 [==============================] - 22s 738ms/step - loss: 3.7332 - acc: 0.1675 - val_loss: 3.4466 - val_acc: 0.2215 Epoch 2/100 25/25 [==============================] - 18s 706ms/step - loss: 3.2134 - acc: 0.2150 - val_loss: 2.9539 - val_acc: 0.2215 Epoch 3/100 25/25 [==============================] - 18s 708ms/step - loss: 2.8364 - acc: 0.2150 - val_loss: 2.6693 - val_acc: 0.2215 Epoch 4/100 25/25 [==============================] - 18s 707ms/step - loss: 2.6251 - acc: 0.2150 - val_loss: 2.5196 - val_acc: 0.2215 Epoch 5/100 25/25 [==============================] - 17s 701ms/step - loss: 2.5121 - acc: 0.2430 - val_loss: 2.4440 - val_acc: 0.3532 Epoch 6/100 25/25 [==============================] - 18s 708ms/step - loss: 2.4581 - acc: 0.3510 - val_loss: 2.4115 - val_acc: 0.3532 Epoch 7/100 25/25 [==============================] - 18s 707ms/step - loss: 2.4358 - acc: 0.3510 - val_loss: 2.3967 - val_acc: 0.3532 Epoch 8/100 25/25 [==============================] - 18s 706ms/step - loss: 2.4253 - acc: 0.3510 - val_loss: 2.3896 - val_acc: 0.3532 Epoch 9/100 25/25 [==============================] - 17s 702ms/step - loss: 2.4201 - acc: 0.3510 - val_loss: 2.3861 - val_acc: 0.3532 Epoch 10/100 25/25 [==============================] - 17s 700ms/step - loss: 2.4172 - acc: 0.3510 - val_loss: 2.3841 - val_acc: 0.3532 Epoch 11/100 25/25 [==============================] - 17s 700ms/step - loss: 2.4154 - acc: 0.3510 - val_loss: 2.3834 - val_acc: 0.3532 Epoch 12/100 25/25 [==============================] - 18s 703ms/step - loss: 2.4142 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 13/100 25/25 [==============================] - 18s 706ms/step - loss: 2.4134 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 14/100 25/25 [==============================] - 18s 703ms/step - loss: 2.4130 - acc: 0.3510 - val_loss: 2.3819 - val_acc: 0.3532 Epoch 15/100 25/25 [==============================] - 17s 698ms/step - loss: 2.4126 - acc: 0.3510 - val_loss: 2.3815 - val_acc: 0.3532 Epoch 16/100 25/25 [==============================] - 17s 700ms/step - loss: 2.4123 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 17/100 25/25 [==============================] - 18s 703ms/step - loss: 2.4122 - acc: 0.3510 - val_loss: 2.3817 - val_acc: 0.3532 Epoch 18/100 25/25 [==============================] - 18s 704ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3819 - val_acc: 0.3532 Epoch 19/100 25/25 [==============================] - 17s 701ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3817 - val_acc: 0.3532 Epoch 20/100 25/25 [==============================] - 17s 701ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 21/100 25/25 [==============================] - 18s 707ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 22/100 25/25 [==============================] - 18s 704ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 23/100 25/25 [==============================] - 18s 704ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3819 - val_acc: 0.3532 Epoch 24/100 25/25 [==============================] - 18s 709ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 25/100 25/25 [==============================] - 18s 708ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 26/100 25/25 [==============================] - 18s 703ms/step 
- loss: 2.4119 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 27/100 25/25 [==============================] - 17s 702ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3819 - val_acc: 0.3532 Epoch 28/100 25/25 [==============================] - 17s 702ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 29/100 25/25 [==============================] - 17s 699ms/step - loss: 2.4123 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 30/100 25/25 [==============================] - 17s 701ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 31/100 25/25 [==============================] - 18s 704ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 32/100 25/25 [==============================] - 17s 702ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 33/100 25/25 [==============================] - 17s 700ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 34/100 25/25 [==============================] - 17s 700ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 35/100 25/25 [==============================] - 18s 704ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 36/100 25/25 [==============================] - 18s 706ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 37/100 25/25 [==============================] - 17s 701ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 38/100 25/25 [==============================] - 18s 704ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 39/100 25/25 [==============================] - 17s 699ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 40/100 25/25 [==============================] - 17s 701ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 41/100 25/25 [==============================] - 18s 706ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 42/100 25/25 [==============================] - 18s 705ms/step - loss: 2.4121 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 43/100 25/25 [==============================] - 17s 703ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 44/100 25/25 [==============================] - 17s 698ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3828 - val_acc: 0.3532 Epoch 45/100 25/25 [==============================] - 18s 705ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 46/100 25/25 [==============================] - 17s 700ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 47/100 25/25 [==============================] - 18s 703ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 48/100 25/25 [==============================] - 18s 706ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 49/100 25/25 [==============================] - 18s 711ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 50/100 25/25 [==============================] - 18s 709ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 51/100 25/25 [==============================] - 18s 707ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 52/100 25/25 [==============================] - 18s 706ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 
2.3825 - val_acc: 0.3532 Epoch 53/100 25/25 [==============================] - 18s 704ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 54/100 25/25 [==============================] - 17s 699ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 55/100 25/25 [==============================] - 18s 713ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 56/100 25/25 [==============================] - 17s 700ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3827 - val_acc: 0.3532 Epoch 57/100 25/25 [==============================] - 18s 704ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 58/100 25/25 [==============================] - 18s 705ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 59/100 25/25 [==============================] - 18s 710ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 60/100 25/25 [==============================] - 18s 709ms/step - loss: 2.4113 - acc: 0.3510 - val_loss: 2.3792 - val_acc: 0.3532 Epoch 61/100 25/25 [==============================] - 18s 712ms/step - loss: 2.4111 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 62/100 25/25 [==============================] - 18s 713ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 63/100 25/25 [==============================] - 18s 707ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 64/100 25/25 [==============================] - 18s 711ms/step - loss: 2.4111 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 65/100 25/25 [==============================] - 18s 717ms/step - loss: 2.4109 - acc: 0.3510 - val_loss: 2.3807 - val_acc: 0.3532 Epoch 66/100 25/25 [==============================] - 18s 719ms/step - loss: 2.3894 - acc: 0.3510 - val_loss: 2.2786 - val_acc: 0.3532 Epoch 67/100 25/25 [==============================] - 18s 710ms/step - loss: 2.2800 - acc: 0.3518 - val_loss: 2.1997 - val_acc: 0.3536 Epoch 68/100 25/25 [==============================] - 18s 707ms/step - loss: 2.1225 - acc: 0.3525 - val_loss: 2.1012 - val_acc: 0.3532 Epoch 69/100 25/25 [==============================] - 18s 706ms/step - loss: 2.0070 - acc: 0.3534 - val_loss: 2.0062 - val_acc: 0.4071 Epoch 70/100 25/25 [==============================] - 18s 708ms/step - loss: 1.8943 - acc: 0.4473 - val_loss: 1.9359 - val_acc: 0.5035 Epoch 71/100 25/25 [==============================] - 18s 707ms/step - loss: 1.7927 - acc: 0.5341 - val_loss: 1.8956 - val_acc: 0.5150 Epoch 72/100 25/25 [==============================] - 18s 704ms/step - loss: 1.7017 - acc: 0.5616 - val_loss: 1.8665 - val_acc: 0.5336 Epoch 73/100 25/25 [==============================] - 18s 706ms/step - loss: 1.6445 - acc: 0.5626 - val_loss: 1.8399 - val_acc: 0.5195 Epoch 74/100 25/25 [==============================] - 18s 711ms/step - loss: 1.5767 - acc: 0.5806 - val_loss: 1.7968 - val_acc: 0.5477 Epoch 75/100 25/25 [==============================] - 18s 711ms/step - loss: 1.5209 - acc: 0.5979 - val_loss: 1.7935 - val_acc: 0.5429 Epoch 76/100 25/25 [==============================] - 18s 709ms/step - loss: 1.4708 - acc: 0.6087 - val_loss: 1.7657 - val_acc: 0.5510 Epoch 77/100 25/25 [==============================] - 18s 708ms/step - loss: 1.4268 - acc: 0.6191 - val_loss: 1.7700 - val_acc: 0.5551 Epoch 78/100 25/25 [==============================] - 18s 712ms/step - loss: 1.3911 - acc: 0.6254 - val_loss: 1.7580 - val_acc: 0.5506 Epoch 79/100 25/25 
[==============================] - 18s 706ms/step - loss: 1.3597 - acc: 0.6308 - val_loss: 1.7416 - val_acc: 0.5577 Epoch 80/100 25/25 [==============================] - 18s 708ms/step - loss: 1.3221 - acc: 0.6463 - val_loss: 1.7298 - val_acc: 0.5744 Epoch 81/100 25/25 [==============================] - 17s 701ms/step - loss: 1.2850 - acc: 0.6588 - val_loss: 1.7265 - val_acc: 0.5763 Epoch 82/100 25/25 [==============================] - 18s 703ms/step - loss: 1.2534 - acc: 0.6741 - val_loss: 1.7330 - val_acc: 0.5677 Epoch 83/100 25/25 [==============================] - 18s 705ms/step - loss: 1.2251 - acc: 0.6827 - val_loss: 1.7276 - val_acc: 0.5759 Epoch 84/100 25/25 [==============================] - 18s 705ms/step - loss: 1.1881 - acc: 0.7043 - val_loss: 1.7203 - val_acc: 0.5763 Epoch 85/100 25/25 [==============================] - 18s 706ms/step - loss: 1.1528 - acc: 0.7175 - val_loss: 1.7192 - val_acc: 0.5807 Epoch 86/100 25/25 [==============================] - 18s 711ms/step - loss: 1.1181 - acc: 0.7280 - val_loss: 1.7384 - val_acc: 0.5740 Epoch 87/100 25/25 [==============================] - 18s 711ms/step - loss: 1.0912 - acc: 0.7388 - val_loss: 1.7345 - val_acc: 0.5837 Epoch 88/100 25/25 [==============================] - 18s 716ms/step - loss: 1.0652 - acc: 0.7466 - val_loss: 1.7003 - val_acc: 0.5870 Epoch 89/100 25/25 [==============================] - 18s 720ms/step - loss: 1.0322 - acc: 0.7578 - val_loss: 1.7053 - val_acc: 0.5937 Epoch 90/100 25/25 [==============================] - 18s 714ms/step - loss: 0.9936 - acc: 0.7727 - val_loss: 1.6881 - val_acc: 0.6015 Epoch 91/100 25/25 [==============================] - 18s 715ms/step - loss: 0.9715 - acc: 0.7775 - val_loss: 1.7029 - val_acc: 0.5926 Epoch 92/100 25/25 [==============================] - 18s 713ms/step - loss: 0.9377 - acc: 0.7864 - val_loss: 1.6981 - val_acc: 0.6015 Epoch 93/100 25/25 [==============================] - 18s 711ms/step - loss: 0.9204 - acc: 0.7877 - val_loss: 1.7004 - val_acc: 0.6004 Epoch 94/100 25/25 [==============================] - 18s 711ms/step - loss: 0.8892 - acc: 0.7940 - val_loss: 1.7046 - val_acc: 0.5989 Epoch 95/100 25/25 [==============================] - 18s 711ms/step - loss: 0.8793 - acc: 0.7923 - val_loss: 1.7327 - val_acc: 0.5900 Epoch 96/100 25/25 [==============================] - 18s 715ms/step - loss: 0.8554 - acc: 0.8012 - val_loss: 1.7222 - val_acc: 0.6041 Epoch 97/100 25/25 [==============================] - 18s 711ms/step - loss: 0.8364 - acc: 0.8034 - val_loss: 1.7642 - val_acc: 0.5944 Epoch 98/100 25/25 [==============================] - 18s 712ms/step - loss: 0.8161 - acc: 0.8088 - val_loss: 1.7426 - val_acc: 0.6030 Epoch 99/100 25/25 [==============================] - 18s 712ms/step - loss: 0.8006 - acc: 0.8136 - val_loss: 1.7607 - val_acc: 0.6000 Epoch 100/100 25/25 [==============================] - 18s 708ms/step - loss: 0.7828 - acc: 0.8179 - val_loss: 1.7394 - val_acc: 0.6041 ###Markdown Evaluation ###Code # 학습시켰던 데이터 model.evaluate(pad_x_train, y_train) ###Output 281/281 [==============================] - 17s 62ms/step - loss: 1.0593 - acc: 0.7555 ###Markdown * LSTM(24), epochs : 1 ---> loss: 2.2707 - acc: 0.3517* LSTM(24), LSTM(12), epochs : 1 ---> loss: 3.1497 - acc: 0.3517* LSTM(24), LSTM(12), epochs : 5 ---> loss: 2.4067 - acc: 0.3517* LSTM(24), LSTM(12), epochs : 5 ---> loss: 1.0593 - acc: 0.7555 ###Code pad_x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=500) def pad_make(x_data): pad_x = 
tf.keras.preprocessing.sequence.pad_sequences(x_data, maxlen=500) return pad_x pad_make_x = pad_make(x_test) model.evaluate(pad_make_x, y_test) model.evaluate(pad_x_test, y_test) # judging by the accuracy, the model appears to have learned reasonably well import matplotlib.pyplot as plt plt.plot(hist.history['loss']) plt.plot(hist.history['val_loss'], 'r-') plt.show() plt.plot(hist.history['acc']) plt.plot(hist.history['val_acc'], 'r-') plt.show() from sklearn.metrics import classification_report y_train_pred = model.predict(pad_x_train) y_train_pred[0] import numpy as np y_pred = np.argmax(y_train_pred, axis=1) y_pred.shape len(y_train) print(classification_report(y_train, y_pred)) y_test_pred = model.predict(pad_x_test) y_pred = np.argmax(y_test_pred, axis=1) print(classification_report(y_test, y_pred)) ###Output _____no_output_____
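###Markdown
As a final step, here is a minimal sketch (not part of the original notebook) of the serving idea outlined earlier: take a raw sentence, map its words to ids with the Reuters word index, pad to length 500, and call model.predict on the model trained above. It assumes the default Keras Reuters conventions (indices shifted by 3, id 2 for out-of-vocabulary words, num_words=10000 as used above); the helper name encode_sentence, the sample sentence, and the lowercase/split tokenization are illustrative simplifications only.
###Code
import numpy as np
import tensorflow as tf

# word -> index mapping used to build the Reuters dataset
word_index = tf.keras.datasets.reuters.get_word_index()

def encode_sentence(sentence, num_words=10000, maxlen=500):
    # load_data() reserves 0 (padding), 1 (start) and 2 (out-of-vocabulary)
    # and shifts every dictionary index up by 3 (assumed default settings).
    ids = [1]  # start token
    for word in sentence.lower().split():
        idx = word_index.get(word)
        if idx is not None and idx + 3 < num_words:
            ids.append(idx + 3)
        else:
            ids.append(2)  # out-of-vocabulary
    # pad/truncate to the same length the model was trained on
    return tf.keras.preprocessing.sequence.pad_sequences([ids], maxlen=maxlen)

sample = "japan machinery orders rose in march"  # illustrative sentence
encoded = encode_sentence(sample)
pred = model.predict(encoded)       # probabilities over the 46 topics
print(np.argmax(pred, axis=1))      # predicted topic id
###Output
_____no_output_____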
IN102_11_Enum_Macro.ipynb
###Markdown
Contents:
1. Enumerated types
2. Literal constants
###Code
!pip install git+git://github.com/frehseg/gcc4jupyter
%load_ext gcc_plugin
###Output
_____no_output_____
###Markdown
Enumerated types
Enumerated types represent values chosen from a (small) set, for example:
- north, east, south, west,
- hearts, diamonds, clubs, spades,
- admitted, rejected, undecided.

To represent these values in a program, each value has to be associated with a number. We could pick integer values:
- north = 0, east = 1, south = 2, west = 3,
- hearts = 0, diamonds = 1, clubs = 2, spades = 3,
- admitted = 0, rejected = 1, undecided = 2.

We could then treat them as integers in the program:
```
int d = 2; // start facing south
...
if (d == 3) { // heading west
    printf("That is not the way.");
}
```
However, it is tedious and error-prone to remember the different numbers, especially in a large program written by several people. In C, we can ask the compiler to do this work for us by **declaring an `enum` type**:

`enum` type-name `{` value-name1 `,` value-name2 `,` ... `};`

By default, the compiler associates value-name1 with 0, value-name2 with 1, and so on. The code becomes much more readable and easier to modify:
```
enum direction { NORD, EST, SUD, OUEST };
enum direction d = SUD; // start facing south
...
if (d == OUEST) { // heading west
    printf("That is not the way.");
}
```
Here is a small example:
###Code
%%c
#include <stdio.h>

enum direction { NORD, EST, SUD, OUEST };

int main(void) {
    enum direction d = SUD; // start facing south
    if (d == OUEST) { // heading west
        printf("That is not the way.\n");
    } else {
        printf("This way is fine.\n");
    }
    printf("integer associated with NORD: %d\n", NORD);
    printf("integer associated with EST: %d\n", EST);
    printf("integer associated with SUD: %d\n", SUD);
    printf("integer associated with OUEST: %d\n", OUEST);
    return 0;
}
###Output
_____no_output_____
###Markdown
`enum` types can be used like any other type, for example in an array or in a function:
###Code
%%c
#include <stdio.h>

enum direction { NORD, EST, SUD, OUEST };

enum direction opposee(enum direction d) {
    if (d == NORD) {
        return SUD;
    } else if (d == EST) {
        return OUEST;
    } else if (d == SUD) {
        return NORD;
    } else {
        return EST;
    }
}

int main(void) {
    enum direction d1 = SUD; // start facing south
    // turn around
    enum direction d2 = opposee(d1);
    printf("the opposite of SUD: %d\n", d2);
    printf("integer associated with NORD: %d\n", NORD);
    printf("integer associated with EST: %d\n", EST);
    printf("integer associated with SUD: %d\n", SUD);
    printf("integer associated with OUEST: %d\n", OUEST);
    return 0;
}
###Output
_____no_output_____
###Markdown
To display an `enum` value in a more readable way, it can be associated with an array of character strings:
###Code
%%c
#include <stdio.h>

enum direction { NORD, EST, SUD, OUEST };
char* direction_chaine[] = { "Nord", "Est", "Sud", "Ouest" };

enum direction opposee(enum direction d) {
    if (d == NORD) {
        return SUD;
    } else if (d == EST) {
        return OUEST;
    } else if (d == SUD) {
        return NORD;
    } else {
        return EST;
    }
}

int main(void) {
    enum direction d1 = SUD; // start facing south
    // turn around
    enum direction d2 = opposee(d1);
    printf("the opposite of %s is %s\n",
        direction_chaine[d1],
        direction_chaine[d2]
    );
    return 0;
}
###Output
_____no_output_____
###Markdown
Literal constants
If a constant number is used throughout the program, it is preferable to replace it with a **macro** that associates it with a name.

An example of a program that uses a parameter everywhere, which for now is `10`:
###Code
%%c
#include <stdio.h>

void ligne() {
    for (int i = 0; i < 10; ++i) {
        printf("*");
    }
}

int main() {
    for (int i = 0; i < 10; ++i) {
        ligne();
        printf("\n");
    }
}
###Output
_____no_output_____
###Markdown
If we want to replace 10 with 20, it is easy to make a mistake. It is better to use a global constant:
###Code
%%c
#include <stdio.h>

#define DIMENSION 10

void ligne() {
    for (int i = 0; i < DIMENSION; ++i) {
        printf("*");
    }
}

int main() {
    for (int i = 0; i < DIMENSION; ++i) {
        ligne();
        printf("\n");
    }
}
###Output
_____no_output_____
###Markdown
Macros are replaced textually, before compilation, by the **C preprocessor**.
###Code
###Output
_____no_output_____
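###Markdown
As a small illustrative sketch (not part of the original notebook), here is a function-like macro; the name CARRE is an arbitrary choice. Since the preprocessor only substitutes text, the parentheses around x matter: with `#define CARRE(x) x*x`, `CARRE(n + 1)` would expand to `n + 1*n + 1` rather than `(n + 1) * (n + 1)`. The expanded source can be inspected with `gcc -E`.
###Code
%%c
#include <stdio.h>

// A function-like macro: the preprocessor replaces every use of
// CARRE(...) with the text on the right before the compiler runs.
#define CARRE(x) ((x) * (x))

int main(void) {
    int n = 4;
    printf("CARRE(n) = %d\n", CARRE(n));          // expands to ((n) * (n)) -> 16
    printf("CARRE(n + 1) = %d\n", CARRE(n + 1));  // expands to ((n + 1) * (n + 1)) -> 25
    return 0;
}
###Output
_____no_output_____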
Aula02/.ipynb_checkpoints/python-tutorial-checkpoint.ipynb
###Markdown Python TutorialA tutorial for Python 3.6 Data typesIn Python you do not need to declare variables or specify their type. The same variable can also receive data of different types. ###Code
# The same variable receiving different types
var = 5
print(var)
var = "oi"
print(var)
var = 3.14
print(var)
# converting from integer to float
print("Int -> float", float(5))
# converting from float to integer
print("Float -> int", int(3.1415))
# converting from integer to string
print("Int -> str", str(234))
###Output Int -> float 5.0 Float -> int 3 Int -> str 234 ###Markdown Basic operationsPython supports the same arithmetic operations as C, plus a few extra ones. ###Code
x = 2
y = 3
# Addition
print("Add:", y + x)
# Subtraction
print("Sub:", y - x)
# Multiplication
print("Mult:", y * x)
# True (float) division
print("Div:", y / x)
# Integer division
print("Div int:", y // x)
# Exponentiation
print("Exp:", y ** x)
# There is no x++
x += 1
print("x++ ", x)
###Output Add: 5 Sub: 1 Mult: 6 Div: 1.5 Div int: 1 Exp: 9 x++ 3 ###Markdown Logical operationsHere we see how the syntax for logic differs. `True` and `False` are available natively. ###Code
# AND
print(True and False)
# OR
print(True or False)
# Variables
print(var > y)
# NOT
print(not True)
###Output False True True False ###Markdown StringsSome functions that come ready-made for working with strings ###Code
s = 'oi, tudo bem!'
# Split
print("Split em espaço:", s.split())
# Replace
print("Replace na string:", s.replace('!', '?'))
###Output Split em espaço: ['oi,', 'tudo', 'bem!'] Replace na string: oi, tudo bem? ###Markdown ListsHow to make arrays in Python, and the main difference from C ###Code
# The elements of a list do not need to be of the same type
l = [1, 3.14, 5, 7, 8, 'eita', []]
print(l)
###Output [1, 3.14, 5, 7, 8, 'eita', []] ###Markdown Indexing listsThere is more than one way to index ###Code
l = [1, 3, 5, 7, 8, 'eita']
# First element
print(l[0])
# First four elements (indices 0 to 3)
print(l[0:4])
# Last element
print(l[-1])
# From start to end, every other element
print(l[::2])
# Reversing the list
print(l[::-1])
###Output ['eita', 8, 7, 5, 3, 1] ###Markdown Growing lists ###Code
x = [1, 2, 3]
y = [5, 6, 7]
# Appending an element
x.append(4)
print(x)
# Concatenating two lists
print(x + y)
# Repeating the list
print(x * 3)
# The standard way to make a vector of zeros in Python
zeros = [0] * 10
print(zeros)
###Output [0, 0, 0, 0, 0, 0, 0, 0, 0, 0] ###Markdown For loopsYour head may explode in this section ###Code
# For loop from 0 to 4, printing each element
for i in range(5):
    print(i)
# List that goes from 0 to 9
l = []
for i in range(10):
    l.append(i)
print(l)
###Output [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ###Markdown There is a big difference in the way Python and C use the for loop ###Code
# For over a list, C-style
for i in range(len(l)):
    print(l[i])
# For over a list, Python-style
for x in l:
    print(x)
###Output 0 1 2 3 4 5 6 7 8 9 ###Markdown Pythonic way - List comprehension ###Code
# List that goes from 0 to 9. Does the same thing as the code that built the list l, but in a single line
[i for i in range(10)]
# List from 0 to 9 with even numbers only
[i for i in range(10) if i % 2 == 0]
# For over two lists at the same time
x = ['a', 'b', 'c']
y = [1, 2, 3]
for letra, numero in zip(x, y):
    print(letra, numero)
###Output a 1 b 2 c 3 ###Markdown DictionariesDictionaries are hash tables that store a key and a value ###Code
a = {'oi': 5, 'tchau': 10}
a
###Output _____no_output_____ ###Markdown You can iterate over a dictionary the same way as over lists; you just need to choose what to iterate over ###Code
# Keys
print(a.keys())
# Values
print(a.values())
# Items
print(a.items())
###Output dict_keys(['oi', 'tchau']) dict_values([5, 10]) dict_items([('oi', 5), ('tchau', 10)]) ###Markdown FunctionsAs always, you can use them as you would in C, but Python offers a few extra features ###Code
# Regular function
def fun(a, b):
    return a * b

fun(4, 5)
# Function that returns 2 elements
def fun2(a, b):
    return a * 2, b * 2

fun2(1, 2)
# Function with a default argument
def func3(a, b = 10):
    return a * b

# Calling it normally
print(func3(5, 2))
# Without passing the parameter
print(func3(5))
###Output 10 50
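###Markdown To wrap up, here is a small extra sketch (not part of the original tutorial) that combines the last two sections: iterating over a dictionary with `items()` inside a function that has a default argument. The helper name `show_items` is made up for illustration. ###Code
# Extra sketch: dictionary iteration plus a default argument
# (the dictionary `a` reuses the example from the Dictionaries section)
def show_items(d, prefix='-> '):
    # d.items() yields (key, value) pairs, as shown above
    for key, value in d.items():
        print(prefix, key, value)

a = {'oi': 5, 'tchau': 10}
# using the default prefix
show_items(a)
# overriding the default prefix
show_items(a, prefix='** ')
###Output _____no_output_____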
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial2.ipynb
###Markdown Tutorial 2: Wilson-Cowan Model**Week 2, Day 4: Dynamic Networks****By Neuromatch Academy**__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom, Siddharth Suresh **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial Objectives*Estimated timing of tutorial: 1 hour, 35 minutes*In the previous tutorial, you became familiar with a neuronal network consisting of only an excitatory population. Here, we extend the approach we used to include both excitatory and inhibitory neuronal populations in our network. A simple, yet powerful model to study the dynamics of two interacting populations of excitatory and inhibitory neurons, is the so-called **Wilson-Cowan** rate model, which will be the subject of this tutorial.The objectives of this tutorial are to:- Write the **Wilson-Cowan** equations for the firing rate dynamics of a 2D system composed of an excitatory (E) and an inhibitory (I) population of neurons- Simulate the dynamics of the system, i.e., Wilson-Cowan model.- Plot the frequency-current (F-I) curves for both populations (i.e., E and I).- Visualize and inspect the behavior of the system using **phase plane analysis**, **vector fields**, and **nullclines**.Bonus steps:- Find and plot the **fixed points** of the Wilson-Cowan model.- Investigate the stability of the Wilson-Cowan model by linearizing its dynamics and examining the **Jacobian matrix**.- Learn how the Wilson-Cowan model can reach an oscillatory state.Bonus steps (applications):- Visualize the behavior of an Inhibition-stabilized network.- Simulate working memory using the Wilson-Cowan model.\\Reference paper:_[Wilson H and Cowan J (1972) Excitatory and inhibitory interactions in localized populations of model neurons. 
Biophysical Journal 12](https://doi.org/10.1016/S0006-3495(72)86068-5)_ ###Code # @title Tutorial slides # @markdown These are the slides for the videos in all tutorials today from IPython.display import IFrame IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/nvuty/?direct%26mode=render%26action=download%26mode=render", width=854, height=480) ###Output _____no_output_____ ###Markdown --- Setup ###Code # Imports import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt # root-finding algorithm # @title Figure Settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # @title Plotting Functions def plot_FI_inverse(x, a, theta): f, ax = plt.subplots() ax.plot(x, F_inv(x, a=a, theta=theta)) ax.set(xlabel="$x$", ylabel="$F^{-1}(x)$") def plot_FI_EI(x, FI_exc, FI_inh): plt.figure() plt.plot(x, FI_exc, 'b', label='E population') plt.plot(x, FI_inh, 'r', label='I population') plt.legend(loc='lower right') plt.xlabel('x (a.u.)') plt.ylabel('F(x)') plt.show() def my_test_plot(t, rE1, rI1, rE2, rI2): plt.figure() ax1 = plt.subplot(211) ax1.plot(pars['range_t'], rE1, 'b', label='E population') ax1.plot(pars['range_t'], rI1, 'r', label='I population') ax1.set_ylabel('Activity') ax1.legend(loc='best') ax2 = plt.subplot(212, sharex=ax1, sharey=ax1) ax2.plot(pars['range_t'], rE2, 'b', label='E population') ax2.plot(pars['range_t'], rI2, 'r', label='I population') ax2.set_xlabel('t (ms)') ax2.set_ylabel('Activity') ax2.legend(loc='best') plt.tight_layout() plt.show() def plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI): plt.figure() plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.show() def my_plot_nullcline(pars): Exc_null_rE = np.linspace(-0.01, 0.96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, 0.8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') def my_plot_vector(pars, my_n_skip=2, myscale=5): EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = my_n_skip plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=myscale, facecolor='c') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectory(pars, mycolor, x_init, mylabel): pars = pars.copy() pars['rE_init'], pars['rI_init'] = x_init[0], x_init[1] rE_tj, rI_tj = simulate_wc(**pars) plt.plot(rE_tj, rI_tj, color=mycolor, label=mylabel) plt.plot(x_init[0], x_init[1], 'o', color=mycolor, ms=8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectories(pars, dx, n, mylabel): """ Solve for I along the E_grid from dE/dt = 0. 
Expects: pars : Parameter dictionary dx : increment of initial values n : n*n trjectories mylabel : label for legend Returns: figure of trajectory """ pars = pars.copy() for ie in range(n): for ii in range(n): pars['rE_init'], pars['rI_init'] = dx * ie, dx * ii rE_tj, rI_tj = simulate_wc(**pars) if (ie == n-1) & (ii == n-1): plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8, label=mylabel) else: plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def plot_complete_analysis(pars): plt.figure(figsize=(7.7, 6.)) # plot example trajectories my_plot_trajectories(pars, 0.2, 6, 'Sample trajectories \nfor different init. conditions') my_plot_trajectory(pars, 'orange', [0.6, 0.8], 'Sample trajectory for \nlow activity') my_plot_trajectory(pars, 'm', [0.6, 0.6], 'Sample trajectory for \nhigh activity') # plot nullclines my_plot_nullcline(pars) # plot vector field EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = 2 plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=5., facecolor='c') plt.legend(loc=[1.02, 0.57], handlelength=1) plt.show() def plot_fp(x_fp, position=(0.02, 0.1), rotation=0): plt.plot(x_fp[0], x_fp[1], 'ko', ms=8) plt.text(x_fp[0] + position[0], x_fp[1] + position[1], f'Fixed Point1=\n({x_fp[0]:.3f}, {x_fp[1]:.3f})', horizontalalignment='center', verticalalignment='bottom', rotation=rotation) # @title Helper Functions def default_pars(**kwargs): pars = {} # Excitatory parameters pars['tau_E'] = 1. # Timescale of the E population [ms] pars['a_E'] = 1.2 # Gain of the E population pars['theta_E'] = 2.8 # Threshold of the E population # Inhibitory parameters pars['tau_I'] = 2.0 # Timescale of the I population [ms] pars['a_I'] = 1.0 # Gain of the I population pars['theta_I'] = 4.0 # Threshold of the I population # Connection strength pars['wEE'] = 9. # E to E pars['wEI'] = 4. # I to E pars['wIE'] = 13. # E to I pars['wII'] = 11. # I to I # External input pars['I_ext_E'] = 0. pars['I_ext_I'] = 0. # simulation parameters pars['T'] = 50. # Total duration of simulation [ms] pars['dt'] = .1 # Simulation time step [ms] pars['rE_init'] = 0.2 # Initial value of E pars['rI_init'] = 0.2 # Initial value of I # External parameters if any for k in kwargs: pars[k] = kwargs[k] # Vector of discretized time points [ms] pars['range_t'] = np.arange(0, pars['T'], pars['dt']) return pars def F(x, a, theta): """ Population activation function, F-I curve Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: f : the population activation response f(x) for input x """ # add the expression of f = F(x) f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1 return f def dF(x, a, theta): """ Derivative of the population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: dFdx : Derivative of the population activation function. """ dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2 return dFdx ###Output _____no_output_____ ###Markdown The helper functions included:- Parameter dictionary: `default_pars(**kwargs)`. You can use: - `pars = default_pars()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. 
- `pars = default_pars(T=T_sim, dt=time_step)` to set a different simulation time and time step - After `pars = default_pars()`, use `par['New_para'] = value` to add a new parameter with its value - Pass to functions that accept individual parameters with `func(**pars)`- F-I curve: `F(x, a, theta)`- Derivative of the F-I curve: `dF(x, a, theta)` --- Section 1: Wilson-Cowan model of excitatory and inhibitory populations ###Code # @title Video 1: Phase analysis of the Wilson-Cowan E-I model from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1CD4y1m7dK", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="GCpQmh45crM", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown This video explains how to model a network with interacting populations of excitatory and inhibitory neurons (the Wilson-Cowan model). It shows how to solve the network activity vs. time and introduces the phase plane in two dimensions. Section 1.1: Mathematical description of the WC model*Estimated timing to here from start of tutorial: 12 min* Click here for text recap of relevant part of video Many of the rich dynamics recorded in the brain are generated by the interaction of excitatory and inhibitory subtype neurons. Here, similar to what we did in the previous tutorial, we will model two coupled populations of E and I neurons (**Wilson-Cowan** model). We can write two coupled differential equations, each representing the dynamics of the excitatory or inhibitory population:\begin{align}\tau_E \frac{dr_E}{dt} &= -r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)\\\tau_I \frac{dr_I}{dt} &= -r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) \qquad (1)\end{align}$r_E(t)$ represents the average activation (or firing rate) of the excitatory population at time $t$, and $r_I(t)$ the activation (or firing rate) of the inhibitory population. The parameters $\tau_E$ and $\tau_I$ control the timescales of the dynamics of each population. Connection strengths are given by: $w_{EE}$ (E $\rightarrow$ E), $w_{EI}$ (I $\rightarrow$ E), $w_{IE}$ (E $\rightarrow$ I), and $w_{II}$ (I $\rightarrow$ I). The terms $w_{EI}$ and $w_{IE}$ represent connections from inhibitory to excitatory population and vice versa, respectively. The transfer functions (or F-I curves) $F_E(x;a_E,\theta_E)$ and $F_I(x;a_I,\theta_I)$ can be different for the excitatory and the inhibitory populations. Coding Exercise 1.1: Plot out the F-I curves for the E and I populations Let's first plot out the F-I curves for the E and I populations using the helper function `F` with default parameter values. 
###Code help(F) pars = default_pars() x = np.arange(0, 10, .1) print(pars['a_E'], pars['theta_E']) print(pars['a_I'], pars['theta_I']) ################################################################### # TODO for students: compute and plot the F-I curve here # # Note: aE, thetaE, aI and theta_I are in the dictionray 'pars' # raise NotImplementedError('student exercise: compute F-I curves of excitatory and inhibitory populations') ################################################################### # Compute the F-I curve of the excitatory population FI_exc = ... # Compute the F-I curve of the inhibitory population FI_inh = ... # Visualize plot_FI_EI(x, FI_exc, FI_inh) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_043dd600.py)*Example output:* Section 1.2: Simulation scheme for the Wilson-Cowan model*Estimated timing to here from start of tutorial: 20 min*Once again, we can integrate our equations numerically. Using the Euler method, the dynamics of E and I populations can be simulated on a time-grid of stepsize $\Delta t$. The updates for the activity of the excitatory and the inhibitory populations can be written as:\begin{align}r_E[k+1] &= r_E[k] + \Delta r_E[k]\\r_I[k+1] &= r_I[k] + \Delta r_I[k] \end{align}with the increments\begin{align}\Delta r_E[k] &= \frac{\Delta t}{\tau_E}[-r_E[k] + F_E(w_{EE}r_E[k] -w_{EI}r_I[k] + I^{\text{ext}}_E[k];a_E,\theta_E)]\\\Delta r_I[k] &= \frac{\Delta t}{\tau_I}[-r_I[k] + F_I(w_{IE}r_E[k] -w_{II}r_I[k] + I^{\text{ext}}_I[k];a_I,\theta_I)] \end{align} Coding Exercise 1.2: Numerically integrate the Wilson-Cowan equationsWe will implemenent this numerical simulation of our equations and visualize two simulations with similar initial points. ###Code def simulate_wc(tau_E, a_E, theta_E, tau_I, a_I, theta_I, wEE, wEI, wIE, wII, I_ext_E, I_ext_I, rE_init, rI_init, dt, range_t, **other_pars): """ Simulate the Wilson-Cowan equations Args: Parameters of the Wilson-Cowan model Returns: rE, rI (arrays) : Activity of excitatory and inhibitory populations """ # Initialize activity arrays Lt = range_t.size rE = np.append(rE_init, np.zeros(Lt - 1)) rI = np.append(rI_init, np.zeros(Lt - 1)) I_ext_E = I_ext_E * np.ones(Lt) I_ext_I = I_ext_I * np.ones(Lt) # Simulate the Wilson-Cowan equations for k in range(Lt - 1): ######################################################################## # TODO for students: compute drE and drI and remove the error raise NotImplementedError("Student exercise: compute the change in E/I") ######################################################################## # Calculate the derivative of the E population drE = ... # Calculate the derivative of the I population drI = ... 
# Update using Euler's method rE[k + 1] = rE[k] + drE rI[k + 1] = rI[k] + drI return rE, rI pars = default_pars() # Simulate first trajectory rE1, rI1 = simulate_wc(**default_pars(rE_init=.32, rI_init=.15)) # Simulate second trajectory rE2, rI2 = simulate_wc(**default_pars(rE_init=.33, rI_init=.15)) # Visualize my_test_plot(pars['range_t'], rE1, rI1, rE2, rI2) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_15eff812.py)*Example output:* The two plots above show the temporal evolution of excitatory ($r_E$, blue) and inhibitory ($r_I$, red) activity for two different sets of initial conditions. Interactive Demo 1.2: population trajectories with different initial valuesIn this interactive demo, we will simulate the Wilson-Cowan model and plot the trajectories of each population. We change the initial activity of the excitatory population.What happens to the E and I population trajectories with different initial conditions? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def plot_EI_diffinitial(rE_init=0.0): pars = default_pars(rE_init=rE_init, rI_init=.15) rE, rI = simulate_wc(**pars) plt.figure() plt.plot(pars['range_t'], rE, 'b', label='E population') plt.plot(pars['range_t'], rI, 'r', label='I population') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() _ = widgets.interact(plot_EI_diffinitial, rE_init=(0.30, 0.35, .01)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_50331264.py) Think! 1.2It is evident that the steady states of the neuronal response can be different when different initial states are chosen. Why is that? We will discuss this in the next section but try to think about it first. --- Section 2: Phase plane analysis*Estimated timing to here from start of tutorial: 45 min*Just like we used a graphical method to study the dynamics of a 1-D system in the previous tutorial, here we will learn a graphical approach called **phase plane analysis** to study the dynamics of a 2-D system like the Wilson-Cowan model. You have seen this before in the [pre-reqs calculus day](https://compneuro.neuromatch.io/tutorials/W0D4_Calculus/student/W0D4_Tutorial3.htmlsection-3-2-phase-plane-plot-and-nullcline) and on the [Linear Systems day](https://compneuro.neuromatch.io/tutorials/W2D2_LinearSystems/student/W2D2_Tutorial1.htmlsection-4-stream-plots)So far, we have plotted the activities of the two populations as a function of time, i.e., in the `Activity-t` plane, either the $(t, r_E(t))$ plane or the $(t, r_I(t))$ one. Instead, we can plot the two activities $r_E(t)$ and $r_I(t)$ against each other at any time point $t$. This characterization in the `rI-rE` plane $(r_I(t), r_E(t))$ is called the **phase plane**. Each line in the phase plane indicates how both $r_E$ and $r_I$ evolve with time. 
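Before the demo below, here is a minimal sketch (not one of the original exercises) of how such a phase-plane view can be drawn once `simulate_wc` from Coding Exercise 1.2 has been completed: simply plot $r_E$ against $r_I$ instead of against time. ###Code
# Minimal phase-plane sketch (assumes simulate_wc has been filled in above)
pars = default_pars(rE_init=.32, rI_init=.15)
rE, rI = simulate_wc(**pars)

plt.figure()
plt.plot(rE, rI, 'k')                      # trajectory in the (rE, rI) plane
plt.plot(rE[0], rI[0], 'go', label='start')
plt.plot(rE[-1], rI[-1], 'ko', label='end')
plt.xlabel(r'$r_E$')
plt.ylabel(r'$r_I$')
plt.legend(loc='best')
plt.show()
###Output _____no_output_____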
###Code # @title Video 2: Nullclines and Vector Fields from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV15k4y1m7Kt", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="V2SBAK2Xf8Y", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Interactive Demo 2: From the Activity - time plane to the **$r_I$ - $r_E$** phase planeIn this demo, we will visualize the system dynamics using both the `Activity-time` and the `(rE, rI)` phase plane. The circles indicate the activities at a given time $t$, while the lines represent the evolution of the system for the entire duration of the simulation.Move the time slider to better understand how the top plot relates to the bottom plot. Does the bottom plot have explicit information about time? What information does it give us? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! pars = default_pars(T=10, rE_init=0.6, rI_init=0.8) rE, rI = simulate_wc(**pars) def plot_activity_phase(n_t): plt.figure(figsize=(8, 5.5)) plt.subplot(211) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.plot(pars['range_t'][n_t], rE[n_t], 'bo') plt.plot(pars['range_t'][n_t], rI[n_t], 'ro') plt.axvline(pars['range_t'][n_t], 0, 1, color='k', ls='--') plt.xlabel('t (ms)', fontsize=14) plt.ylabel('Activity', fontsize=14) plt.legend(loc='best', fontsize=14) plt.subplot(212) plt.plot(rE, rI, 'k') plt.plot(rE[n_t], rI[n_t], 'ko') plt.xlabel(r'$r_E$', fontsize=18, color='b') plt.ylabel(r'$r_I$', fontsize=18, color='r') plt.tight_layout() plt.show() _ = widgets.interact(plot_activity_phase, n_t=(0, len(pars['range_t']) - 1, 1)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_5d1fcb72.py) Section 2.1: Nullclines of the Wilson-Cowan Equations*Estimated timing to here from start of tutorial: 1 hour, 3 min*An important concept in the phase plane analysis is the "nullcline" which is defined as the set of points in the phase plane where the activity of one population (but not necessarily the other) does not change.In other words, the $E$ and $I$ nullclines of Equation $(1)$ are defined as the points where $\displaystyle{\frac{dr_E}{dt}}=0$, for the excitatory nullcline, or $\displaystyle\frac{dr_I}{dt}=0$ for the inhibitory nullcline. That is:\begin{align}-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E) &= 0 \qquad (2)\\[1mm]-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) &= 0 \qquad (3)\end{align} Coding Exercise 2.1: Compute the nullclines of the Wilson-Cowan modelIn the next exercise, we will compute and plot the nullclines of the E and I population. 
Along the nullcline of excitatory population Equation $2$, you can calculate the inhibitory activity by rewriting Equation $2$ into\begin{align}r_I = \frac{1}{w_{EI}}\big{[}w_{EE}r_E - F_E^{-1}(r_E; a_E,\theta_E) + I^{\text{ext}}_E \big{]}. \qquad(4)\end{align}where $F_E^{-1}(r_E; a_E,\theta_E)$ is the inverse of the excitatory transfer function (defined below). Equation $4$ defines the $r_E$ nullcline. Along the nullcline of inhibitory population Equation $3$, you can calculate the excitatory activity by rewriting Equation $3$ into \begin{align}r_E = \frac{1}{w_{IE}} \big{[} w_{II}r_I + F_I^{-1}(r_I;a_I,\theta_I) - I^{\text{ext}}_I \big{]}. \qquad (5) \end{align}shere $F_I^{-1}(r_I; a_I,\theta_I)$ is the inverse of the inhibitory transfer function (defined below). Equation $5$ defines the $I$ nullcline. Note that, when computing the nullclines with Equations 4-5, we also need to calculate the inverse of the transfer functions. \\The inverse of the sigmoid shaped **f-I** function that we have been using is:$$F^{-1}(x; a, \theta) = -\frac{1}{a} \ln \left[ \frac{1}{x + \displaystyle \frac{1}{1+\text{e}^{a\theta}}} - 1 \right] + \theta \qquad (6)$$The first step is to implement the inverse transfer function: ###Code def F_inv(x, a, theta): """ Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: F_inverse : value of the inverse function """ ######################################################################### # TODO for students: compute F_inverse raise NotImplementedError("Student exercise: compute the inverse of F(x)") ######################################################################### # Calculate Finverse (ln(x) can be calculated as np.log(x)) F_inverse = ... return F_inverse # Set parameters pars = default_pars() x = np.linspace(1e-6, 1, 100) # Get inverse and visualize plot_FI_inverse(x, a=1, theta=3) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_f3500f59.py)*Example output:* Now you can compute the nullclines, using Equations 4-5 (repeated here for ease of access):\begin{align}r_I = \frac{1}{w_{EI}}\big{[}w_{EE}r_E - F_E^{-1}(r_E; a_E,\theta_E) + I^{\text{ext}}_E \big{]}. \qquad(4)\end{align}\begin{align}r_E = \frac{1}{w_{IE}} \big{[} w_{II}r_I + F_I^{-1}(r_I;a_I,\theta_I) - I^{\text{ext}}_I \big{]}. \qquad (5) \end{align} ###Code def get_E_nullcline(rE, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars): """ Solve for rI along the rE from drE/dt = 0. Args: rE : response of excitatory population a_E, theta_E, wEE, wEI, I_ext_E : Wilson-Cowan excitatory parameters Other parameters are ignored Returns: rI : values of inhibitory population along the nullcline on the rE """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the E nullcline") ######################################################################### # calculate rI for E nullclines on rI rI = ... return rI def get_I_nullcline(rI, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """ Solve for E along the rI from dI/dt = 0. 
Args: rI : response of inhibitory population a_I, theta_I, wIE, wII, I_ext_I : Wilson-Cowan inhibitory parameters Other parameters are ignored Returns: rE : values of the excitatory population along the nullcline on the rI """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the I nullcline") ######################################################################### # calculate rE for I nullclines on rI rE = ... return rE # Set parameters pars = default_pars() Exc_null_rE = np.linspace(-0.01, 0.96, 100) Inh_null_rI = np.linspace(-.01, 0.8, 100) # Compute nullclines Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) # Visualize plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_db10856b.py)*Example output:* Note that by definition along the blue line in the phase-plane spanned by $r_E, r_I$, $\displaystyle{\frac{dr_E(t)}{dt}} = 0$, therefore, it is called a nullcline. That is, the blue nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_E(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_E(t)}{dt}} < 0$.The same is true for the red line along which $\displaystyle{\frac{dr_I(t)}{dt}} = 0$. That is, the red nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_I(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_I(t)}{dt}} < 0$. Section 2.2: Vector field*Estimated timing to here from start of tutorial: 1 hour, 20 min*How can the phase plane and the nullcline curves help us understand the behavior of the Wilson-Cowan model? Click here for text recap of relevant part of video The activities of the $E$ and $I$ populations $r_E(t)$ and $r_I(t)$ at each time point $t$ correspond to a single point in the phase plane, with coordinates $(r_E(t),r_I(t))$. Therefore, the time-dependent trajectory of the system can be described as a continuous curve in the phase plane, and the tangent vector to the trajectory, which is defined as the vector $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$, indicates the direction towards which the activity is evolving and how fast is the activity changing along each axis. In fact, for each point $(E,I)$ in the phase plane, we can compute the tangent vector $\bigg{(}\displaystyle{\frac{dr_E}{dt},\frac{dr_I}{dt}}\bigg{)}$, which indicates the behavior of the system when it traverses that point. The map of tangent vectors in the phase plane is called **vector field**. The behavior of any trajectory in the phase plane is determined by i) the initial conditions $(r_E(0),r_I(0))$, and ii) the vector field $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$.In general, the value of the vector field at a particular point in the phase plane is represented by an arrow. The orientation and the size of the arrow reflect the direction and the norm of the vector, respectively. 
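As a quick illustration of what such a plot looks like, the cell below is a standalone sketch on a hand-picked toy system, $dx/dt = -y$, $dy/dt = x$ (not the Wilson-Cowan field you will compute in the next exercise): the derivatives are evaluated on a grid and drawn with `plt.quiver`. ###Code
# Toy vector field: dx/dt = -y, dy/dt = x (not the Wilson-Cowan derivatives)
grid = np.linspace(-1., 1., 15)
X, Y = np.meshgrid(grid, grid)
dXdt = -Y   # derivative of x at each grid point
dYdt = X    # derivative of y at each grid point

plt.figure()
plt.quiver(X, Y, dXdt, dYdt, angles='xy', facecolor='c')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
###Output _____no_output_____ ###Markdown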
Coding Exercise 2.2: Compute and plot the vector field $\displaystyle{\Big{(}\frac{dr_E}{dt}, \frac{dr_I}{dt} \Big{)}}$Note that\begin{align}\frac{dr_E}{dt} &= [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)]\frac{1}{\tau_E}\\\frac{dr_I}{dt} &= [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I)]\frac{1}{\tau_I} \end{align} ###Code def EIderivs(rE, rI, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Time derivatives for E/I variables (dE/dt, dI/dt).""" ###################################################################### # TODO for students: compute drEdt and drIdt and disable the error raise NotImplementedError("Student exercise: compute the vector field") ###################################################################### # Compute the derivative of rE drEdt = ... # Compute the derivative of rI drIdt = ... return drEdt, drIdt # Create vector field using EIderivs plot_complete_analysis(default_pars()) ###Output _____no_output_____ ###Markdown Tutorial 2: Wilson-Cowan Model**Week 2, Day 4: Dynamic Networks****By Neuromatch Academy**__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom --- Tutorial ObjectivesIn the previous tutorial, you became familiar with a neuronal network consisting of only an excitatory population. Here, we extend the approach we used to include both excitatory and inhibitory neuronal populations in our network. A simple, yet powerful model to study the dynamics of two interacting populations of excitatory and inhibitory neurons, is the so-called **Wilson-Cowan** rate model, which will be the subject of this tutorial.The objectives of this tutorial are to:- Write the **Wilson-Cowan** equations for the firing rate dynamics of a 2D system composed of an excitatory (E) and an inhibitory (I) population of neurons- Simulate the dynamics of the system, i.e., Wilson-Cowan model.- Plot the frequency-current (F-I) curves for both populations (i.e., E and I).- Visualize and inspect the behavior of the system using **phase plane analysis**, **vector fields**, and **nullclines**.Bonus steps:- Find and plot the **fixed points** of the Wilson-Cowan model.- Investigate the stability of the Wilson-Cowan model by linearizing its dynamics and examining the **Jacobian matrix**.- Learn how the Wilson-Cowan model can reach an oscillatory state.Bonus steps (applications):- Visualize the behavior of an Inhibition-stabilized network.- Simulate working memory using the Wilson-Cowan model.\\Reference paper:_[Wilson H and Cowan J (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal 12](https://doi.org/10.1016/S0006-3495(72)86068-5)_ --- Setup ###Code # Imports import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt # root-finding algorithm # @title Figure Settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # @title Helper functions def default_pars(**kwargs): pars = {} # Excitatory parameters pars['tau_E'] = 1. 
# Timescale of the E population [ms] pars['a_E'] = 1.2 # Gain of the E population pars['theta_E'] = 2.8 # Threshold of the E population # Inhibitory parameters pars['tau_I'] = 2.0 # Timescale of the I population [ms] pars['a_I'] = 1.0 # Gain of the I population pars['theta_I'] = 4.0 # Threshold of the I population # Connection strength pars['wEE'] = 9. # E to E pars['wEI'] = 4. # I to E pars['wIE'] = 13. # E to I pars['wII'] = 11. # I to I # External input pars['I_ext_E'] = 0. pars['I_ext_I'] = 0. # simulation parameters pars['T'] = 50. # Total duration of simulation [ms] pars['dt'] = .1 # Simulation time step [ms] pars['rE_init'] = 0.2 # Initial value of E pars['rI_init'] = 0.2 # Initial value of I # External parameters if any for k in kwargs: pars[k] = kwargs[k] # Vector of discretized time points [ms] pars['range_t'] = np.arange(0, pars['T'], pars['dt']) return pars def F(x, a, theta): """ Population activation function, F-I curve Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: f : the population activation response f(x) for input x """ # add the expression of f = F(x) f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1 return f def dF(x, a, theta): """ Derivative of the population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: dFdx : Derivative of the population activation function. """ dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2 return dFdx def plot_FI_inverse(x, a, theta): f, ax = plt.subplots() ax.plot(x, F_inv(x, a=a, theta=theta)) ax.set(xlabel="$x$", ylabel="$F^{-1}(x)$") def plot_FI_EI(x, FI_exc, FI_inh): plt.figure() plt.plot(x, FI_exc, 'b', label='E population') plt.plot(x, FI_inh, 'r', label='I population') plt.legend(loc='lower right') plt.xlabel('x (a.u.)') plt.ylabel('F(x)') plt.show() def my_test_plot(t, rE1, rI1, rE2, rI2): plt.figure() ax1 = plt.subplot(211) ax1.plot(pars['range_t'], rE1, 'b', label='E population') ax1.plot(pars['range_t'], rI1, 'r', label='I population') ax1.set_ylabel('Activity') ax1.legend(loc='best') ax2 = plt.subplot(212, sharex=ax1, sharey=ax1) ax2.plot(pars['range_t'], rE2, 'b', label='E population') ax2.plot(pars['range_t'], rI2, 'r', label='I population') ax2.set_xlabel('t (ms)') ax2.set_ylabel('Activity') ax2.legend(loc='best') plt.tight_layout() plt.show() def plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI): plt.figure() plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.show() def my_plot_nullcline(pars): Exc_null_rE = np.linspace(-0.01, 0.96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, 0.8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') def my_plot_vector(pars, my_n_skip=2, myscale=5): EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = my_n_skip plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=myscale, facecolor='c') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectory(pars, mycolor, 
x_init, mylabel): pars = pars.copy() pars['rE_init'], pars['rI_init'] = x_init[0], x_init[1] rE_tj, rI_tj = simulate_wc(**pars) plt.plot(rE_tj, rI_tj, color=mycolor, label=mylabel) plt.plot(x_init[0], x_init[1], 'o', color=mycolor, ms=8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectories(pars, dx, n, mylabel): """ Solve for I along the E_grid from dE/dt = 0. Expects: pars : Parameter dictionary dx : increment of initial values n : n*n trjectories mylabel : label for legend Returns: figure of trajectory """ pars = pars.copy() for ie in range(n): for ii in range(n): pars['rE_init'], pars['rI_init'] = dx * ie, dx * ii rE_tj, rI_tj = simulate_wc(**pars) if (ie == n-1) & (ii == n-1): plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8, label=mylabel) else: plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def plot_complete_analysis(pars): plt.figure(figsize=(7.7, 6.)) # plot example trajectories my_plot_trajectories(pars, 0.2, 6, 'Sample trajectories \nfor different init. conditions') my_plot_trajectory(pars, 'orange', [0.6, 0.8], 'Sample trajectory for \nlow activity') my_plot_trajectory(pars, 'm', [0.6, 0.6], 'Sample trajectory for \nhigh activity') # plot nullclines my_plot_nullcline(pars) # plot vector field EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = 2 plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=5., facecolor='c') plt.legend(loc=[1.02, 0.57], handlelength=1) plt.show() def plot_fp(x_fp, position=(0.02, 0.1), rotation=0): plt.plot(x_fp[0], x_fp[1], 'ko', ms=8) plt.text(x_fp[0] + position[0], x_fp[1] + position[1], f'Fixed Point1=\n({x_fp[0]:.3f}, {x_fp[1]:.3f})', horizontalalignment='center', verticalalignment='bottom', rotation=rotation) ###Output _____no_output_____ ###Markdown The helper functions included:- Parameter dictionary: `default_pars(**kwargs)`. You can use: - `pars = default_pars()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars(T=T_sim, dt=time_step)` to set a different simulation time and time step - After `pars = default_pars()`, use `par['New_para'] = value` to add a new parameter with its value - Pass to functions that accept individual parameters with `func(**pars)`- F-I curve: `F(x, a, theta)`- Derivative of the F-I curve: `dF(x, a, theta)`- Plotting utilities --- Section 1: Wilson-Cowan model of excitatory and inhibitory populations ###Code # @title Video 1: Phase analysis of the Wilson-Cowan E-I model from IPython.display import YouTubeVideo video = YouTubeVideo(id="GCpQmh45crM", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ###Output _____no_output_____ ###Markdown Section 1.1: Mathematical description of the WC modelMany of the rich dynamics recorded in the brain are generated by the interaction of excitatory and inhibitory subtype neurons. Here, similar to what we did in the previous tutorial, we will model two coupled populations of E and I neurons (**Wilson-Cowan** model). 
We can write two coupled differential equations, each representing the dynamics of the excitatory or inhibitory population:\begin{align}\tau_E \frac{dr_E}{dt} &= -r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)\\\tau_I \frac{dr_I}{dt} &= -r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) \qquad (1)\end{align}$r_E(t)$ represents the average activation (or firing rate) of the excitatory population at time $t$, and $r_I(t)$ the activation (or firing rate) of the inhibitory population. The parameters $\tau_E$ and $\tau_I$ control the timescales of the dynamics of each population. Connection strengths are given by: $w_{EE}$ (E $\rightarrow$ E), $w_{EI}$ (I $\rightarrow$ E), $w_{IE}$ (E $\rightarrow$ I), and $w_{II}$ (I $\rightarrow$ I). The terms $w_{EI}$ and $w_{IE}$ represent connections from inhibitory to excitatory population and vice versa, respectively. The transfer functions (or F-I curves) $F_E(x;a_E,\theta_E)$ and $F_I(x;a_I,\theta_I)$ can be different for the excitatory and the inhibitory populations. Exercise 1: Plot out the F-I curves for the E and I populations Let's first plot out the F-I curves for the E and I populations using the function defined above with default parameter values. ###Code pars = default_pars() x = np.arange(0, 10, .1) print(pars['a_E'], pars['theta_E']) print(pars['a_I'], pars['theta_I']) ################################################################### # TODO for students: compute and plot the F-I curve here # # Note: aE, thetaE, aI and theta_I are in the dictionray 'pars' # ################################################################### # Compute the F-I curve of the excitatory population FI_exc = ... # Compute the F-I curve of the inhibitory population FI_inh = ... # Uncomment when you fill the (...) # plot_FI_EI(x, FI_exc, FI_inh) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_b3a0ec15.py)*Example output:* Section 1.2: Simulation scheme for the Wilson-Cowan modelEquation $1$ can be integrated numerically. Using the Euler method, the dynamics of E and I populations can be simulated on a time-grid of stepsize $\Delta t$. 
The updates for the activity of the excitatory and the inhibitory populations can be written as:\begin{align}r_E[k+1] &= r_E[k] + \Delta r_E[k]\\r_I[k+1] &= r_I[k] + \Delta r_I[k] \end{align}with the increments\begin{align}\Delta r_E[k] &= \frac{\Delta t}{\tau_E}[-r_E[k] + F_E(w_{EE}r_E[k] -w_{EI}r_I[k] + I^{\text{ext}}_E[k];a_E,\theta_E)]\\\Delta r_I[k] &= \frac{\Delta t}{\tau_I}[-r_I[k] + F_I(w_{IE}r_E[k] -w_{II}r_I[k] + I^{\text{ext}}_I[k];a_I,\theta_I)] \end{align} Exercise 2: Numerically integrate the Wilson-Cowan equations ###Code def simulate_wc(tau_E, a_E, theta_E, tau_I, a_I, theta_I, wEE, wEI, wIE, wII, I_ext_E, I_ext_I, rE_init, rI_init, dt, range_t, **other_pars): """ Simulate the Wilson-Cowan equations Args: Parameters of the Wilson-Cowan model Returns: rE, rI (arrays) : Activity of excitatory and inhibitory populations """ # Initialize activity arrays Lt = range_t.size rE = np.append(rE_init, np.zeros(Lt - 1)) rI = np.append(rI_init, np.zeros(Lt - 1)) I_ext_E = I_ext_E * np.ones(Lt) I_ext_I = I_ext_I * np.ones(Lt) # Simulate the Wilson-Cowan equations for k in range(Lt - 1): ######################################################################## # TODO for students: compute drE and drI and remove the error raise NotImplementedError("Student exercise: compute the change in E/I") ######################################################################## # Calculate the derivative of the E population drE = ... # Calculate the derivative of the I population drI = ... # Update using Euler's method rE[k + 1] = rE[k] + drE rI[k + 1] = rI[k] + drI return rE, rI pars = default_pars() # Here are two trajectories with close intial values # Uncomment these lines to test your function # rE1, rI1 = simulate_wc(**default_pars(rE_init=.32, rI_init=.15)) # rE2, rI2 = simulate_wc(**default_pars(rE_init=.33, rI_init=.15)) # my_test_plot(pars['range_t'], rE1, rI1, rE2, rI2) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_af0bd722.py)*Example output:* The two plots above show the temporal evolution of excitatory ($r_E$, blue) and inhibitory ($r_I$, red) activity for two different sets of initial conditions. Interactive Demo: population trajectories with different initial valuesIn this interactive demo, we will simulate the Wilson-Cowan model and plot the trajectories of each population. What happens to the E and I population trajectories with different initial conditions? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def plot_EI_diffinitial(rE_init=0.0): pars = default_pars(rE_init=rE_init, rI_init=.15) rE, rI = simulate_wc(**pars) plt.figure() plt.plot(pars['range_t'], rE, 'b', label='E population') plt.plot(pars['range_t'], rI, 'r', label='I population') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() _ = widgets.interact(plot_EI_diffinitial, rE_init=(0.30, 0.35, .01)) ###Output _____no_output_____ ###Markdown Think!It is evident that the steady states of the neuronal response can be different when different initial states are chosen. Why is that? We will discuss this in the next section but try to think about it first. 
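One quick way to probe this numerically (a small sketch, again assuming `simulate_wc` from the exercise above has been completed) is to compare the final activity reached from the two nearby initial conditions used earlier: ###Code
# Compare the steady states reached from two nearby initial conditions
# (requires the completed simulate_wc from the exercise above)
rE_a, rI_a = simulate_wc(**default_pars(rE_init=.32, rI_init=.15))
rE_b, rI_b = simulate_wc(**default_pars(rE_init=.33, rI_init=.15))
print('final (rE, rI) starting from rE_init=0.32:', rE_a[-1], rI_a[-1])
print('final (rE, rI) starting from rE_init=0.33:', rE_b[-1], rI_b[-1])
###Output _____no_output_____ ###Markdown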
--- Section 2: Phase plane analysisJust like we used a graphical method to study the dynamics of a 1-D system in the previous tutorial, here we will learn a graphical approach called **phase plane analysis** to study the dynamics of a 2-D system like the Wilson-Cowan model. So far, we have plotted the activities of the two populations as a function of time, i.e., in the `Activity-t` plane, either the $(t, r_E(t))$ plane or the $(t, r_I(t))$ one. Instead, we can plot the two activities $r_E(t)$ and $r_I(t)$ against each other at any time point $t$. This characterization in the `rI-rE` plane $(r_I(t), r_E(t))$ is called the **phase plane**. Each line in the phase plane indicates how both $r_E$ and $r_I$ evolve with time. ###Code # @title Video 2: Nullclines and Vector Fields from IPython.display import YouTubeVideo video = YouTubeVideo(id="V2SBAK2Xf8Y", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ###Output _____no_output_____ ###Markdown Interactive Demo: From the Activity - time plane to the **$r_I$ - $r_E$** phase planeIn this demo, we will visualize the system dynamics using both the `Activity-time` and the `(rE, rI)` phase plane. The circles indicate the activities at a given time $t$, while the lines represent the evolution of the system for the entire duration of the simulation.Move the time slider to better understand how the top plot relates to the bottom plot. Does the bottom plot have explicit information about time? What information does it give us? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! pars = default_pars(T=10, rE_init=0.6, rI_init=0.8) rE, rI = simulate_wc(**pars) def plot_activity_phase(n_t): plt.figure(figsize=(8, 5.5)) plt.subplot(211) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.plot(pars['range_t'][n_t], rE[n_t], 'bo') plt.plot(pars['range_t'][n_t], rI[n_t], 'ro') plt.axvline(pars['range_t'][n_t], 0, 1, color='k', ls='--') plt.xlabel('t (ms)', fontsize=14) plt.ylabel('Activity', fontsize=14) plt.legend(loc='best', fontsize=14) plt.subplot(212) plt.plot(rE, rI, 'k') plt.plot(rE[n_t], rI[n_t], 'ko') plt.xlabel(r'$r_E$', fontsize=18, color='b') plt.ylabel(r'$r_I$', fontsize=18, color='r') plt.tight_layout() plt.show() _ = widgets.interact(plot_activity_phase, n_t=(0, len(pars['range_t']) - 1, 1)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_222c9db1.py) Section 2.1: Nullclines of the Wilson-Cowan EquationsAn important concept in the phase plane analysis is the "nullcline" which is defined as the set of points in the phase plane where the activity of one population (but not necessarily the other) does not change.In other words, the $E$ and $I$ nullclines of Equation $(1)$ are defined as the points where $\displaystyle{\frac{dr_E}{dt}}=0$, for the excitatory nullcline, or $\displaystyle\frac{dr_I}{dt}=0$ for the inhibitory nullcline. That is:\begin{align}-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E) &= 0 \qquad (2)\\[1mm]-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) &= 0 \qquad (3)\end{align} Exercise 3: Compute the nullclines of the Wilson-Cowan modelIn the next exercise, we will compute and plot the nullclines of the E and I population. 
Along the nullcline of excitatory population Equation $2$, you can calculate the inhibitory activity by rewriting Equation $2$ into\begin{align}r_I = \frac{1}{w_{EI}}\big{[}w_{EE}r_E - F_E^{-1}(r_E; a_E,\theta_E) + I^{\text{ext}}_E \big{]}. \qquad(4)\end{align}where $F_E^{-1}(r_E; a_E,\theta_E)$ is the inverse of the excitatory transfer function (defined below). Equation $4$ defines the $r_E$ nullcline. Along the nullcline of inhibitory population Equation $3$, you can calculate the excitatory activity by rewriting Equation $3$ into \begin{align}r_E = \frac{1}{w_{IE}} \big{[} w_{II}r_I + F_I^{-1}(r_I;a_I,\theta_I) - I^{\text{ext}}_I \big{]}. \qquad (5) \end{align}shere $F_I^{-1}(r_I; a_I,\theta_I)$ is the inverse of the inhibitory transfer function (defined below). Equation $5$ defines the $I$ nullcline. Note that, when computing the nullclines with Equations 4-5, we also need to calculate the inverse of the transfer functions. \\The inverse of the sigmoid shaped **f-I** function that we have been using is:$$F^{-1}(x; a, \theta) = -\frac{1}{a} \ln \left[ \frac{1}{x + \displaystyle \frac{1}{1+\text{e}^{a\theta}}} - 1 \right] + \theta \qquad (6)$$The first step is to implement the inverse transfer function: ###Code def F_inv(x, a, theta): """ Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: F_inverse : value of the inverse function """ ######################################################################### # TODO for students: compute F_inverse raise NotImplementedError("Student exercise: compute the inverse of F(x)") ######################################################################### # Calculate Finverse (ln(x) can be calculated as np.log(x)) F_inverse = ... return F_inverse pars = default_pars() x = np.linspace(1e-6, 1, 100) # Uncomment the next line to test your function # plot_FI_inverse(x, a=1, theta=3) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_937a4040.py)*Example output:* Now you can compute the nullclines, using Equations 4-5: ###Code def get_E_nullcline(rE, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars): """ Solve for rI along the rE from drE/dt = 0. Args: rE : response of excitatory population a_E, theta_E, wEE, wEI, I_ext_E : Wilson-Cowan excitatory parameters Other parameters are ignored Returns: rI : values of inhibitory population along the nullcline on the rE """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the E nullcline") ######################################################################### # calculate rI for E nullclines on rI rI = ... return rI def get_I_nullcline(rI, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """ Solve for E along the rI from dI/dt = 0. 
Args: rI : response of inhibitory population a_I, theta_I, wIE, wII, I_ext_I : Wilson-Cowan inhibitory parameters Other parameters are ignored Returns: rE : values of the excitatory population along the nullcline on the rI """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the I nullcline") ######################################################################### # calculate rE for I nullclines on rI rE = ... return rE pars = default_pars() Exc_null_rE = np.linspace(-0.01, 0.96, 100) Inh_null_rI = np.linspace(-.01, 0.8, 100) # Uncomment these lines to test your functions # Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) # Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) # plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_2366ea57.py)*Example output:* Note that by definition along the blue line in the phase-plane spanned by $r_E, r_I$, $\displaystyle{\frac{dr_E(t)}{dt}} = 0$, therefore, it is called a nullcline. That is, the blue nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_E(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_E(t)}{dt}} < 0$.The same is true for the red line along which $\displaystyle{\frac{dr_I(t)}{dt}} = 0$. That is, the red nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_I(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_I(t)}{dt}} < 0$. Section 2.2: Vector fieldHow can the phase plane and the nullcline curves help us understand the behavior of the Wilson-Cowan model? The activities of the $E$ and $I$ populations $r_E(t)$ and $r_I(t)$ at each time point $t$ correspond to a single point in the phase plane, with coordinates $(r_E(t),r_I(t))$. Therefore, the time-dependent trajectory of the system can be described as a continuous curve in the phase plane, and the tangent vector to the trajectory, which is defined as the vector $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$, indicates the direction towards which the activity is evolving and how fast is the activity changing along each axis. In fact, for each point $(E,I)$ in the phase plane, we can compute the tangent vector $\bigg{(}\displaystyle{\frac{dr_E}{dt},\frac{dr_I}{dt}}\bigg{)}$, which indicates the behavior of the system when it traverses that point. The map of tangent vectors in the phase plane is called **vector field**. The behavior of any trajectory in the phase plane is determined by i) the initial conditions $(r_E(0),r_I(0))$, and ii) the vector field $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$.In general, the value of the vector field at a particular point in the phase plane is represented by an arrow. The orientation and the size of the arrow reflect the direction and the norm of the vector, respectively. 
Exercise 4: Compute and plot the vector field $\displaystyle{\Big{(}\frac{dr_E}{dt}, \frac{dr_I}{dt} \Big{)}}$Note that\begin{align}\frac{dr_E}{dt} &= [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)]\frac{1}{\tau_E}\\\frac{dr_I}{dt} &= [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I)]\frac{1}{\tau_I} \end{align} ###Code def EIderivs(rE, rI, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Time derivatives for E/I variables (dE/dt, dI/dt).""" ###################################################################### # TODO for students: compute drEdt and drIdt and disable the error raise NotImplementedError("Student exercise: compute the vector field") ###################################################################### # Compute the derivative of rE drEdt = ... # Compute the derivative of rI drIdt = ... return drEdt, drIdt # Uncomment below to test your function # plot_complete_analysis(default_pars()) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_5a629797.py)*Example output:* The last phase plane plot shows us that: - Trajectories seem to follow the direction of the vector field- Different trajectories eventually always reach one of two points depending on the initial conditions. - The two points where the trajectories converge are the intersection of the two nullcline curves. Think! There are, in total, three intersection points, meaning that the system has three fixed points.- One of the fixed points (the one in the middle) is never the final state of a trajectory. Why is that? - Why the arrows tend to get smaller as they approach the fixed points? [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_3d37729b.py) --- SummaryCongratulations! You have finished the second day of the last week of the neuromatch academy! Here, you learned how to simulate a rate based model consisting of excitatory and inhibitory population of neurons.In the last tutorial on dynamical neuronal networks you learned to:- Implement and simulate a 2D system composed of an E and an I population of neurons using the **Wilson-Cowan** model- Plot the frequency-current (F-I) curves for both populations- Examine the behavior of the system using phase **plane analysis**, **vector fields**, and **nullclines**.Do you have more time? Have you finished early? 
We have more fun material for you!Below are some, more advanced concepts on dynamical systems:- You will learn how to find the fixed points on such a system, and to investigate its stability by linearizing its dynamics and examining the **Jacobian matrix**.- You will see identify conditions under which the Wilson-Cowan model can exhibit oscillations.If you need even more, there are two applications of the Wilson-Cowan model:- Visualization of an Inhibition-stabilized network- Simulation of working memory --- Bonus 1: Fixed points, stability analysis, and limit cycles in the Wilson-Cowan model ###Code # @title Video 3: Fixed points and their stability from IPython.display import YouTubeVideo video = YouTubeVideo(id="jIx26iQ69ps", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ###Output _____no_output_____ ###Markdown Fixed Points of the E/I systemClearly, the intersection points of the two nullcline curves are the fixed points of the Wilson-Cowan model in Equation $(1)$. In the next exercise, we will find the coordinate of all fixed points for a given set of parameters.We'll make use of two functions, similar to ones we saw in the previous tutorial, which use a root-finding algorithm to find the fixed points of the system with Excitatory and Inhibitory populations. ###Code # @markdown *Execute the cell to define `my_fp` and `check_fp`* def my_fp(pars, rE_init, rI_init): """ Use opt.root function to solve Equations (2)-(3) from initial values """ tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E'] tau_I, a_I, theta_I = pars['tau_I'], pars['a_I'], pars['theta_I'] wEE, wEI = pars['wEE'], pars['wEI'] wIE, wII = pars['wIE'], pars['wII'] I_ext_E, I_ext_I = pars['I_ext_E'], pars['I_ext_I'] # define the right hand of wilson-cowan equations def my_WCr(x): rE, rI = x drEdt = (-rE + F(wEE * rE - wEI * rI + I_ext_E, a_E, theta_E)) / tau_E drIdt = (-rI + F(wIE * rE - wII * rI + I_ext_I, a_I, theta_I)) / tau_I y = np.array([drEdt, drIdt]) return y x0 = np.array([rE_init, rI_init]) x_fp = opt.root(my_WCr, x0).x return x_fp def check_fp(pars, x_fp, mytol=1e-6): """ Verify (drE/dt)^2 + (drI/dt)^2< mytol Args: pars : Parameter dictionary fp : value of fixed point mytol : tolerance, default as 10^{-6} Returns : Whether it is a correct fixed point: True/False """ drEdt, drIdt = EIderivs(x_fp[0], x_fp[1], **pars) return drEdt**2 + drIdt**2 < mytol ###Output _____no_output_____ ###Markdown Exercise 5: Find the fixed points of the Wilson-Cowan modelFrom the above nullclines, we notice that the system features three fixed points with the parameters we used. To find their coordinates, we need to choose proper initial value to give to the `opt.root` function inside of the function `my_fp` we just defined, since the algorithm can only find fixed points in the vicinity of the initial value. In this exercise, you will use the function `my_fp` to find each of the fixed points by varying the initial values. Note that you can choose the values near the intersections of the nullclines as the initial values to calculate the fixed points. 
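To see concretely why the initial value matters, the short sketch below uses a toy 2D system (not the Wilson-Cowan model) with two roots, at $(1,1)$ and $(-1,-1)$: `opt.root` converges to whichever root lies near its starting point. ###Code
# Toy 2D system with two roots, (1, 1) and (-1, -1); the root that opt.root
# returns depends on the starting point, which is exactly the behavior you
# will exploit with my_fp below.
import scipy.optimize as opt  # already imported in the Setup section

def toy_system(v):
  x, y = v
  return [x**2 - 1, y - x]

print(opt.root(toy_system, [0.5, 0.5]).x)    # typically converges to [ 1.,  1.]
print(opt.root(toy_system, [-0.5, -0.5]).x)  # typically converges to [-1., -1.]
###Output _____no_output_____ ###Markdown With that in mind, pick starting values close to the three nullcline intersections in the exercise below.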
###Code
pars = default_pars()

######################################################################
# TODO: Provide initial values to calculate the fixed points
# Check if the x_fp's are correct with the function check_fp(x_fp)
# Hint: vary the initial values to find the correct fixed points
######################################################################

# my_plot_nullcline(pars)

# Find the first fixed point
# x_fp_1 = my_fp(pars, ..., ...)
# if check_fp(pars, x_fp_1):
#   plot_fp(x_fp_1)

# Find the second fixed point
# x_fp_2 = my_fp(pars, ..., ...)
# if check_fp(pars, x_fp_2):
#   plot_fp(x_fp_2)

# Find the third fixed point
# x_fp_3 = my_fp(pars, ..., ...)
# if check_fp(pars, x_fp_3):
#   plot_fp(x_fp_3)
###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_0dd7ba5a.py)*Example output:* Stability of a fixed point and eigenvalues of the Jacobian MatrixFirst, let's rewrite system $(1)$ as:\begin{align}&\frac{dr_E}{dt} = G_E(r_E,r_I)\\[0.5mm]&\frac{dr_I}{dt} = G_I(r_E,r_I)\end{align}where\begin{align}&G_E(r_E,r_I) = \frac{1}{\tau_E} [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a,\theta)]\\[1mm]&G_I(r_E,r_I) = \frac{1}{\tau_I} [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a,\theta)]\end{align}By definition, $\displaystyle\frac{dr_E}{dt}=0$ and $\displaystyle\frac{dr_I}{dt}=0$ at each fixed point. Therefore, if the initial state is exactly at the fixed point, the state of the system will not change as time evolves. However, if the initial state deviates slightly from the fixed point, there are two possibilities: 1. The trajectory will be attracted back to the fixed point, or 2. The trajectory will diverge from the fixed point. These two possibilities define the type of fixed point, i.e., stable or unstable. Similar to the 1D system studied in the previous tutorial, the stability of a fixed point $(r_E^*, r_I^*)$ can be determined by linearizing the dynamics of the system (can you figure out how?). The linearization will yield a matrix of first-order derivatives called the Jacobian matrix: \begin{equation} J= \left[ {\begin{array}{cc} \displaystyle{\frac{\partial}{\partial r_E}}G_E(r_E^*, r_I^*) & \displaystyle{\frac{\partial}{\partial r_I}}G_E(r_E^*, r_I^*)\\[1mm] \displaystyle\frac{\partial}{\partial r_E} G_I(r_E^*, r_I^*) & \displaystyle\frac{\partial}{\partial r_I}G_I(r_E^*, r_I^*) \\ \end{array} } \right] \quad (7)\end{equation}\\The eigenvalues of the Jacobian matrix calculated at the fixed point will determine whether it is a stable or unstable fixed point.\\We can now compute the derivatives needed to build the Jacobian matrix. Using the chain and product rules, the derivatives for the excitatory population are given by:\\\begin{align}&\frac{\partial}{\partial r_E} G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-1 + w_{EE} F_E'(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E;\alpha_E, \theta_E)] \\[1mm]&\frac{\partial}{\partial r_I} G_E(r_E^*, r_I^*)= \frac{1}{\tau_E} [-w_{EI} F_E'(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E;\alpha_E, \theta_E)]\end{align}\\The same applies to the inhibitory population. Exercise 6: Compute the Jacobian Matrix for the Wilson-Cowan modelHere, you can use `dF(x,a,theta)` defined in the `Helper functions` to calculate the derivative of the F-I curve. 
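As a quick reminder of the criterion before the exercise, the cell below applies `np.linalg.eig` to two arbitrary toy matrices (not the Wilson-Cowan Jacobian): when all eigenvalues have negative real part the fixed point is stable, while at least one eigenvalue with positive real part makes it unstable. ###Code
# Toy matrices (not the Wilson-Cowan Jacobian), used only to illustrate how
# eigenvalues are read out and interpreted
J_stable = np.array([[-1.0, 0.5],
                     [-0.5, -1.0]])   # eigenvalues -1 +/- 0.5j (stable)
J_unstable = np.array([[0.5, 0.0],
                       [0.0, -1.0]])  # eigenvalues 0.5 and -1 (unstable)

for name, J in [('stable example', J_stable), ('unstable example', J_unstable)]:
  evals = np.linalg.eig(J)[0]
  print(f"{name}: eigenvalues = {evals}, "
        f"max real part = {np.max(np.real(evals)):.2f}")
###Output _____no_output_____ ###Markdown In the exercise below you will assemble the actual $2\times2$ Jacobian of Equation (7) at a fixed point, using `dF` for the derivative $F'$ of the transfer function.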
###Code def get_eig_Jacobian(fp, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Compute eigenvalues of the Wilson-Cowan Jacobian matrix at fixed point.""" # Initialization rE, rI = fp J = np.zeros((2, 2)) ########################################################################### # TODO for students: compute J and disable the error raise NotImplementedError("Student excercise: compute the Jacobian matrix") ########################################################################### # Compute the four elements of the Jacobian matrix J[0, 0] = ... J[0, 1] = ... J[1, 0] = ... J[1, 1] = ... # Compute and return the eigenvalues evals = np.linalg.eig(J)[0] return evals # Uncomment below to test your function when you get the correct fixed point # eig_1 = get_eig_Jacobian(x_fp_1, **pars) # eig_2 = get_eig_Jacobian(x_fp_2, **pars) # eig_3 = get_eig_Jacobian(x_fp_3, **pars) # print(eig_1, 'Stable point') # print(eig_2, 'Unstable point') # print(eig_3, 'Stable point') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_e83cfc05.py) As is evident, the stable fixed points correspond to the negative eigenvalues, while unstable point corresponds to at least one positive eigenvalue. The sign of the eigenvalues is determined by the connectivity (interaction) between excitatory and inhibitory populations. Below we investigate the effect of $w_{EE}$ on the nullclines and the eigenvalues of the dynamical system. \* _Critical change is referred to as **pitchfork bifurcation**_. Effect of `wEE` on the nullclines and the eigenvalues ###Code # @title # @markdown Make sure you execute this cell to see the plot! eig_1_M = [] eig_2_M = [] eig_3_M = [] pars = default_pars() wEE_grid = np.linspace(6, 10, 40) my_thre = 7.9 for wEE in wEE_grid: x_fp_1 = [0., 0.] x_fp_2 = [.4, .1] x_fp_3 = [.8, .1] pars['wEE'] = wEE if wEE < my_thre: x_fp_1 = my_fp(pars, x_fp_1[0], x_fp_1[1]) eig_1 = get_eig_Jacobian(x_fp_1, **pars) eig_1_M.append(np.max(np.real(eig_1))) else: x_fp_1 = my_fp(pars, x_fp_1[0], x_fp_1[1]) eig_1 = get_eig_Jacobian(x_fp_1, **pars) eig_1_M.append(np.max(np.real(eig_1))) x_fp_2 = my_fp(pars, x_fp_2[0], x_fp_2[1]) eig_2 = get_eig_Jacobian(x_fp_2, **pars) eig_2_M.append(np.max(np.real(eig_2))) x_fp_3 = my_fp(pars, x_fp_3[0], x_fp_3[1]) eig_3 = get_eig_Jacobian(x_fp_3, **pars) eig_3_M.append(np.max(np.real(eig_3))) eig_1_M = np.array(eig_1_M) eig_2_M = np.array(eig_2_M) eig_3_M = np.array(eig_3_M) plt.figure(figsize=(8, 5.5)) plt.plot(wEE_grid, eig_1_M, 'ko', alpha=0.5) plt.plot(wEE_grid[wEE_grid >= my_thre], eig_2_M, 'bo', alpha=0.5) plt.plot(wEE_grid[wEE_grid >= my_thre], eig_3_M, 'ro', alpha=0.5) plt.xlabel(r'$w_{\mathrm{EE}}$') plt.ylabel('maximum real part of eigenvalue') plt.show() ###Output _____no_output_____ ###Markdown Interactive Demo: Nullclines position in the phase plane changes with parameter valuesIn this interactive widget, we will explore how the nullclines move for different values of the parameter $w_{EE}$. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! 
def plot_nullcline_diffwEE(wEE): """ plot nullclines for different values of wEE """ pars = default_pars(wEE=wEE) # plot the E, I nullclines Exc_null_rE = np.linspace(-0.01, .96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, .8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.figure(figsize=(12, 5.5)) plt.subplot(121) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.subplot(222) pars['rE_init'], pars['rI_init'] = 0.2, 0.2 rE, rI = simulate_wc(**pars) plt.plot(pars['range_t'], rE, 'b', label='E population', clip_on=False) plt.plot(pars['range_t'], rI, 'r', label='I population', clip_on=False) plt.ylabel('Activity') plt.legend(loc='best') plt.ylim(-0.05, 1.05) plt.title('E/I activity\nfor different initial conditions', fontweight='bold') plt.subplot(224) pars['rE_init'], pars['rI_init'] = 0.4, 0.1 rE, rI = simulate_wc(**pars) plt.plot(pars['range_t'], rE, 'b', label='E population', clip_on=False) plt.plot(pars['range_t'], rI, 'r', label='I population', clip_on=False) plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.ylim(-0.05, 1.05) plt.tight_layout() plt.show() _ = widgets.interact(plot_nullcline_diffwEE, wEE=(6., 10., .01)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_d4eb0391.py) We can also investigate the effect of different $w_{EI}$, $w_{IE}$, $w_{II}$, $\tau_{E}$, $\tau_{I}$, and $I_{E}^{\text{ext}}$ on the stability of fixed points. In addition, we can also consider the perturbation of the parameters of the gain curve $F(\cdot)$. Limit cycle - OscillationsFor some values of interaction terms ($w_{EE}, w_{IE}, w_{EI}, w_{II}$ the eigenvalues can become complex. When at least one pair of eigenvalues is complex, oscillations arise. The stability of oscillations is determined by the real part of the eigenvalues (+ve real part oscillations will grow, -ve real part oscillations will die out). The size of the complex part determines the frequency of oscillations. For instance, if we use a different set of parameters, $w_{EE}=6.4$, $w_{EI}=4.8$, $w_{IE}=6.$, $w_{II}=1.2$, and $I_{E}^{\text{ext}}=0.8$, then we shall observe that the E and I population activity start to oscillate! Please execute the cell below to check the oscillatory behavior. ###Code # @title # @markdown Make sure you execute this cell to see the oscillations! pars = default_pars(T=100.) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 pars['rE_init'], pars['rI_init'] = 0.25, 0.25 rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() ###Output _____no_output_____ ###Markdown Exercise 7: Plot the phase planeWe can also understand the oscillations of the population behavior using the phase plane. By plotting a set of trajectories with different initial states, we can see that these trajectories will move in a circle instead of converging to a fixed point. 
This circle is called "limit cycle" and shows the periodic oscillations of the $E$ and $I$ population behavior under some conditions.Try to plot the phase plane using the previously defined functions. ###Code pars = default_pars(T=100.) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 plt.figure(figsize=(7, 5.5)) my_plot_nullcline(pars) ############################################################################### # TODO for students: plot phase plane: nullclines, trajectories, fixed point # ############################################################################### # Find the correct fixed point # x_fp_1 = my_fp(pars, ..., ...) # if check_fp(pars, x_fp_1): # plot_fp(x_fp_1, position=(0, 0), rotation=40) my_plot_trajectories(pars, 0.2, 3, 'Sample trajectories \nwith different initial values') my_plot_vector(pars) plt.legend(loc=[1.01, 0.7]) plt.xlim(-0.05, 1.01) plt.ylim(-0.05, 0.65) plt.show() ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_03c5c8dd.py)*Example output:* Interactive Demo: Limit cycle and oscillations.From the above examples, the change of model parameters changes the shape of the nullclines and, accordingly, the behavior of the $E$ and $I$ populations from steady fixed points to oscillations. However, the shape of the nullclines is unable to fully determine the behavior of the network. The vector field also matters. To demonstrate this, here, we will investigate the effect of time constants on the population behavior. By changing the inhibitory time constant $\tau_I$, the nullclines do not change, but the network behavior changes substantially from steady state to oscillations with different frequencies. Such a dramatic change in the system behavior is referred to as a **bifurcation**. \\Please execute the code below to check this out. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def time_constant_effect(tau_i=0.5): pars = default_pars(T=100.) 
  pars['wEE'], pars['wEI'] = 6.4, 4.8
  pars['wIE'], pars['wII'] = 6.0, 1.2
  pars['I_ext_E'] = 0.8
  pars['tau_I'] = tau_i

  Exc_null_rE = np.linspace(0.0, .9, 100)
  Inh_null_rI = np.linspace(0.0, .6, 100)
  Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars)
  Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars)

  plt.figure(figsize=(12.5, 5.5))
  plt.subplot(121)  # nullclines
  plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline', zorder=2)
  plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline', zorder=2)
  plt.xlabel(r'$r_E$')
  plt.ylabel(r'$r_I$')

  # fixed point
  x_fp_1 = my_fp(pars, 0.5, 0.5)
  plt.plot(x_fp_1[0], x_fp_1[1], 'ko', zorder=2)
  eig_1 = get_eig_Jacobian(x_fp_1, **pars)

  # trajectories
  for ie in range(5):
    for ii in range(5):
      pars['rE_init'], pars['rI_init'] = 0.1 * ie, 0.1 * ii
      rE_tj, rI_tj = simulate_wc(**pars)
      plt.plot(rE_tj, rI_tj, 'k', alpha=0.3, zorder=1)

  # vector field
  EI_grid_E = np.linspace(0., 1.0, 20)
  EI_grid_I = np.linspace(0., 0.6, 20)
  rE, rI = np.meshgrid(EI_grid_E, EI_grid_I)
  drEdt, drIdt = EIderivs(rE, rI, **pars)
  n_skip = 2
  plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip],
             drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip],
             angles='xy', scale_units='xy', scale=10, facecolor='c')
  plt.title(r'$\tau_I=$'+'%.1f ms' % tau_i)

  plt.subplot(122)  # sample E/I trajectories
  pars['rE_init'], pars['rI_init'] = 0.25, 0.25
  rE, rI = simulate_wc(**pars)
  plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$')
  plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$')
  plt.xlabel('t (ms)')
  plt.ylabel('Activity')
  plt.title(r'$\tau_I=$'+'%.1f ms' % tau_i)
  plt.legend(loc='best')

  plt.tight_layout()
  plt.show()

_ = widgets.interact(time_constant_effect, tau_i=(0.2, 3, .1))
###Output _____no_output_____ ###Markdown Both $\tau_E$ and $\tau_I$ feature in the Jacobian of the two-population network (Equation 7). So it seems that, by increasing $\tau_I$, the eigenvalues corresponding to the stable fixed point become complex. Intuitively, when $\tau_I$ is smaller, inhibitory activity changes faster than excitatory activity. Once inhibition exceeds a certain value, the high inhibition suppresses the excitatory population, but that in turn means that the inhibitory population receives less input (through the excitatory connection). So inhibition decreases rapidly. But this means that excitation recovers -- and so on ... 
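You can check this argument numerically with the sketch below, which assumes you have already completed `my_fp` and `get_eig_Jacobian` in the exercises above. For the oscillation parameter set used in the widget, it scans $\tau_I$ and prints the Jacobian eigenvalues at the fixed point found from the initial guess $(0.5, 0.5)$, so you can watch the real and imaginary parts change. ###Code
# Sketch: scan tau_I for the oscillation parameter set and inspect the
# Jacobian eigenvalues at the fixed point found near (0.5, 0.5).
# Assumes my_fp and get_eig_Jacobian (Exercises 5-6) are implemented.
for tau_i in [0.5, 1.0, 1.5, 2.0, 3.0]:
  pars = default_pars(T=100.)
  pars['wEE'], pars['wEI'] = 6.4, 4.8
  pars['wIE'], pars['wII'] = 6.0, 1.2
  pars['I_ext_E'] = 0.8
  pars['tau_I'] = tau_i
  x_fp = my_fp(pars, 0.5, 0.5)
  evals = get_eig_Jacobian(x_fp, **pars)
  print(f"tau_I = {tau_i:.1f} ms: eigenvalues = {np.round(evals, 3)}, "
        f"max real part = {np.max(np.real(evals)):.3f}")
###Output _____no_output_____ ###Markdown 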
--- Bonus 2: Inhibition-stabilized network (ISN)As described above, one can obtain the linear approximation around the fixed point as \begin{equation} \frac{d}{dr} \vec{R}= \left[ {\begin{array}{cc} \displaystyle{\frac{\partial G_E}{\partial r_E}} & \displaystyle{\frac{\partial G_E}{\partial r_I}}\\[1mm] \displaystyle\frac{\partial G_I}{\partial r_E} & \displaystyle\frac{\partial G_I}{\partial r_I} \\ \end{array} } \right] \vec{R},\end{equation}\\where $\vec{R} = [r_E, r_I]^{\rm T}$ is the vector of the E/I activity.Let's direct our attention to the excitatory subpopulation which follows:\\\begin{equation}\frac{dr_E}{dt} = \frac{\partial G_E}{\partial r_E}\cdot r_E + \frac{\partial G_E}{\partial r_I} \cdot r_I\end{equation}\\Recall that, around fixed point $(r_E^*, r_I^*)$:\\\begin{align}&\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-1 + w_{EE} F'_{E}(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E; \alpha_E, \theta_E)] \qquad (8)\\[1mm]&\frac{\partial}{\partial r_I}G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-w_{EI} F'_{E}(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E; \alpha_E, \theta_E)] \qquad (9)\\[1mm]&\frac{\partial}{\partial r_E}G_I(r_E^*, r_I^*) = \frac{1}{\tau_I} [w_{IE} F'_{I}(w_{IE}r_E^* -w_{II}r_I^* + I^{\text{ext}}_I; \alpha_I, \theta_I)] \qquad (10)\\[1mm]&\frac{\partial}{\partial r_I}G_I(r_E^*, r_I^*) = \frac{1}{\tau_I} [-1-w_{II} F'_{I}(w_{IE}r_E^* -w_{II}r_I^* + I^{\text{ext}}_I; \alpha_I, \theta_I)] \qquad (11)\end{align} \\From Equation. (8), it is clear that $\displaystyle{\frac{\partial G_E}{\partial r_I}}$ is negative since the $\displaystyle{\frac{dF}{dx}}$ is always positive. It can be understood by that the recurrent inhibition from the inhibitory activity ($I$) can reduce the excitatory ($E$) activity. However, as described above, $\displaystyle{\frac{\partial G_E}{\partial r_E}}$ has negative terms related to the "leak" effect, and positive term related to the recurrent excitation. Therefore, it leads to two different regimes:- $\displaystyle{\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*)}<0$, **noninhibition-stabilizednetwork (non-ISN) regime**- $\displaystyle{\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*)}>0$, **inhibition-stabilizednetwork (ISN) regime** Exercise 8: Compute $\displaystyle{\frac{\partial G_E}{\partial r_E}}$Implemet the function to calculate the $\displaystyle{\frac{\partial G_E}{\partial r_E}}$ for the default parameters, and the parameters of the limit cycle case. ###Code def get_dGdE(fp, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars): """ Simulate the Wilson-Cowan equations Args: fp : fixed point (E, I), array Other arguments are parameters of the Wilson-Cowan model Returns: J : the 2x2 Jacobian matrix """ rE, rI = fp ########################################################################## # TODO for students: compute dGdrE and disable the error raise NotImplementedError("Student excercise: compute the dG/dE, Eq. (13)") ########################################################################## # Calculate the J[0,0] dGdrE = ... 
return dGdrE # Uncomment below to test your function pars = default_pars() x_fp_1 = my_fp(pars, 0.1, 0.1) x_fp_2 = my_fp(pars, 0.3, 0.3) x_fp_3 = my_fp(pars, 0.8, 0.6) # dGdrE1 = get_dGdE(x_fp_1, **pars) # dGdrE2 = get_dGdE(x_fp_2, **pars) # dGdrE3 = get_dGdE(x_fp_3, **pars) print(f'For the default case:') # print(f'dG/drE(fp1) = {dGdrE1:.3f}') # print(f'dG/drE(fp2) = {dGdrE2:.3f}') # print(f'dG/drE(fp3) = {dGdrE3:.3f}') print('\n') pars = default_pars(wEE=6.4, wEI=4.8, wIE=6.0, wII=1.2, I_ext_E=0.8) x_fp_lc = my_fp(pars, 0.8, 0.8) # dGdrE_lc = get_dGdE(x_fp_lc, **pars) print('For the limit cycle case:') # print(f'dG/drE(fp_lc) = {dGdrE_lc:.3f}') ###Output _____no_output_____ ###Markdown **SAMPLE OUTPUT**```For the default case:dG/drE(fp1) = -0.650dG/drE(fp2) = 1.519dG/drE(fp3) = -0.706For the limit cycle case:dG/drE(fp_lc) = 0.837``` [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_1ff7a08c.py) Nullcline analysis of the ISNRecall that the E nullcline follows\\\begin{align}r_E = F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E). \end{align}\\That is, the firing rate $r_E$ can be a function of $r_I$. Let's take the derivative of $r_E$ over $r_I$, and obtain\\\begin{align}&\frac{dr_E}{dr_I} = F_E' \cdot (w_{EE}\frac{dr_E}{dr_I} -w_{EI}) \iff \\&(1-F_E'w_{EE})\frac{dr_E}{dr_I} = -F_E' w_{EI} \iff \\&\frac{dr_E}{dr_I} = \frac{F_E' w_{EI}}{F_E'w_{EE}-1}.\end{align}\\That is, in the phase plane `rI-rE`-plane, we can obtain the slope along the E nullcline as\\$$\frac{dr_I}{dr_E} = \frac{F_E'w_{EE}-1}{F_E' w_{EI}} \qquad (12)$$Similarly, we can obtain the slope along the I nullcline as \\$$\frac{dr_I}{dr_E} = \frac{F_I'w_{IE}}{F_I' w_{II}+1} \qquad (13)$$\\Then, we can find that $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm I-nullcline} >0$ in Equation (13).\\However, in Equation (12), the sign of $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}$ depends on the sign of $(F_E'w_{EE}-1)$. Note that, $(F_E'w_{EE}-1)$ is the same as what we show above (Equation (8)). Therefore, we can have the following results:- $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}<0$, **noninhibition-stabilizednetwork (non-ISN) regime**- $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}>0$, **inhibition-stabilizednetwork (ISN) regime**\\In addition, it is important to point out the following two conclusions: \\**Conclusion 1:** The stability of a fixed point can determine the relationship between the slopes Equations (12) and (13). 
As discussed above, the fixed point is stable when the Jacobian matrix ($J$ in Equation (7)) has two eigenvalues with a negative real part, which indicates a positive determinant of $J$, i.e., $\text{det}(J)>0$.From the Jacobian matrix definition and from Equations (8-11), we can obtain:$ J= \left[ {\begin{array}{cc} \displaystyle{\frac{1}{\tau_E}(w_{EE}F_E'-1)} & \displaystyle{-\frac{1}{\tau_E}w_{EI}F_E'}\\[1mm] \displaystyle {\frac{1}{\tau_I}w_{IE}F_I'}& \displaystyle {\frac{1}{\tau_I}(-w_{II}F_I'-1)} \\ \end{array} } \right] $\\Note that, if we let \\$ T= \left[ {\begin{array}{cc} \displaystyle{\tau_E} & \displaystyle{0}\\[1mm] \displaystyle 0& \displaystyle \tau_I \\ \end{array} } \right] $, $ F= \left[ {\begin{array}{cc} \displaystyle{F_E'} & \displaystyle{0}\\[1mm] \displaystyle 0& \displaystyle F_I' \\ \end{array} } \right] $, and $ W= \left[ {\begin{array}{cc} \displaystyle{w_{EE}} & \displaystyle{-w_{EI}}\\[1mm] \displaystyle w_{IE}& \displaystyle -w_{II} \\ \end{array} } \right] $\\then, using matrix notation, $J=T^{-1}(F W - I)$ where $I$ is the identity matrix, i.e., $I = \begin{bmatrix} 1 & 0 \\0 & 1 \end{bmatrix}.$ \\Therefore, $\det{(J)}=\det{(T^{-1}(F W - I))}=(\det{(T^{-1})})(\det{(F W - I)}).$Since $\det{(T^{-1})}>0$, as time constants are positive by definition, the sign of $\det{(J)}$ is the same as the sign of $\det{(F W - I)}$, and so$$\det{(FW - I)} = (F_E' w_{EI})(F_I'w_{IE}) - (F_I' w_{II} + 1)(F_E'w_{EE} - 1) > 0.$$\\Then, combining this with Equations (12) and (13), we can obtain$$\frac{\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm I-nullcline}}{\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}} > 1. $$Therefore, at the stable fixed point, I nullcline has a steeper slope than the E nullcline. **Conclusion 2:** Effect of adding input to the inhibitory population.While adding the input $\delta I^{\rm ext}_I$ into the inhibitory population, we can find that the E nullcline (Equation (5)) stays the same, while the I nullcline has a pure left shift: the original I nullcline equation,\\\begin{equation}r_I = F_I(w_{IE}r_E-w_{II}r_I + I^{\text{ext}}_I ; \alpha_I, \theta_I)\end{equation}\\remains true if we take $I^{\text{ext}}_I \rightarrow I^{\text{ext}}_I +\delta I^{\rm ext}_I$ and $r_E\rightarrow r_E'=r_E-\frac{\delta I^{\rm ext}_I}{w_{IE}}$ to obtain\\\begin{equation}r_I = F_I(w_{IE}r_E'-w_{II}r_I + I^{\text{ext}}_I +\delta I^{\rm ext}_I; \alpha_I, \theta_I)\end{equation}\\Putting these points together, we obtain the phase plane pictures shown below. After adding input to the inhibitory population, it can be seen in the trajectories above and the phase plane below that, in an **ISN**, $r_I$ will increase first but then decay to the new fixed point in which both $r_I$ and $r_E$ are decreased compared to the original fixed point. However, by adding $\delta I^{\rm ext}_I$ into a **non-ISN**, $r_I$ will increase while $r_E$ will decrease. Interactive Demo: Nullclines of Example **ISN** and **non-ISN**In this interactive widget, we inject excitatory ($I^{\text{ext}}_I>0$) or inhibitory ($I^{\text{ext}}_I<0$) drive into the inhibitory population when the system is at its equilibrium (with parameters $w_{EE}=6.4$, $w_{EI}=4.8$, $w_{IE}=6.$, $w_{II}=1.2$, $I_{E}^{\text{ext}}=0.8$, $\tau_I = 0.8$, and $I^{\text{ext}}_I=0$). How does the firing rate of the $I$ population changes with excitatory vs inhibitory drive into the inhibitory population? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! 
pars = default_pars(T=50., dt=0.1) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 pars['tau_I'] = 0.8 def ISN_I_perturb(dI=0.1): Lt = len(pars['range_t']) pars['I_ext_I'] = np.zeros(Lt) pars['I_ext_I'][int(Lt / 2):] = dI pars['rE_init'], pars['rI_init'] = 0.6, 0.26 rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 1.5)) plt.plot(pars['range_t'], pars['I_ext_I'], 'k') plt.xlabel('t (ms)') plt.ylabel(r'$I_I^{\mathrm{ext}}$') plt.ylim(pars['I_ext_I'].min() - 0.01, pars['I_ext_I'].max() + 0.01) plt.show() plt.figure(figsize=(8, 4.5)) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rE[int(Lt / 2) - 1] * np.ones(Lt), 'b--') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.plot(pars['range_t'], rI[int(Lt / 2) - 1] * np.ones(Lt), 'r--') plt.ylim(0, 0.8) plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() _ = widgets.interact(ISN_I_perturb, dI=(-0.2, 0.21, .05)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_cec4906e.py) --- Bonus 3: Fixed point and working memory The input into the neurons measured in the experiment is often very noisy ([links](http://www.scholarpedia.org/article/Stochastic_dynamical_systems)). Here, the noisy synaptic input current is modeled as an Ornstein-Uhlenbeck (OU)process, which has been discussed several times in the previous tutorials. ###Code # @markdown Make sure you execute this cell to enable the function my_OU and plot the input current! def my_OU(pars, sig, myseed=False): """ Expects: pars : parameter dictionary sig : noise amplitute myseed : random seed. int or boolean Returns: I : Ornstein-Uhlenbeck input current """ # Retrieve simulation parameters dt, range_t = pars['dt'], pars['range_t'] Lt = range_t.size tau_ou = pars['tau_ou'] # [ms] # set random seed if myseed: np.random.seed(seed=myseed) else: np.random.seed() # Initialize noise = np.random.randn(Lt) I_ou = np.zeros(Lt) I_ou[0] = noise[0] * sig # generate OU for it in range(Lt-1): I_ou[it+1] = (I_ou[it] + dt / tau_ou * (0. - I_ou[it]) + np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1]) return I_ou pars = default_pars(T=50) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 I_ou = my_OU(pars, sig=sig_ou, myseed=2020) plt.figure(figsize=(8, 5.5)) plt.plot(pars['range_t'], I_ou, 'b') plt.xlabel('Time (ms)') plt.ylabel(r'$I_{\mathrm{OU}}$') plt.show() ###Output _____no_output_____ ###Markdown With the default parameters, the system fluctuates around a resting state with the noisy input. ###Code # @markdown Execute this cell to plot activity with noisy input current pars = default_pars(T=100) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 pars['I_ext_E'] = my_OU(pars, sig=sig_ou, myseed=20201) pars['I_ext_I'] = my_OU(pars, sig=sig_ou, myseed=20202) pars['rE_init'], pars['rI_init'] = 0.1, 0.1 rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) ax = plt.subplot(111) ax.plot(pars['range_t'], rE, 'b', label='E population') ax.plot(pars['range_t'], rI, 'r', label='I population') ax.set_xlabel('t (ms)') ax.set_ylabel('Activity') ax.legend(loc='best') plt.show() ###Output _____no_output_____ ###Markdown Interactive Demo: Short pulse induced persistent activityThen, let's use a brief 10-ms positive current to the E population when the system is at its equilibrium. 
When this amplitude (SE below) is sufficiently large, a persistent activity is produced that outlasts the transient input. What is the firing rate of the persistent activity, and what is the critical input strength? Try to understand the phenomena from the above phase-plane analysis. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def my_inject(pars, t_start, t_lag=10.): """ Expects: pars : parameter dictionary t_start : pulse starts [ms] t_lag : pulse lasts [ms] Returns: I : extra pulse time """ # Retrieve simulation parameters dt, range_t = pars['dt'], pars['range_t'] Lt = range_t.size # Initialize I = np.zeros(Lt) # pulse timing N_start = int(t_start / dt) N_lag = int(t_lag / dt) I[N_start:N_start + N_lag] = 1. return I pars = default_pars(T=100) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 pars['I_ext_I'] = my_OU(pars, sig=sig_ou, myseed=2021) pars['rE_init'], pars['rI_init'] = 0.1, 0.1 # pulse I_pulse = my_inject(pars, t_start=20., t_lag=10.) L_pulse = sum(I_pulse > 0.) def WC_with_pulse(SE=0.): pars['I_ext_E'] = my_OU(pars, sig=sig_ou, myseed=2022) pars['I_ext_E'] += SE * I_pulse rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) ax = plt.subplot(111) ax.plot(pars['range_t'], rE, 'b', label='E population') ax.plot(pars['range_t'], rI, 'r', label='I population') ax.plot(pars['range_t'][I_pulse > 0.], 1.0*np.ones(L_pulse), 'r', lw=3.) ax.text(25, 1.05, 'stimulus on', horizontalalignment='center', verticalalignment='bottom') ax.set_ylim(-0.03, 1.2) ax.set_xlabel('t (ms)') ax.set_ylabel('Activity') ax.legend(loc='best') plt.show() _ = widgets.interact(WC_with_pulse, SE=(0.0, 1.0, .05)) ###Output _____no_output_____ ###Markdown Neuromatch Academy: Week 2, Day 4, Tutorial 2 Neuronal Network Dynamics: Wilson-Cowan Model__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom --- Tutorial ObjectivesIn the previous tutorial, you became familiar with a neuronal network consisting of only an excitatory population. Here, we extend the approach we used to include both excitatory and inhibitory neuronal populations in our network. A simple, yet powerful model to study the dynamics of two interacting populations of excitatory and inhibitory neurons, is the so-called **Wilson-Cowan** rate model, which will be the subject of this tutorial.The objectives of this tutorial are to:- Write the **Wilson-Cowan** equations for the firing rate dynamics of a 2D system composed of an excitatory (E) and an inhibitory (I) population of neurons- Simulate the dynamics of the system, i.e., Wilson-Cowan model.- Plot the frequency-current (F-I) curves for both populations (i.e., E and I).- Visualize and inspect the behavior of the system using **phase plane analysis**, **vector fields**, and **nullclines**.Bonus steps:- Find and plot the **fixed points** of the Wilson-Cowan model.- Investigate the stability of the Wilson-Cowan model by linearizing its dynamics and examining the **Jacobian matrix**.- Learn how the Wilson-Cowan model can reach an oscillatory state.Bonus steps (applications):- Visualize the behavior of an Inhibition-stabilized network.- Simulate working memory using the Wilson-Cowan model.\\Reference paper:_[Wilson H and Cowan J (1972) Excitatory and inhibitory interactions in localized populations of model neurons. 
Biophysical Journal 12](https://doi.org/10.1016/S0006-3495(72)86068-5)_ --- Setup ###Code # Imports import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt # root-finding algorithm # @title Figure Settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # @title Helper functions def default_pars(**kwargs): pars = {} # Excitatory parameters pars['tau_E'] = 1. # Timescale of the E population [ms] pars['a_E'] = 1.2 # Gain of the E population pars['theta_E'] = 2.8 # Threshold of the E population # Inhibitory parameters pars['tau_I'] = 2.0 # Timescale of the I population [ms] pars['a_I'] = 1.0 # Gain of the I population pars['theta_I'] = 4.0 # Threshold of the I population # Connection strength pars['wEE'] = 9. # E to E pars['wEI'] = 4. # I to E pars['wIE'] = 13. # E to I pars['wII'] = 11. # I to I # External input pars['I_ext_E'] = 0. pars['I_ext_I'] = 0. # simulation parameters pars['T'] = 50. # Total duration of simulation [ms] pars['dt'] = .1 # Simulation time step [ms] pars['rE_init'] = 0.2 # Initial value of E pars['rI_init'] = 0.2 # Initial value of I # External parameters if any for k in kwargs: pars[k] = kwargs[k] # Vector of discretized time points [ms] pars['range_t'] = np.arange(0, pars['T'], pars['dt']) return pars def F(x, a, theta): """ Population activation function, F-I curve Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: f : the population activation response f(x) for input x """ # add the expression of f = F(x) f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1 return f def dF(x, a, theta): """ Derivative of the population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: dFdx : Derivative of the population activation function. 
""" dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2 return dFdx def plot_FI_inverse(x, a, theta): f, ax = plt.subplots() ax.plot(x, F_inv(x, a=a, theta=theta)) ax.set(xlabel="$x$", ylabel="$F^{-1}(x)$") def plot_FI_EI(x, FI_exc, FI_inh): plt.figure() plt.plot(x, FI_exc, 'b', label='E population') plt.plot(x, FI_inh, 'r', label='I population') plt.legend(loc='lower right') plt.xlabel('x (a.u.)') plt.ylabel('F(x)') plt.show() def my_test_plot(t, rE1, rI1, rE2, rI2): plt.figure() ax1 = plt.subplot(211) ax1.plot(pars['range_t'], rE1, 'b', label='E population') ax1.plot(pars['range_t'], rI1, 'r', label='I population') ax1.set_ylabel('Activity') ax1.legend(loc='best') ax2 = plt.subplot(212, sharex=ax1, sharey=ax1) ax2.plot(pars['range_t'], rE2, 'b', label='E population') ax2.plot(pars['range_t'], rI2, 'r', label='I population') ax2.set_xlabel('t (ms)') ax2.set_ylabel('Activity') ax2.legend(loc='best') plt.tight_layout() plt.show() def plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI): plt.figure() plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.show() def my_plot_nullcline(pars): Exc_null_rE = np.linspace(-0.01, 0.96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, 0.8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') def my_plot_vector(pars, my_n_skip=2, myscale=5): EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = my_n_skip plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=myscale, facecolor='c') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectory(pars, mycolor, x_init, mylabel): pars = pars.copy() pars['rE_init'], pars['rI_init'] = x_init[0], x_init[1] rE_tj, rI_tj = simulate_wc(**pars) plt.plot(rE_tj, rI_tj, color=mycolor, label=mylabel) plt.plot(x_init[0], x_init[1], 'o', color=mycolor, ms=8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectories(pars, dx, n, mylabel): """ Solve for I along the E_grid from dE/dt = 0. Expects: pars : Parameter dictionary dx : increment of initial values n : n*n trjectories mylabel : label for legend Returns: figure of trajectory """ pars = pars.copy() for ie in range(n): for ii in range(n): pars['rE_init'], pars['rI_init'] = dx * ie, dx * ii rE_tj, rI_tj = simulate_wc(**pars) if (ie == n-1) & (ii == n-1): plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8, label=mylabel) else: plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def plot_complete_analysis(pars): plt.figure(figsize=(7.7, 6.)) # plot example trajectories my_plot_trajectories(pars, 0.2, 6, 'Sample trajectories \nfor different init. 
conditions') my_plot_trajectory(pars, 'orange', [0.6, 0.8], 'Sample trajectory for \nlow activity') my_plot_trajectory(pars, 'm', [0.6, 0.6], 'Sample trajectory for \nhigh activity') # plot nullclines my_plot_nullcline(pars) # plot vector field EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = 2 plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=5., facecolor='c') plt.legend(loc=[1.02, 0.57], handlelength=1) plt.show() def plot_fp(x_fp, position=(0.02, 0.1), rotation=0): plt.plot(x_fp[0], x_fp[1], 'ko', ms=8) plt.text(x_fp[0] + position[0], x_fp[1] + position[1], f'Fixed Point1=\n({x_fp[0]:.3f}, {x_fp[1]:.3f})', horizontalalignment='center', verticalalignment='bottom', rotation=rotation) ###Output _____no_output_____ ###Markdown The helper functions included:- Parameter dictionary: `default_pars(**kwargs)`. You can use: - `pars = default_pars()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars(T=T_sim, dt=time_step)` to set a different simulation time and time step - After `pars = default_pars()`, use `par['New_para'] = value` to add a new parameter with its value - Pass to functions that accept individual parameters with `func(**pars)`- F-I curve: `F(x, a, theta)`- Derivative of the F-I curve: `dF(x, a, theta)`- Plotting utilities --- Section 1: Wilson-Cowan model of excitatory and inhibitory populations ###Code # @title Video 1: Phase analysis of the Wilson-Cowan E-I model from IPython.display import YouTubeVideo video = YouTubeVideo(id="GCpQmh45crM", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ###Output _____no_output_____ ###Markdown Section 1.1: Mathematical description of the WC modelMany of the rich dynamics recorded in the brain are generated by the interaction of excitatory and inhibitory subtype neurons. Here, similar to what we did in the previous tutorial, we will model two coupled populations of E and I neurons (**Wilson-Cowan** model). We can write two coupled differential equations, each representing the dynamics of the excitatory or inhibitory population:\begin{align}\tau_E \frac{dr_E}{dt} &= -r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)\\\tau_I \frac{dr_I}{dt} &= -r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) \qquad (1)\end{align}$r_E(t)$ represents the average activation (or firing rate) of the excitatory population at time $t$, and $r_I(t)$ the activation (or firing rate) of the inhibitory population. The parameters $\tau_E$ and $\tau_I$ control the timescales of the dynamics of each population. Connection strengths are given by: $w_{EE}$ (E $\rightarrow$ E), $w_{EI}$ (I $\rightarrow$ E), $w_{IE}$ (E $\rightarrow$ I), and $w_{II}$ (I $\rightarrow$ I). The terms $w_{EI}$ and $w_{IE}$ represent connections from inhibitory to excitatory population and vice versa, respectively. The transfer functions (or F-I curves) $F_E(x;a_E,\theta_E)$ and $F_I(x;a_I,\theta_I)$ can be different for the excitatory and the inhibitory populations. Exercise 1: Plot out the F-I curves for the E and I populations Let's first plot out the F-I curves for the E and I populations using the function defined above with default parameter values. 
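As a quick sanity check of the transfer function before plotting: at the threshold, $x=\theta$, the helper `F` reduces exactly to $F(\theta; a, \theta) = \frac{1}{2} - \frac{1}{1+e^{a\theta}}$, i.e., a value slightly below 0.5. The short cell below verifies this for the default E and I parameters. ###Code
# Sanity check of the helper F at the threshold x = theta:
# F(theta; a, theta) = 1/2 - 1/(1 + exp(a*theta))
pars = default_pars()
for pop, a, theta in [('E', pars['a_E'], pars['theta_E']),
                      ('I', pars['a_I'], pars['theta_I'])]:
  expected = 0.5 - 1 / (1 + np.exp(a * theta))
  print(f"{pop} population: F(theta) = {F(theta, a, theta):.4f}, "
        f"expected = {expected:.4f}")
###Output _____no_output_____ ###Markdown Now plot the full F-I curves for both populations in the exercise below.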
###Code pars = default_pars() x = np.arange(0, 10, .1) print(pars['a_E'], pars['theta_E']) print(pars['a_I'], pars['theta_I']) ################################################################### # TODO for students: compute and plot the F-I curve here # # Note: aE, thetaE, aI and theta_I are in the dictionray 'pars' # ################################################################### # Compute the F-I curve of the excitatory population FI_exc = ... # Compute the F-I curve of the inhibitory population FI_inh = ... # Uncomment when you fill the (...) # plot_FI_EI(x, FI_exc, FI_inh) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_b3a0ec15.py)*Example output:* Section 1.2: Simulation scheme for the Wilson-Cowan modelEquation $1$ can be integrated numerically. Using the Euler method, the dynamics of E and I populations can be simulated on a time-grid of stepsize $\Delta t$. The updates for the activity of the excitatory and the inhibitory populations can be written as:\begin{align}r_E[k+1] &= r_E[k] + \Delta r_E[k]\\r_I[k+1] &= r_I[k] + \Delta r_I[k] \end{align}with the increments\begin{align}\Delta r_E[k] &= \frac{\Delta t}{\tau_E}[-r_E[k] + F_E(w_{EE}r_E[k] -w_{EI}r_I[k] + I^{\text{ext}}_E[k];a_E,\theta_E)]\\\Delta r_I[k] &= \frac{\Delta t}{\tau_I}[-r_I[k] + F_I(w_{IE}r_E[k] -w_{II}r_I[k] + I^{\text{ext}}_I[k];a_I,\theta_I)] \end{align} Exercise 2: Numerically integrate the Wilson-Cowan equations ###Code def simulate_wc(tau_E, a_E, theta_E, tau_I, a_I, theta_I, wEE, wEI, wIE, wII, I_ext_E, I_ext_I, rE_init, rI_init, dt, range_t, **other_pars): """ Simulate the Wilson-Cowan equations Args: Parameters of the Wilson-Cowan model Returns: rE, rI (arrays) : Activity of excitatory and inhibitory populations """ # Initialize activity arrays Lt = range_t.size rE = np.append(rE_init, np.zeros(Lt - 1)) rI = np.append(rI_init, np.zeros(Lt - 1)) I_ext_E = I_ext_E * np.ones(Lt) I_ext_I = I_ext_I * np.ones(Lt) # Simulate the Wilson-Cowan equations for k in range(Lt - 1): ######################################################################## # TODO for students: compute drE and drI and remove the error raise NotImplementedError("Student exercise: compute the change in E/I") ######################################################################## # Calculate the derivative of the E population drE = ... # Calculate the derivative of the I population drI = ... # Update using Euler's method rE[k + 1] = rE[k] + drE rI[k + 1] = rI[k] + drI return rE, rI pars = default_pars() # Here are two trajectories with close intial values # Uncomment these lines to test your function # rE1, rI1 = simulate_wc(**default_pars(rE_init=.32, rI_init=.15)) # rE2, rI2 = simulate_wc(**default_pars(rE_init=.33, rI_init=.15)) # my_test_plot(pars['range_t'], rE1, rI1, rE2, rI2) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_af0bd722.py)*Example output:* The two plots above show the temporal evolution of excitatory ($r_E$, blue) and inhibitory ($r_I$, red) activity for two different sets of initial conditions. Interactive Demo: population trajectories with different initial valuesIn this interactive demo, we will simulate the Wilson-Cowan model and plot the trajectories of each population. 
What happens to the E and I population trajectories with different initial conditions? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def plot_EI_diffinitial(rE_init=0.0): pars = default_pars(rE_init=rE_init, rI_init=.15) rE, rI = simulate_wc(**pars) plt.figure() plt.plot(pars['range_t'], rE, 'b', label='E population') plt.plot(pars['range_t'], rI, 'r', label='I population') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() _ = widgets.interact(plot_EI_diffinitial, rE_init=(0.30, 0.35, .01)) ###Output _____no_output_____ ###Markdown Think!It is evident that the steady states of the neuronal response can be different when different initial states are chosen. Why is that? We will discuss this in the next section but try to think about it first. --- Section 2: Phase plane analysisJust like we used a graphical method to study the dynamics of a 1-D system in the previous tutorial, here we will learn a graphical approach called **phase plane analysis** to study the dynamics of a 2-D system like the Wilson-Cowan model. So far, we have plotted the activities of the two populations as a function of time, i.e., in the `Activity-t` plane, either the $(t, r_E(t))$ plane or the $(t, r_I(t))$ one. Instead, we can plot the two activities $r_E(t)$ and $r_I(t)$ against each other at any time point $t$. This characterization in the `rI-rE` plane $(r_I(t), r_E(t))$ is called the **phase plane**. Each line in the phase plane indicates how both $r_E$ and $r_I$ evolve with time. ###Code # @title Video 2: Nullclines and Vector Fields from IPython.display import YouTubeVideo video = YouTubeVideo(id="V2SBAK2Xf8Y", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ###Output _____no_output_____ ###Markdown Interactive Demo: From the Activity - time plane to the **$r_I$ - $r_E$** phase planeIn this demo, we will visualize the system dynamics using both the `Activity-time` and the `(rE, rI)` phase plane. The circles indicate the activities at a given time $t$, while the lines represent the evolution of the system for the entire duration of the simulation.Move the time slider to better understand how the top plot relates to the bottom plot. Does the bottom plot have explicit information about time? What information does it give us? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! 
pars = default_pars(T=10, rE_init=0.6, rI_init=0.8) rE, rI = simulate_wc(**pars) def plot_activity_phase(n_t): plt.figure(figsize=(8, 5.5)) plt.subplot(211) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.plot(pars['range_t'][n_t], rE[n_t], 'bo') plt.plot(pars['range_t'][n_t], rI[n_t], 'ro') plt.axvline(pars['range_t'][n_t], 0, 1, color='k', ls='--') plt.xlabel('t (ms)', fontsize=14) plt.ylabel('Activity', fontsize=14) plt.legend(loc='best', fontsize=14) plt.subplot(212) plt.plot(rE, rI, 'k') plt.plot(rE[n_t], rI[n_t], 'ko') plt.xlabel(r'$r_E$', fontsize=18, color='b') plt.ylabel(r'$r_I$', fontsize=18, color='r') plt.tight_layout() plt.show() _ = widgets.interact(plot_activity_phase, n_t=(0, len(pars['range_t']) - 1, 1)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_222c9db1.py) Section 2.1: Nullclines of the Wilson-Cowan EquationsAn important concept in the phase plane analysis is the "nullcline" which is defined as the set of points in the phase plane where the activity of one population (but not necessarily the other) does not change.In other words, the $E$ and $I$ nullclines of Equation $(1)$ are defined as the points where $\displaystyle{\frac{dr_E}{dt}}=0$, for the excitatory nullcline, or $\displaystyle\frac{dr_I}{dt}=0$ for the inhibitory nullcline. That is:\begin{align}-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E) &= 0 \qquad (2)\\[1mm]-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) &= 0 \qquad (3)\end{align} Exercise 3: Compute the nullclines of the Wilson-Cowan modelIn the next exercise, we will compute and plot the nullclines of the E and I population. Along the nullcline of excitatory population Equation $2$, you can calculate the inhibitory activity by rewriting Equation $2$ into\begin{align}r_I = \frac{1}{w_{EI}}\big{[}w_{EE}r_E - F_E^{-1}(r_E; a_E,\theta_E) + I^{\text{ext}}_E \big{]}. \qquad(4)\end{align}where $F_E^{-1}(r_E; a_E,\theta_E)$ is the inverse of the excitatory transfer function (defined below). Equation $4$ defines the $r_E$ nullcline. Along the nullcline of inhibitory population Equation $3$, you can calculate the excitatory activity by rewriting Equation $3$ into \begin{align}r_E = \frac{1}{w_{IE}} \big{[} w_{II}r_I + F_I^{-1}(r_I;a_I,\theta_I) - I^{\text{ext}}_I \big{]}. \qquad (5) \end{align}shere $F_I^{-1}(r_I; a_I,\theta_I)$ is the inverse of the inhibitory transfer function (defined below). Equation $5$ defines the $I$ nullcline. Note that, when computing the nullclines with Equations 4-5, we also need to calculate the inverse of the transfer functions. 
\\The inverse of the sigmoid shaped **f-I** function that we have been using is:$$F^{-1}(x; a, \theta) = -\frac{1}{a} \ln \left[ \frac{1}{x + \displaystyle \frac{1}{1+\text{e}^{a\theta}}} - 1 \right] + \theta \qquad (6)$$The first step is to implement the inverse transfer function: ###Code def F_inv(x, a, theta): """ Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: F_inverse : value of the inverse function """ ######################################################################### # TODO for students: compute F_inverse raise NotImplementedError("Student exercise: compute the inverse of F(x)") ######################################################################### # Calculate Finverse (ln(x) can be calculated as np.log(x)) F_inverse = ... return F_inverse pars = default_pars() x = np.linspace(1e-6, 1, 100) # Uncomment the next line to test your function # plot_FI_inverse(x, a=1, theta=3) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_937a4040.py)*Example output:* Now you can compute the nullclines, using Equations 4-5: ###Code def get_E_nullcline(rE, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars): """ Solve for rI along the rE from drE/dt = 0. Args: rE : response of excitatory population a_E, theta_E, wEE, wEI, I_ext_E : Wilson-Cowan excitatory parameters Other parameters are ignored Returns: rI : values of inhibitory population along the nullcline on the rE """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the E nullcline") ######################################################################### # calculate rI for E nullclines on rI rI = ... return rI def get_I_nullcline(rI, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """ Solve for E along the rI from dI/dt = 0. Args: rI : response of inhibitory population a_I, theta_I, wIE, wII, I_ext_I : Wilson-Cowan inhibitory parameters Other parameters are ignored Returns: rE : values of the excitatory population along the nullcline on the rI """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the I nullcline") ######################################################################### # calculate rE for I nullclines on rI rE = ... return rE pars = default_pars() Exc_null_rE = np.linspace(-0.01, 0.96, 100) Inh_null_rI = np.linspace(-.01, 0.8, 100) # Uncomment these lines to test your functions # Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) # Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) # plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_2366ea57.py)*Example output:* Note that by definition along the blue line in the phase-plane spanned by $r_E, r_I$, $\displaystyle{\frac{dr_E(t)}{dt}} = 0$, therefore, it is called a nullcline. 
That is, the blue nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_E(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_E(t)}{dt}} < 0$.The same is true for the red line along which $\displaystyle{\frac{dr_I(t)}{dt}} = 0$. That is, the red nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_I(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_I(t)}{dt}} < 0$. Section 2.2: Vector fieldHow can the phase plane and the nullcline curves help us understand the behavior of the Wilson-Cowan model? The activities of the $E$ and $I$ populations $r_E(t)$ and $r_I(t)$ at each time point $t$ correspond to a single point in the phase plane, with coordinates $(r_E(t),r_I(t))$. Therefore, the time-dependent trajectory of the system can be described as a continuous curve in the phase plane, and the tangent vector to the trajectory, which is defined as the vector $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$, indicates the direction towards which the activity is evolving and how fast is the activity changing along each axis. In fact, for each point $(E,I)$ in the phase plane, we can compute the tangent vector $\bigg{(}\displaystyle{\frac{dr_E}{dt},\frac{dr_I}{dt}}\bigg{)}$, which indicates the behavior of the system when it traverses that point. The map of tangent vectors in the phase plane is called **vector field**. The behavior of any trajectory in the phase plane is determined by i) the initial conditions $(r_E(0),r_I(0))$, and ii) the vector field $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$.In general, the value of the vector field at a particular point in the phase plane is represented by an arrow. The orientation and the size of the arrow reflect the direction and the norm of the vector, respectively. Exercise 4: Compute and plot the vector field $\displaystyle{\Big{(}\frac{dr_E}{dt}, \frac{dr_I}{dt} \Big{)}}$Note that\begin{align}\frac{dr_E}{dt} &= [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)]\frac{1}{\tau_E}\\\frac{dr_I}{dt} &= [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I)]\frac{1}{\tau_I} \end{align} ###Code def EIderivs(rE, rI, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Time derivatives for E/I variables (dE/dt, dI/dt).""" ###################################################################### # TODO for students: compute drEdt and drIdt and disable the error raise NotImplementedError("Student exercise: compute the vector field") ###################################################################### # Compute the derivative of rE drEdt = ... # Compute the derivative of rI drIdt = ... return drEdt, drIdt # Uncomment below to test your function # plot_complete_analysis(default_pars()) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_5a629797.py)*Example output:* The last phase plane plot shows us that: - Trajectories seem to follow the direction of the vector field- Different trajectories eventually always reach one of two points depending on the initial conditions. - The two points where the trajectories converge are the intersection of the two nullcline curves. Think! 
There are, in total, three intersection points, meaning that the system has three fixed points.- One of the fixed points (the one in the middle) is never the final state of a trajectory. Why is that? - Why do the arrows tend to get smaller as they approach the fixed points? [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_3d37729b.py) --- SummaryCongratulations! You have finished the second day of the last week of the neuromatch academy! Here, you learned how to simulate a rate-based model consisting of excitatory and inhibitory populations of neurons.In the last tutorial on dynamical neuronal networks you learned to:- Implement and simulate a 2D system composed of an E and an I population of neurons using the **Wilson-Cowan** model- Plot the frequency-current (F-I) curves for both populations- Examine the behavior of the system using **phase plane analysis**, **vector fields**, and **nullclines**.Do you have more time? Have you finished early? We have more fun material for you!Below are some more advanced concepts on dynamical systems:- You will learn how to find the fixed points of such a system, and to investigate its stability by linearizing its dynamics and examining the **Jacobian matrix**.- You will identify conditions under which the Wilson-Cowan model can exhibit oscillations.If you need even more, there are two applications of the Wilson-Cowan model:- Visualization of an Inhibition-stabilized network- Simulation of working memory --- Bonus 1: Fixed points, stability analysis, and limit cycles in the Wilson-Cowan model ###Code # @title Video 3: Fixed points and their stability from IPython.display import YouTubeVideo video = YouTubeVideo(id="jIx26iQ69ps", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ###Output _____no_output_____ ###Markdown Fixed Points of the E/I systemClearly, the intersection points of the two nullcline curves are the fixed points of the Wilson-Cowan model in Equation $(1)$. In the next exercise, we will find the coordinates of all fixed points for a given set of parameters.We'll make use of two functions, similar to ones we saw in the previous tutorial, which use a root-finding algorithm to find the fixed points of the system with excitatory and inhibitory populations.
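If `opt.root` is unfamiliar, the following minimal sketch (a toy system invented purely for illustration, unrelated to the Wilson-Cowan equations) shows the calling pattern that `my_fp` uses below: pass a function returning the vector of time derivatives together with an initial guess, and `opt.root` returns a nearby zero.

```python
# Toy illustration of scipy.optimize.root; the system below is made up for this example.
import numpy as np
import scipy.optimize as opt

def toy_rhs(v):
    """Right-hand side of a toy 2D system: dx/dt = x - y, dy/dt = x**2 + y - 2."""
    x, y = v
    return np.array([x - y, x**2 + y - 2.0])

guess = np.array([0.5, 0.5])
sol = opt.root(toy_rhs, guess)
print(sol.x)           # from this guess, should converge to approximately [1., 1.]
print(toy_rhs(sol.x))  # residual, should be close to [0., 0.]
```

The same pattern, applied to the Wilson-Cowan right-hand side, is wrapped in `my_fp` in the next cell; `check_fp` then verifies that the returned point really is a fixed point.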
###Code # @markdown *Execute the cell to define `my_fp` and `check_fp`* def my_fp(pars, rE_init, rI_init): """ Use opt.root function to solve Equations (2)-(3) from initial values """ tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E'] tau_I, a_I, theta_I = pars['tau_I'], pars['a_I'], pars['theta_I'] wEE, wEI = pars['wEE'], pars['wEI'] wIE, wII = pars['wIE'], pars['wII'] I_ext_E, I_ext_I = pars['I_ext_E'], pars['I_ext_I'] # define the right hand of wilson-cowan equations def my_WCr(x): rE, rI = x drEdt = (-rE + F(wEE * rE - wEI * rI + I_ext_E, a_E, theta_E)) / tau_E drIdt = (-rI + F(wIE * rE - wII * rI + I_ext_I, a_I, theta_I)) / tau_I y = np.array([drEdt, drIdt]) return y x0 = np.array([rE_init, rI_init]) x_fp = opt.root(my_WCr, x0).x return x_fp def check_fp(pars, x_fp, mytol=1e-6): """ Verify (drE/dt)^2 + (drI/dt)^2< mytol Args: pars : Parameter dictionary fp : value of fixed point mytol : tolerance, default as 10^{-6} Returns : Whether it is a correct fixed point: True/False """ drEdt, drIdt = EIderivs(x_fp[0], x_fp[1], **pars) return drEdt**2 + drIdt**2 < mytol ###Output _____no_output_____ ###Markdown Exercise 5: Find the fixed points of the Wilson-Cowan modelFrom the above nullclines, we notice that the system features three fixed points with the parameters we used. To find their coordinates, we need to choose proper initial value to give to the `opt.root` function inside of the function `my_fp` we just defined, since the algorithm can only find fixed points in the vicinity of the initial value. In this exercise, you will use the function `my_fp` to find each of the fixed points by varying the initial values. Note that you can choose the values near the intersections of the nullclines as the initial values to calculate the fixed points. ###Code pars = default_pars() ###################################################################### # TODO: Provide initial values to calculate the fixed points # Check if x_fp's are the correct with the function check_fp(x_fp) # Hint: vary different initial values to find the correct fixed points # ###################################################################### # my_plot_nullcline(pars) # Find the first fixed point # x_fp_1 = my_fp(pars, ..., ...) # if check_fp(pars, x_fp_1): # plot_fp(x_fp_1) # Find the second fixed point # x_fp_2 = my_fp(pars, ..., ...) # if check_fp(pars, x_fp_2): # plot_fp(x_fp_2) # Find the third fixed point # x_fp_3 = my_fp(pars, ..., ...) # if check_fp(pars, x_fp_3): # plot_fp(x_fp_3) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_0dd7ba5a.py)*Example output:* Stability of a fixed point and eigenvalues of the Jacobian MatrixFirst, let's first rewrite the system $1$ as:\begin{align}&\frac{dr_E}{dt} = G_E(r_E,r_I)\\[0.5mm]&\frac{dr_I}{dt} = G_I(r_E,r_I)\end{align}where\begin{align}&G_E(r_E,r_I) = \frac{1}{\tau_E} [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a,\theta)]\\[1mm]&G_I(r_E,r_I) = \frac{1}{\tau_I} [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a,\theta)]\end{align}By definition, $\displaystyle\frac{dr_E}{dt}=0$ and $\displaystyle\frac{dr_I}{dt}=0$ at each fixed point. Therefore, if the initial state is exactly at the fixed point, the state of the system will not change as time evolves. However, if the initial state deviates slightly from the fixed point, there are two possibilitiesthe trajectory will be attracted back to the 1. 
The trajectory will be attracted back to the fixed point2. The trajectory will diverge from the fixed point. These two possibilities define the type of fixed point, i.e., stable or unstable. Similar to the 1D system studied in the previous tutorial, the stability of a fixed point $(r_E^*, r_I^*)$ can be determined by linearizing the dynamics of the system (can you figure out how?). The linearization will yield a matrix of first-order derivatives called the Jacobian matrix: \begin{equation} J= \left[ {\begin{array}{cc} \displaystyle{\frac{\partial}{\partial r_E}}G_E(r_E^*, r_I^*) & \displaystyle{\frac{\partial}{\partial r_I}}G_E(r_E^*, r_I^*)\\[1mm] \displaystyle\frac{\partial}{\partial r_E} G_I(r_E^*, r_I^*) & \displaystyle\frac{\partial}{\partial r_I}G_I(r_E^*, r_I^*) \\ \end{array} } \right] \quad (7)\end{equation}\\The eigenvalues of the Jacobian matrix calculated at the fixed point will determine whether it is a stable or unstable fixed point.\\We can now compute the derivatives needed to build the Jacobian matrix. Using the chain and product rules the derivatives for the excitatory population are given by:\\\begin{align}&\frac{\partial}{\partial r_E} G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-1 + w_{EE} F_E'(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E;\alpha_E, \theta_E)] \\[1mm]&\frac{\partial}{\partial r_I} G_E(r_E^*, r_I^*)= \frac{1}{\tau_E} [-w_{EI} F_E'(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E;\alpha_E, \theta_E)]\end{align}\\The same applies to the inhibitory population. Exercise 6: Compute the Jacobian Matrix for the Wilson-Cowan modelHere, you can use `dF(x,a,theta)` defined in the `Helper functions` to calculate the derivative of the F-I curve. ###Code def get_eig_Jacobian(fp, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Compute eigenvalues of the Wilson-Cowan Jacobian matrix at fixed point.""" # Initialization rE, rI = fp J = np.zeros((2, 2)) ########################################################################### # TODO for students: compute J and disable the error raise NotImplementedError("Student excercise: compute the Jacobian matrix") ########################################################################### # Compute the four elements of the Jacobian matrix J[0, 0] = ... J[0, 1] = ... J[1, 0] = ... J[1, 1] = ... # Compute and return the eigenvalues evals = np.linalg.eig(J)[0] return evals # Uncomment below to test your function when you get the correct fixed point # eig_1 = get_eig_Jacobian(x_fp_1, **pars) # eig_2 = get_eig_Jacobian(x_fp_2, **pars) # eig_3 = get_eig_Jacobian(x_fp_3, **pars) # print(eig_1, 'Stable point') # print(eig_2, 'Unstable point') # print(eig_3, 'Stable point') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_e83cfc05.py) As is evident, the stable fixed points correspond to the negative eigenvalues, while unstable point corresponds to at least one positive eigenvalue. The sign of the eigenvalues is determined by the connectivity (interaction) between excitatory and inhibitory populations. Below we investigate the effect of $w_{EE}$ on the nullclines and the eigenvalues of the dynamical system. \* _Critical change is referred to as **pitchfork bifurcation**_. Effect of `wEE` on the nullclines and the eigenvalues ###Code # @title # @markdown Make sure you execute this cell to see the plot! 
eig_1_M = [] eig_2_M = [] eig_3_M = [] pars = default_pars() wEE_grid = np.linspace(6, 10, 40) my_thre = 7.9 for wEE in wEE_grid: x_fp_1 = [0., 0.] x_fp_2 = [.4, .1] x_fp_3 = [.8, .1] pars['wEE'] = wEE if wEE < my_thre: x_fp_1 = my_fp(pars, x_fp_1[0], x_fp_1[1]) eig_1 = get_eig_Jacobian(x_fp_1, **pars) eig_1_M.append(np.max(np.real(eig_1))) else: x_fp_1 = my_fp(pars, x_fp_1[0], x_fp_1[1]) eig_1 = get_eig_Jacobian(x_fp_1, **pars) eig_1_M.append(np.max(np.real(eig_1))) x_fp_2 = my_fp(pars, x_fp_2[0], x_fp_2[1]) eig_2 = get_eig_Jacobian(x_fp_2, **pars) eig_2_M.append(np.max(np.real(eig_2))) x_fp_3 = my_fp(pars, x_fp_3[0], x_fp_3[1]) eig_3 = get_eig_Jacobian(x_fp_3, **pars) eig_3_M.append(np.max(np.real(eig_3))) eig_1_M = np.array(eig_1_M) eig_2_M = np.array(eig_2_M) eig_3_M = np.array(eig_3_M) plt.figure(figsize=(8, 5.5)) plt.plot(wEE_grid, eig_1_M, 'ko', alpha=0.5) plt.plot(wEE_grid[wEE_grid >= my_thre], eig_2_M, 'bo', alpha=0.5) plt.plot(wEE_grid[wEE_grid >= my_thre], eig_3_M, 'ro', alpha=0.5) plt.xlabel(r'$w_{\mathrm{EE}}$') plt.ylabel('maximum real part of eigenvalue') plt.show() ###Output _____no_output_____ ###Markdown Interactive Demo: Nullclines position in the phase plane changes with parameter valuesIn this interactive widget, we will explore how the nullclines move for different values of the parameter $w_{EE}$. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def plot_nullcline_diffwEE(wEE): """ plot nullclines for different values of wEE """ pars = default_pars(wEE=wEE) # plot the E, I nullclines Exc_null_rE = np.linspace(-0.01, .96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, .8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.figure(figsize=(12, 5.5)) plt.subplot(121) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.subplot(222) pars['rE_init'], pars['rI_init'] = 0.2, 0.2 rE, rI = simulate_wc(**pars) plt.plot(pars['range_t'], rE, 'b', label='E population', clip_on=False) plt.plot(pars['range_t'], rI, 'r', label='I population', clip_on=False) plt.ylabel('Activity') plt.legend(loc='best') plt.ylim(-0.05, 1.05) plt.title('E/I activity\nfor different initial conditions', fontweight='bold') plt.subplot(224) pars['rE_init'], pars['rI_init'] = 0.4, 0.1 rE, rI = simulate_wc(**pars) plt.plot(pars['range_t'], rE, 'b', label='E population', clip_on=False) plt.plot(pars['range_t'], rI, 'r', label='I population', clip_on=False) plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.ylim(-0.05, 1.05) plt.tight_layout() plt.show() _ = widgets.interact(plot_nullcline_diffwEE, wEE=(6., 10., .01)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_d4eb0391.py) We can also investigate the effect of different $w_{EI}$, $w_{IE}$, $w_{II}$, $\tau_{E}$, $\tau_{I}$, and $I_{E}^{\text{ext}}$ on the stability of fixed points. In addition, we can also consider the perturbation of the parameters of the gain curve $F(\cdot)$. Limit cycle - OscillationsFor some values of interaction terms ($w_{EE}, w_{IE}, w_{EI}, w_{II}$ the eigenvalues can become complex. When at least one pair of eigenvalues is complex, oscillations arise. 
The stability of oscillations is determined by the real part of the eigenvalues (+ve real part oscillations will grow, -ve real part oscillations will die out). The size of the complex part determines the frequency of oscillations. For instance, if we use a different set of parameters, $w_{EE}=6.4$, $w_{EI}=4.8$, $w_{IE}=6.$, $w_{II}=1.2$, and $I_{E}^{\text{ext}}=0.8$, then we shall observe that the E and I population activity start to oscillate! Please execute the cell below to check the oscillatory behavior. ###Code # @title # @markdown Make sure you execute this cell to see the oscillations! pars = default_pars(T=100.) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 pars['rE_init'], pars['rI_init'] = 0.25, 0.25 rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() ###Output _____no_output_____ ###Markdown Exercise 7: Plot the phase planeWe can also understand the oscillations of the population behavior using the phase plane. By plotting a set of trajectories with different initial states, we can see that these trajectories will move in a circle instead of converging to a fixed point. This circle is called "limit cycle" and shows the periodic oscillations of the $E$ and $I$ population behavior under some conditions.Try to plot the phase plane using the previously defined functions. ###Code pars = default_pars(T=100.) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 plt.figure(figsize=(7, 5.5)) my_plot_nullcline(pars) ############################################################################### # TODO for students: plot phase plane: nullclines, trajectories, fixed point # ############################################################################### # Find the correct fixed point # x_fp_1 = my_fp(pars, ..., ...) # if check_fp(pars, x_fp_1): # plot_fp(x_fp_1, position=(0, 0), rotation=40) my_plot_trajectories(pars, 0.2, 3, 'Sample trajectories \nwith different initial values') my_plot_vector(pars) plt.legend(loc=[1.01, 0.7]) plt.xlim(-0.05, 1.01) plt.ylim(-0.05, 0.65) plt.show() ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_03c5c8dd.py)*Example output:* Interactive Demo: Limit cycle and oscillations.From the above examples, the change of model parameters changes the shape of the nullclines and, accordingly, the behavior of the $E$ and $I$ populations from steady fixed points to oscillations. However, the shape of the nullclines is unable to fully determine the behavior of the network. The vector field also matters. To demonstrate this, here, we will investigate the effect of time constants on the population behavior. By changing the inhibitory time constant $\tau_I$, the nullclines do not change, but the network behavior changes substantially from steady state to oscillations with different frequencies. Such a dramatic change in the system behavior is referred to as a **bifurcation**. \\Please execute the code below to check this out. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def time_constant_effect(tau_i=0.5): pars = default_pars(T=100.) 
pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 pars['tau_I'] = tau_i Exc_null_rE = np.linspace(0.0, .9, 100) Inh_null_rI = np.linspace(0.0, .6, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.figure(figsize=(12.5, 5.5)) plt.subplot(121) # nullclines plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline', zorder=2) plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline', zorder=2) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') # fixed point x_fp_1 = my_fp(pars, 0.5, 0.5) plt.plot(x_fp_1[0], x_fp_1[1], 'ko', zorder=2) eig_1 = get_eig_Jacobian(x_fp_1, **pars) # trajectories for ie in range(5): for ii in range(5): pars['rE_init'], pars['rI_init'] = 0.1 * ie, 0.1 * ii rE_tj, rI_tj = simulate_wc(**pars) plt.plot(rE_tj, rI_tj, 'k', alpha=0.3, zorder=1) # vector field EI_grid_E = np.linspace(0., 1.0, 20) EI_grid_I = np.linspace(0., 0.6, 20) rE, rI = np.meshgrid(EI_grid_E, EI_grid_I) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = 2 plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=10, facecolor='c') plt.title(r'$\tau_I=$'+'%.1f ms' % tau_i) plt.subplot(122) # sample E/I trajectories pars['rE_init'], pars['rI_init'] = 0.25, 0.25 rE, rI = simulate_wc(**pars) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.title(r'$\tau_I=$'+'%.1f ms' % tau_i) plt.legend(loc='best') plt.tight_layout() plt.show() _ = widgets.interact(time_constant_effect, tau_i=(0.2, 3, .1)) ###Output _____no_output_____ ###Markdown Both $\tau_E$ and $\tau_I$ feature in the Jacobian of the two population network (eq 7). So here is seems that the by increasing $\tau_I$ the eigenvalues corresponding to the stable fixed point are becoming complex.Intuitively, when $\tau_I$ is smaller, inhibitory activity changes faster than excitatory activity. As inhibition exceeds above a certain value, high inhibition inhibits excitatory population but that in turns means that inhibitory population gets smaller input (from the exc. connection). So inhibition decreases rapidly. But this means that excitation recovers -- and so on ... 
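A quick numerical check of this claim is sketched below. It assumes you have completed `get_eig_Jacobian` (Exercise 6) and executed the cell that defines `my_fp`; it sweeps $\tau_I$ for the oscillatory parameter set and prints the eigenvalues of the Jacobian at the fixed point near $(0.5, 0.5)$. As $\tau_I$ increases you should see nonzero imaginary parts appear, consistent with the onset of oscillations described above.

```python
# Sketch: how the Jacobian eigenvalues at the fixed point change with tau_I.
# Requires the completed `get_eig_Jacobian` (Exercise 6) and `my_fp` from above.
import numpy as np

for tau_i in [0.5, 1.0, 1.5, 2.0, 3.0]:
    pars = default_pars(T=100.)
    pars['wEE'], pars['wEI'] = 6.4, 4.8
    pars['wIE'], pars['wII'] = 6.0, 1.2
    pars['I_ext_E'] = 0.8
    pars['tau_I'] = tau_i
    x_fp = my_fp(pars, 0.5, 0.5)          # fixed point near (0.5, 0.5)
    evals = get_eig_Jacobian(x_fp, **pars)
    print(f"tau_I = {tau_i:.1f} ms: eigenvalues = {np.round(evals, 3)}")
```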
--- Bonus 2: Inhibition-stabilized network (ISN)As described above, one can obtain the linear approximation around the fixed point as \begin{equation} \frac{d}{dr} \vec{R}= \left[ {\begin{array}{cc} \displaystyle{\frac{\partial G_E}{\partial r_E}} & \displaystyle{\frac{\partial G_E}{\partial r_I}}\\[1mm] \displaystyle\frac{\partial G_I}{\partial r_E} & \displaystyle\frac{\partial G_I}{\partial r_I} \\ \end{array} } \right] \vec{R},\end{equation}\\where $\vec{R} = [r_E, r_I]^{\rm T}$ is the vector of the E/I activity.Let's direct our attention to the excitatory subpopulation which follows:\\\begin{equation}\frac{dr_E}{dt} = \frac{\partial G_E}{\partial r_E}\cdot r_E + \frac{\partial G_E}{\partial r_I} \cdot r_I\end{equation}\\Recall that, around fixed point $(r_E^*, r_I^*)$:\\\begin{align}&\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-1 + w_{EE} F'_{E}(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E; \alpha_E, \theta_E)] \qquad (8)\\[1mm]&\frac{\partial}{\partial r_I}G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-w_{EI} F'_{E}(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E; \alpha_E, \theta_E)] \qquad (9)\\[1mm]&\frac{\partial}{\partial r_E}G_I(r_E^*, r_I^*) = \frac{1}{\tau_I} [w_{IE} F'_{I}(w_{IE}r_E^* -w_{II}r_I^* + I^{\text{ext}}_I; \alpha_I, \theta_I)] \qquad (10)\\[1mm]&\frac{\partial}{\partial r_I}G_I(r_E^*, r_I^*) = \frac{1}{\tau_I} [-1-w_{II} F'_{I}(w_{IE}r_E^* -w_{II}r_I^* + I^{\text{ext}}_I; \alpha_I, \theta_I)] \qquad (11)\end{align} \\From Equation. (8), it is clear that $\displaystyle{\frac{\partial G_E}{\partial r_I}}$ is negative since the $\displaystyle{\frac{dF}{dx}}$ is always positive. It can be understood by that the recurrent inhibition from the inhibitory activity ($I$) can reduce the excitatory ($E$) activity. However, as described above, $\displaystyle{\frac{\partial G_E}{\partial r_E}}$ has negative terms related to the "leak" effect, and positive term related to the recurrent excitation. Therefore, it leads to two different regimes:- $\displaystyle{\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*)}<0$, **noninhibition-stabilizednetwork (non-ISN) regime**- $\displaystyle{\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*)}>0$, **inhibition-stabilizednetwork (ISN) regime** Exercise 8: Compute $\displaystyle{\frac{\partial G_E}{\partial r_E}}$Implemet the function to calculate the $\displaystyle{\frac{\partial G_E}{\partial r_E}}$ for the default parameters, and the parameters of the limit cycle case. ###Code def get_dGdE(fp, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars): """ Simulate the Wilson-Cowan equations Args: fp : fixed point (E, I), array Other arguments are parameters of the Wilson-Cowan model Returns: J : the 2x2 Jacobian matrix """ rE, rI = fp ########################################################################## # TODO for students: compute dGdrE and disable the error raise NotImplementedError("Student excercise: compute the dG/dE, Eq. (13)") ########################################################################## # Calculate the J[0,0] dGdrE = ... 
return dGdrE # Uncomment below to test your function pars = default_pars() x_fp_1 = my_fp(pars, 0.1, 0.1) x_fp_2 = my_fp(pars, 0.3, 0.3) x_fp_3 = my_fp(pars, 0.8, 0.6) # dGdrE1 = get_dGdE(x_fp_1, **pars) # dGdrE2 = get_dGdE(x_fp_2, **pars) # dGdrE3 = get_dGdE(x_fp_3, **pars) print(f'For the default case:') # print(f'dG/drE(fp1) = {dGdrE1:.3f}') # print(f'dG/drE(fp2) = {dGdrE2:.3f}') # print(f'dG/drE(fp3) = {dGdrE3:.3f}') print('\n') pars = default_pars(wEE=6.4, wEI=4.8, wIE=6.0, wII=1.2, I_ext_E=0.8) x_fp_lc = my_fp(pars, 0.8, 0.8) # dGdrE_lc = get_dGdE(x_fp_lc, **pars) print('For the limit cycle case:') # print(f'dG/drE(fp_lc) = {dGdrE_lc:.3f}') ###Output _____no_output_____ ###Markdown **SAMPLE OUTPUT**```For the default case:dG/drE(fp1) = -0.650dG/drE(fp2) = 1.519dG/drE(fp3) = -0.706For the limit cycle case:dG/drE(fp_lc) = 0.837``` [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_1ff7a08c.py) Nullcline analysis of the ISNRecall that the E nullcline follows\\\begin{align}r_E = F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E). \end{align}\\That is, the firing rate $r_E$ can be a function of $r_I$. Let's take the derivative of $r_E$ over $r_I$, and obtain\\\begin{align}&\frac{dr_E}{dr_I} = F_E' \cdot (w_{EE}\frac{dr_E}{dr_I} -w_{EI}) \iff \\&(1-F_E'w_{EE})\frac{dr_E}{dr_I} = -F_E' w_{EI} \iff \\&\frac{dr_E}{dr_I} = \frac{F_E' w_{EI}}{F_E'w_{EE}-1}.\end{align}\\That is, in the phase plane `rI-rE`-plane, we can obtain the slope along the E nullcline as\\$$\frac{dr_I}{dr_E} = \frac{F_E'w_{EE}-1}{F_E' w_{EI}} \qquad (12)$$Similarly, we can obtain the slope along the I nullcline as \\$$\frac{dr_I}{dr_E} = \frac{F_I'w_{IE}}{F_I' w_{II}+1} \qquad (13)$$\\Then, we can find that $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm I-nullcline} >0$ in Equation (13).\\However, in Equation (12), the sign of $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}$ depends on the sign of $(F_E'w_{EE}-1)$. Note that, $(F_E'w_{EE}-1)$ is the same as what we show above (Equation (8)). Therefore, we can have the following results:- $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}<0$, **noninhibition-stabilizednetwork (non-ISN) regime**- $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}>0$, **inhibition-stabilizednetwork (ISN) regime**\\In addition, it is important to point out the following two conclusions: \\**Conclusion 1:** The stability of a fixed point can determine the relationship between the slopes Equations (12) and (13). 
As discussed above, the fixed point is stable when the Jacobian matrix ($J$ in Equation (7)) has two eigenvalues with a negative real part, which indicates a positive determinant of $J$, i.e., $\text{det}(J)>0$.From the Jacobian matrix definition and from Equations (8-11), we can obtain:$ J= \left[ {\begin{array}{cc} \displaystyle{\frac{1}{\tau_E}(w_{EE}F_E'-1)} & \displaystyle{-\frac{1}{\tau_E}w_{EI}F_E'}\\[1mm] \displaystyle {\frac{1}{\tau_I}w_{IE}F_I'}& \displaystyle {\frac{1}{\tau_I}(-w_{II}F_I'-1)} \\ \end{array} } \right] $\\Note that, if we let \\$ T= \left[ {\begin{array}{cc} \displaystyle{\tau_E} & \displaystyle{0}\\[1mm] \displaystyle 0& \displaystyle \tau_I \\ \end{array} } \right] $, $ F= \left[ {\begin{array}{cc} \displaystyle{F_E'} & \displaystyle{0}\\[1mm] \displaystyle 0& \displaystyle F_I' \\ \end{array} } \right] $, and $ W= \left[ {\begin{array}{cc} \displaystyle{w_{EE}} & \displaystyle{-w_{EI}}\\[1mm] \displaystyle w_{IE}& \displaystyle -w_{II} \\ \end{array} } \right] $\\then, using matrix notation, $J=T^{-1}(F W - I)$ where $I$ is the identity matrix, i.e., $I = \begin{bmatrix} 1 & 0 \\0 & 1 \end{bmatrix}.$ \\Therefore, $\det{(J)}=\det{(T^{-1}(F W - I))}=(\det{(T^{-1})})(\det{(F W - I)}).$Since $\det{(T^{-1})}>0$, as time constants are positive by definition, the sign of $\det{(J)}$ is the same as the sign of $\det{(F W - I)}$, and so$$\det{(FW - I)} = (F_E' w_{EI})(F_I'w_{IE}) - (F_I' w_{II} + 1)(F_E'w_{EE} - 1) > 0.$$\\Then, combining this with Equations (12) and (13), we can obtain$$\frac{\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm I-nullcline}}{\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}} > 1. $$Therefore, at the stable fixed point, I nullcline has a steeper slope than the E nullcline. **Conclusion 2:** Effect of adding input to the inhibitory population.While adding the input $\delta I^{\rm ext}_I$ into the inhibitory population, we can find that the E nullcline (Equation (5)) stays the same, while the I nullcline has a pure left shift: the original I nullcline equation,\\\begin{equation}r_I = F_I(w_{IE}r_E-w_{II}r_I + I^{\text{ext}}_I ; \alpha_I, \theta_I)\end{equation}\\remains true if we take $I^{\text{ext}}_I \rightarrow I^{\text{ext}}_I +\delta I^{\rm ext}_I$ and $r_E\rightarrow r_E'=r_E-\frac{\delta I^{\rm ext}_I}{w_{IE}}$ to obtain\\\begin{equation}r_I = F_I(w_{IE}r_E'-w_{II}r_I + I^{\text{ext}}_I +\delta I^{\rm ext}_I; \alpha_I, \theta_I)\end{equation}\\Putting these points together, we obtain the phase plane pictures shown below. After adding input to the inhibitory population, it can be seen in the trajectories above and the phase plane below that, in an **ISN**, $r_I$ will increase first but then decay to the new fixed point in which both $r_I$ and $r_E$ are decreased compared to the original fixed point. However, by adding $\delta I^{\rm ext}_I$ into a **non-ISN**, $r_I$ will increase while $r_E$ will decrease. Interactive Demo: Nullclines of Example **ISN** and **non-ISN**In this interactive widget, we inject excitatory ($I^{\text{ext}}_I>0$) or inhibitory ($I^{\text{ext}}_I<0$) drive into the inhibitory population when the system is at its equilibrium (with parameters $w_{EE}=6.4$, $w_{EI}=4.8$, $w_{IE}=6.$, $w_{II}=1.2$, $I_{E}^{\text{ext}}=0.8$, $\tau_I = 0.8$, and $I^{\text{ext}}_I=0$). How does the firing rate of the $I$ population changes with excitatory vs inhibitory drive into the inhibitory population? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! 
pars = default_pars(T=50., dt=0.1) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 pars['tau_I'] = 0.8 def ISN_I_perturb(dI=0.1): Lt = len(pars['range_t']) pars['I_ext_I'] = np.zeros(Lt) pars['I_ext_I'][int(Lt / 2):] = dI pars['rE_init'], pars['rI_init'] = 0.6, 0.26 rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 1.5)) plt.plot(pars['range_t'], pars['I_ext_I'], 'k') plt.xlabel('t (ms)') plt.ylabel(r'$I_I^{\mathrm{ext}}$') plt.ylim(pars['I_ext_I'].min() - 0.01, pars['I_ext_I'].max() + 0.01) plt.show() plt.figure(figsize=(8, 4.5)) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rE[int(Lt / 2) - 1] * np.ones(Lt), 'b--') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.plot(pars['range_t'], rI[int(Lt / 2) - 1] * np.ones(Lt), 'r--') plt.ylim(0, 0.8) plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() _ = widgets.interact(ISN_I_perturb, dI=(-0.2, 0.21, .05)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_cec4906e.py) --- Bonus 3: Fixed point and working memory The input into the neurons measured in the experiment is often very noisy ([links](http://www.scholarpedia.org/article/Stochastic_dynamical_systems)). Here, the noisy synaptic input current is modeled as an Ornstein-Uhlenbeck (OU)process, which has been discussed several times in the previous tutorials. ###Code # @markdown Make sure you execute this cell to enable the function my_OU and plot the input current! def my_OU(pars, sig, myseed=False): """ Expects: pars : parameter dictionary sig : noise amplitute myseed : random seed. int or boolean Returns: I : Ornstein-Uhlenbeck input current """ # Retrieve simulation parameters dt, range_t = pars['dt'], pars['range_t'] Lt = range_t.size tau_ou = pars['tau_ou'] # [ms] # set random seed if myseed: np.random.seed(seed=myseed) else: np.random.seed() # Initialize noise = np.random.randn(Lt) I_ou = np.zeros(Lt) I_ou[0] = noise[0] * sig # generate OU for it in range(Lt-1): I_ou[it+1] = (I_ou[it] + dt / tau_ou * (0. - I_ou[it]) + np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1]) return I_ou pars = default_pars(T=50) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 I_ou = my_OU(pars, sig=sig_ou, myseed=2020) plt.figure(figsize=(8, 5.5)) plt.plot(pars['range_t'], I_ou, 'b') plt.xlabel('Time (ms)') plt.ylabel(r'$I_{\mathrm{OU}}$') plt.show() ###Output _____no_output_____ ###Markdown With the default parameters, the system fluctuates around a resting state with the noisy input. ###Code # @markdown Execute this cell to plot activity with noisy input current pars = default_pars(T=100) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 pars['I_ext_E'] = my_OU(pars, sig=sig_ou, myseed=20201) pars['I_ext_I'] = my_OU(pars, sig=sig_ou, myseed=20202) pars['rE_init'], pars['rI_init'] = 0.1, 0.1 rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) ax = plt.subplot(111) ax.plot(pars['range_t'], rE, 'b', label='E population') ax.plot(pars['range_t'], rI, 'r', label='I population') ax.set_xlabel('t (ms)') ax.set_ylabel('Activity') ax.legend(loc='best') plt.show() ###Output _____no_output_____ ###Markdown Interactive Demo: Short pulse induced persistent activityThen, let's use a brief 10-ms positive current to the E population when the system is at its equilibrium. 
When this amplitude (SE below) is sufficiently large, a persistent activity is produced that outlasts the transient input. What is the firing rate of the persistent activity, and what is the critical input strength? Try to understand the phenomena from the above phase-plane analysis. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def my_inject(pars, t_start, t_lag=10.): """ Expects: pars : parameter dictionary t_start : pulse starts [ms] t_lag : pulse lasts [ms] Returns: I : extra pulse time """ # Retrieve simulation parameters dt, range_t = pars['dt'], pars['range_t'] Lt = range_t.size # Initialize I = np.zeros(Lt) # pulse timing N_start = int(t_start / dt) N_lag = int(t_lag / dt) I[N_start:N_start + N_lag] = 1. return I pars = default_pars(T=100) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 pars['I_ext_I'] = my_OU(pars, sig=sig_ou, myseed=2021) pars['rE_init'], pars['rI_init'] = 0.1, 0.1 # pulse I_pulse = my_inject(pars, t_start=20., t_lag=10.) L_pulse = sum(I_pulse > 0.) def WC_with_pulse(SE=0.): pars['I_ext_E'] = my_OU(pars, sig=sig_ou, myseed=2022) pars['I_ext_E'] += SE * I_pulse rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) ax = plt.subplot(111) ax.plot(pars['range_t'], rE, 'b', label='E population') ax.plot(pars['range_t'], rI, 'r', label='I population') ax.plot(pars['range_t'][I_pulse > 0.], 1.0*np.ones(L_pulse), 'r', lw=3.) ax.text(25, 1.05, 'stimulus on', horizontalalignment='center', verticalalignment='bottom') ax.set_ylim(-0.03, 1.2) ax.set_xlabel('t (ms)') ax.set_ylabel('Activity') ax.legend(loc='best') plt.show() _ = widgets.interact(WC_with_pulse, SE=(0.0, 1.0, .05)) ###Output _____no_output_____ ###Markdown Tutorial 2: Wilson-Cowan Model**Week 2, Day 4: Dynamic Networks****By Neuromatch Academy**__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom, Siddharth Suresh **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial Objectives*Estimated timing of tutorial: 1 hour, 35 minutes*In the previous tutorial, you became familiar with a neuronal network consisting of only an excitatory population. Here, we extend the approach we used to include both excitatory and inhibitory neuronal populations in our network. 
A simple, yet powerful model to study the dynamics of two interacting populations of excitatory and inhibitory neurons, is the so-called **Wilson-Cowan** rate model, which will be the subject of this tutorial.The objectives of this tutorial are to:- Write the **Wilson-Cowan** equations for the firing rate dynamics of a 2D system composed of an excitatory (E) and an inhibitory (I) population of neurons- Simulate the dynamics of the system, i.e., Wilson-Cowan model.- Plot the frequency-current (F-I) curves for both populations (i.e., E and I).- Visualize and inspect the behavior of the system using **phase plane analysis**, **vector fields**, and **nullclines**.Bonus steps:- Find and plot the **fixed points** of the Wilson-Cowan model.- Investigate the stability of the Wilson-Cowan model by linearizing its dynamics and examining the **Jacobian matrix**.- Learn how the Wilson-Cowan model can reach an oscillatory state.Bonus steps (applications):- Visualize the behavior of an Inhibition-stabilized network.- Simulate working memory using the Wilson-Cowan model.\\Reference paper:_[Wilson H and Cowan J (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal 12](https://doi.org/10.1016/S0006-3495(72)86068-5)_ ###Code # @title Tutorial slides # @markdown These are the slides for the videos in all tutorials today from IPython.display import IFrame IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/nvuty/?direct%26mode=render%26action=download%26mode=render", width=854, height=480) ###Output _____no_output_____ ###Markdown --- Setup ###Code # Imports import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt # root-finding algorithm # @title Figure Settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # @title Plotting Functions def plot_FI_inverse(x, a, theta): f, ax = plt.subplots() ax.plot(x, F_inv(x, a=a, theta=theta)) ax.set(xlabel="$x$", ylabel="$F^{-1}(x)$") def plot_FI_EI(x, FI_exc, FI_inh): plt.figure() plt.plot(x, FI_exc, 'b', label='E population') plt.plot(x, FI_inh, 'r', label='I population') plt.legend(loc='lower right') plt.xlabel('x (a.u.)') plt.ylabel('F(x)') plt.show() def my_test_plot(t, rE1, rI1, rE2, rI2): plt.figure() ax1 = plt.subplot(211) ax1.plot(pars['range_t'], rE1, 'b', label='E population') ax1.plot(pars['range_t'], rI1, 'r', label='I population') ax1.set_ylabel('Activity') ax1.legend(loc='best') ax2 = plt.subplot(212, sharex=ax1, sharey=ax1) ax2.plot(pars['range_t'], rE2, 'b', label='E population') ax2.plot(pars['range_t'], rI2, 'r', label='I population') ax2.set_xlabel('t (ms)') ax2.set_ylabel('Activity') ax2.legend(loc='best') plt.tight_layout() plt.show() def plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI): plt.figure() plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.show() def my_plot_nullcline(pars): Exc_null_rE = np.linspace(-0.01, 0.96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, 0.8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') 
def my_plot_vector(pars, my_n_skip=2, myscale=5): EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = my_n_skip plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=myscale, facecolor='c') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectory(pars, mycolor, x_init, mylabel): pars = pars.copy() pars['rE_init'], pars['rI_init'] = x_init[0], x_init[1] rE_tj, rI_tj = simulate_wc(**pars) plt.plot(rE_tj, rI_tj, color=mycolor, label=mylabel) plt.plot(x_init[0], x_init[1], 'o', color=mycolor, ms=8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectories(pars, dx, n, mylabel): """ Solve for I along the E_grid from dE/dt = 0. Expects: pars : Parameter dictionary dx : increment of initial values n : n*n trjectories mylabel : label for legend Returns: figure of trajectory """ pars = pars.copy() for ie in range(n): for ii in range(n): pars['rE_init'], pars['rI_init'] = dx * ie, dx * ii rE_tj, rI_tj = simulate_wc(**pars) if (ie == n-1) & (ii == n-1): plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8, label=mylabel) else: plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def plot_complete_analysis(pars): plt.figure(figsize=(7.7, 6.)) # plot example trajectories my_plot_trajectories(pars, 0.2, 6, 'Sample trajectories \nfor different init. conditions') my_plot_trajectory(pars, 'orange', [0.6, 0.8], 'Sample trajectory for \nlow activity') my_plot_trajectory(pars, 'm', [0.6, 0.6], 'Sample trajectory for \nhigh activity') # plot nullclines my_plot_nullcline(pars) # plot vector field EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = 2 plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=5., facecolor='c') plt.legend(loc=[1.02, 0.57], handlelength=1) plt.show() def plot_fp(x_fp, position=(0.02, 0.1), rotation=0): plt.plot(x_fp[0], x_fp[1], 'ko', ms=8) plt.text(x_fp[0] + position[0], x_fp[1] + position[1], f'Fixed Point1=\n({x_fp[0]:.3f}, {x_fp[1]:.3f})', horizontalalignment='center', verticalalignment='bottom', rotation=rotation) # @title Helper Functions def default_pars(**kwargs): pars = {} # Excitatory parameters pars['tau_E'] = 1. # Timescale of the E population [ms] pars['a_E'] = 1.2 # Gain of the E population pars['theta_E'] = 2.8 # Threshold of the E population # Inhibitory parameters pars['tau_I'] = 2.0 # Timescale of the I population [ms] pars['a_I'] = 1.0 # Gain of the I population pars['theta_I'] = 4.0 # Threshold of the I population # Connection strength pars['wEE'] = 9. # E to E pars['wEI'] = 4. # I to E pars['wIE'] = 13. # E to I pars['wII'] = 11. # I to I # External input pars['I_ext_E'] = 0. pars['I_ext_I'] = 0. # simulation parameters pars['T'] = 50. 
# Total duration of simulation [ms] pars['dt'] = .1 # Simulation time step [ms] pars['rE_init'] = 0.2 # Initial value of E pars['rI_init'] = 0.2 # Initial value of I # External parameters if any for k in kwargs: pars[k] = kwargs[k] # Vector of discretized time points [ms] pars['range_t'] = np.arange(0, pars['T'], pars['dt']) return pars def F(x, a, theta): """ Population activation function, F-I curve Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: f : the population activation response f(x) for input x """ # add the expression of f = F(x) f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1 return f def dF(x, a, theta): """ Derivative of the population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: dFdx : Derivative of the population activation function. """ dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2 return dFdx ###Output _____no_output_____ ###Markdown The helper functions included:- Parameter dictionary: `default_pars(**kwargs)`. You can use: - `pars = default_pars()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars(T=T_sim, dt=time_step)` to set a different simulation time and time step - After `pars = default_pars()`, use `par['New_para'] = value` to add a new parameter with its value - Pass to functions that accept individual parameters with `func(**pars)`- F-I curve: `F(x, a, theta)`- Derivative of the F-I curve: `dF(x, a, theta)` --- Section 1: Wilson-Cowan model of excitatory and inhibitory populations ###Code # @title Video 1: Phase analysis of the Wilson-Cowan E-I model from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1CD4y1m7dK", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="GCpQmh45crM", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown This video explains how to model a network with interacting populations of excitatory and inhibitory neurons (the Wilson-Cowan model). It shows how to solve the network activity vs. time and introduces the phase plane in two dimensions. Section 1.1: Mathematical description of the WC model*Estimated timing to here from start of tutorial: 12 min* Click here for text recap of relevant part of video Many of the rich dynamics recorded in the brain are generated by the interaction of excitatory and inhibitory subtype neurons. Here, similar to what we did in the previous tutorial, we will model two coupled populations of E and I neurons (**Wilson-Cowan** model). 
We can write two coupled differential equations, each representing the dynamics of the excitatory or inhibitory population:\begin{align}\tau_E \frac{dr_E}{dt} &= -r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)\\\tau_I \frac{dr_I}{dt} &= -r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) \qquad (1)\end{align}$r_E(t)$ represents the average activation (or firing rate) of the excitatory population at time $t$, and $r_I(t)$ the activation (or firing rate) of the inhibitory population. The parameters $\tau_E$ and $\tau_I$ control the timescales of the dynamics of each population. Connection strengths are given by: $w_{EE}$ (E $\rightarrow$ E), $w_{EI}$ (I $\rightarrow$ E), $w_{IE}$ (E $\rightarrow$ I), and $w_{II}$ (I $\rightarrow$ I). The terms $w_{EI}$ and $w_{IE}$ represent connections from inhibitory to excitatory population and vice versa, respectively. The transfer functions (or F-I curves) $F_E(x;a_E,\theta_E)$ and $F_I(x;a_I,\theta_I)$ can be different for the excitatory and the inhibitory populations. Coding Exercise 1.1: Plot out the F-I curves for the E and I populations Let's first plot out the F-I curves for the E and I populations using the helper function `F` with default parameter values. ###Code help(F) pars = default_pars() x = np.arange(0, 10, .1) print(pars['a_E'], pars['theta_E']) print(pars['a_I'], pars['theta_I']) ################################################################### # TODO for students: compute and plot the F-I curve here # # Note: aE, thetaE, aI and theta_I are in the dictionary 'pars' # #raise NotImplementedError('student exercise: compute F-I curves of excitatory and inhibitory populations') ################################################################### # Compute the F-I curve of the excitatory population FI_exc = F(x, pars['a_E'], pars['theta_E']) # Compute the F-I curve of the inhibitory population FI_inh = F(x, pars['a_I'], pars['theta_I']) # Visualize plot_FI_EI(x, FI_exc, FI_inh) ###Output 1.2 2.8 1.0 4.0 ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_043dd600.py)*Example output:* Section 1.2: Simulation scheme for the Wilson-Cowan model*Estimated timing to here from start of tutorial: 20 min*Once again, we can integrate our equations numerically. Using the Euler method, the dynamics of E and I populations can be simulated on a time-grid of stepsize $\Delta t$. The updates for the activity of the excitatory and the inhibitory populations can be written as:\begin{align}r_E[k+1] &= r_E[k] + \Delta r_E[k]\\r_I[k+1] &= r_I[k] + \Delta r_I[k] \end{align}with the increments\begin{align}\Delta r_E[k] &= \frac{\Delta t}{\tau_E}[-r_E[k] + F_E(w_{EE}r_E[k] -w_{EI}r_I[k] + I^{\text{ext}}_E[k];a_E,\theta_E)]\\\Delta r_I[k] &= \frac{\Delta t}{\tau_I}[-r_I[k] + F_I(w_{IE}r_E[k] -w_{II}r_I[k] + I^{\text{ext}}_I[k];a_I,\theta_I)] \end{align} Coding Exercise 1.2: Numerically integrate the Wilson-Cowan equationsWe will implement this numerical simulation of our equations and visualize two simulations with similar initial points.
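Before looking at the full implementation in `simulate_wc` below, here is a minimal sketch of a single Euler update (one value of $k$), written out with the increments above. It assumes the `F` and `default_pars` helpers from the setup cells.

```python
# One explicit Euler step of the Wilson-Cowan updates (warm-up sketch).
# Assumes `F` and `default_pars` from the setup cells above.
pars = default_pars()
dt = pars['dt']
rE_k, rI_k = pars['rE_init'], pars['rI_init']   # activities at step k

# increments Delta r_E[k] and Delta r_I[k]
drE = dt / pars['tau_E'] * (-rE_k + F(pars['wEE'] * rE_k - pars['wEI'] * rI_k + pars['I_ext_E'],
                                      pars['a_E'], pars['theta_E']))
drI = dt / pars['tau_I'] * (-rI_k + F(pars['wIE'] * rE_k - pars['wII'] * rI_k + pars['I_ext_I'],
                                      pars['a_I'], pars['theta_I']))

rE_next = rE_k + drE    # r_E[k+1]
rI_next = rI_k + drI    # r_I[k+1]
print(rE_next, rI_next)
```

`simulate_wc` simply repeats this update inside a loop over all time steps.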
###Code def simulate_wc(tau_E, a_E, theta_E, tau_I, a_I, theta_I, wEE, wEI, wIE, wII, I_ext_E, I_ext_I, rE_init, rI_init, dt, range_t, **other_pars): """ Simulate the Wilson-Cowan equations Args: Parameters of the Wilson-Cowan model Returns: rE, rI (arrays) : Activity of excitatory and inhibitory populations """ # Initialize activity arrays Lt = range_t.size rE = np.append(rE_init, np.zeros(Lt - 1)) rI = np.append(rI_init, np.zeros(Lt - 1)) I_ext_E = I_ext_E * np.ones(Lt) I_ext_I = I_ext_I * np.ones(Lt) # Simulate the Wilson-Cowan equations for k in range(Lt - 1): ######################################################################## # TODO for students: compute drE and drI and remove the error #raise NotImplementedError("Student exercise: compute the change in E/I") ######################################################################## # Calculate the derivative of the E population drE = dt / tau_E * (-rE[k] + F(wEE * rE[k] - wEI * rI[k] + I_ext_E[k], a_E, theta_E)) # Calculate the derivative of the I population drI = dt / tau_I * (-rI[k] + F(-wII * rI[k] + wIE * rE[k] + I_ext_I[k], a_I, theta_I)) # Update using Euler's method rE[k + 1] = rE[k] + drE rI[k + 1] = rI[k] + drI return rE, rI pars = default_pars() # Simulate first trajectory rE1, rI1 = simulate_wc(**default_pars(rE_init=.32, rI_init=.15)) # Simulate second trajectory rE2, rI2 = simulate_wc(**default_pars(rE_init=.33, rI_init=.15)) # Visualize my_test_plot(pars['range_t'], rE1, rI1, rE2, rI2) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_15eff812.py)*Example output:* The two plots above show the temporal evolution of excitatory ($r_E$, blue) and inhibitory ($r_I$, red) activity for two different sets of initial conditions. Interactive Demo 1.2: population trajectories with different initial valuesIn this interactive demo, we will simulate the Wilson-Cowan model and plot the trajectories of each population. We change the initial activity of the excitatory population.What happens to the E and I population trajectories with different initial conditions? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def plot_EI_diffinitial(rE_init=0.0): pars = default_pars(rE_init=rE_init, rI_init=.15) rE, rI = simulate_wc(**pars) plt.figure() plt.plot(pars['range_t'], rE, 'b', label='E population') plt.plot(pars['range_t'], rI, 'r', label='I population') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() _ = widgets.interact(plot_EI_diffinitial, rE_init=(0.30, 0.35, .01)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_50331264.py) Think! 1.2It is evident that the steady states of the neuronal response can be different when different initial states are chosen. Why is that? We will discuss this in the next section but try to think about it first. --- Section 2: Phase plane analysis*Estimated timing to here from start of tutorial: 45 min*Just like we used a graphical method to study the dynamics of a 1-D system in the previous tutorial, here we will learn a graphical approach called **phase plane analysis** to study the dynamics of a 2-D system like the Wilson-Cowan model. 
You have seen this before in the [pre-reqs calculus day](https://compneuro.neuromatch.io/tutorials/W0D4_Calculus/student/W0D4_Tutorial3.htmlsection-3-2-phase-plane-plot-and-nullcline) and on the [Linear Systems day](https://compneuro.neuromatch.io/tutorials/W2D2_LinearSystems/student/W2D2_Tutorial1.htmlsection-4-stream-plots)So far, we have plotted the activities of the two populations as a function of time, i.e., in the `Activity-t` plane, either the $(t, r_E(t))$ plane or the $(t, r_I(t))$ one. Instead, we can plot the two activities $r_E(t)$ and $r_I(t)$ against each other at any time point $t$. This characterization in the `rI-rE` plane $(r_I(t), r_E(t))$ is called the **phase plane**. Each line in the phase plane indicates how both $r_E$ and $r_I$ evolve with time. ###Code # @title Video 2: Nullclines and Vector Fields from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV15k4y1m7Kt", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="V2SBAK2Xf8Y", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Interactive Demo 2: From the Activity - time plane to the **$r_I$ - $r_E$** phase planeIn this demo, we will visualize the system dynamics using both the `Activity-time` and the `(rE, rI)` phase plane. The circles indicate the activities at a given time $t$, while the lines represent the evolution of the system for the entire duration of the simulation.Move the time slider to better understand how the top plot relates to the bottom plot. Does the bottom plot have explicit information about time? What information does it give us? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! 
pars = default_pars(T=10, rE_init=0.6, rI_init=0.8) rE, rI = simulate_wc(**pars) def plot_activity_phase(n_t): plt.figure(figsize=(8, 5.5)) plt.subplot(211) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.plot(pars['range_t'][n_t], rE[n_t], 'bo') plt.plot(pars['range_t'][n_t], rI[n_t], 'ro') plt.axvline(pars['range_t'][n_t], 0, 1, color='k', ls='--') plt.xlabel('t (ms)', fontsize=14) plt.ylabel('Activity', fontsize=14) plt.legend(loc='best', fontsize=14) plt.subplot(212) plt.plot(rE, rI, 'k') plt.plot(rE[n_t], rI[n_t], 'ko') plt.xlabel(r'$r_E$', fontsize=18, color='b') plt.ylabel(r'$r_I$', fontsize=18, color='r') plt.tight_layout() plt.show() _ = widgets.interact(plot_activity_phase, n_t=(0, len(pars['range_t']) - 1, 1)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_5d1fcb72.py) Section 2.1: Nullclines of the Wilson-Cowan Equations*Estimated timing to here from start of tutorial: 1 hour, 3 min*An important concept in the phase plane analysis is the "nullcline" which is defined as the set of points in the phase plane where the activity of one population (but not necessarily the other) does not change.In other words, the $E$ and $I$ nullclines of Equation $(1)$ are defined as the points where $\displaystyle{\frac{dr_E}{dt}}=0$, for the excitatory nullcline, or $\displaystyle\frac{dr_I}{dt}=0$ for the inhibitory nullcline. That is:\begin{align}-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E) &= 0 \qquad (2)\\[1mm]-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) &= 0 \qquad (3)\end{align} Coding Exercise 2.1: Compute the nullclines of the Wilson-Cowan modelIn the next exercise, we will compute and plot the nullclines of the E and I population. Along the nullcline of excitatory population Equation $2$, you can calculate the inhibitory activity by rewriting Equation $2$ into\begin{align}r_I = \frac{1}{w_{EI}}\big{[}w_{EE}r_E - F_E^{-1}(r_E; a_E,\theta_E) + I^{\text{ext}}_E \big{]}. \qquad(4)\end{align}where $F_E^{-1}(r_E; a_E,\theta_E)$ is the inverse of the excitatory transfer function (defined below). Equation $4$ defines the $r_E$ nullcline. Along the nullcline of inhibitory population Equation $3$, you can calculate the excitatory activity by rewriting Equation $3$ into \begin{align}r_E = \frac{1}{w_{IE}} \big{[} w_{II}r_I + F_I^{-1}(r_I;a_I,\theta_I) - I^{\text{ext}}_I \big{]}. \qquad (5) \end{align}shere $F_I^{-1}(r_I; a_I,\theta_I)$ is the inverse of the inhibitory transfer function (defined below). Equation $5$ defines the $I$ nullcline. Note that, when computing the nullclines with Equations 4-5, we also need to calculate the inverse of the transfer functions. 
\\The inverse of the sigmoid shaped **f-I** function that we have been using is:$$F^{-1}(x; a, \theta) = -\frac{1}{a} \ln \left[ \frac{1}{x + \displaystyle \frac{1}{1+\text{e}^{a\theta}}} - 1 \right] + \theta \qquad (6)$$The first step is to implement the inverse transfer function: ###Code def F_inv(x, a, theta): """ Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: F_inverse : value of the inverse function """ ######################################################################### # TODO for students: compute F_inverse #raise NotImplementedError("Student exercise: compute the inverse of F(x)") ######################################################################### # Calculate Finverse (ln(x) can be calculated as np.log(x)) F_inverse = -1/a *np.log(1/(x+1/(1+np.exp(a*theta)))-1)+theta return F_inverse # Set parameters pars = default_pars() x = np.linspace(1e-6, 1, 100) # Get inverse and visualize plot_FI_inverse(x, a=1, theta=3) ###Output /usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:18: RuntimeWarning: invalid value encountered in log ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_f3500f59.py)*Example output:* Now you can compute the nullclines, using Equations 4-5 (repeated here for ease of access):\begin{align}r_I = \frac{1}{w_{EI}}\big{[}w_{EE}r_E - F_E^{-1}(r_E; a_E,\theta_E) + I^{\text{ext}}_E \big{]}. \qquad(4)\end{align}\begin{align}r_E = \frac{1}{w_{IE}} \big{[} w_{II}r_I + F_I^{-1}(r_I;a_I,\theta_I) - I^{\text{ext}}_I \big{]}. \qquad (5) \end{align} ###Code def get_E_nullcline(rE, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars): """ Solve for rI along the rE from drE/dt = 0. Args: rE : response of excitatory population a_E, theta_E, wEE, wEI, I_ext_E : Wilson-Cowan excitatory parameters Other parameters are ignored Returns: rI : values of inhibitory population along the nullcline on the rE """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error #raise NotImplementedError("Student exercise: compute the E nullcline") ######################################################################### # calculate rI for E nullclines on rI rI = 1/wEI*(wEE*rE-F_inv(rE, a_E, theta_E)+I_ext_E) return rI def get_I_nullcline(rI, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """ Solve for E along the rI from dI/dt = 0. 
Args: rI : response of inhibitory population a_I, theta_I, wIE, wII, I_ext_I : Wilson-Cowan inhibitory parameters Other parameters are ignored Returns: rE : values of the excitatory population along the nullcline on the rI """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error #raise NotImplementedError("Student exercise: compute the I nullcline") ######################################################################### # calculate rE for I nullclines on rI rE = 1/wIE*(wII*rI+F_inv(rI, a_I, theta_I)-I_ext_I) return rE # Set parameters pars = default_pars() Exc_null_rE = np.linspace(-0.01, 0.96, 100) Inh_null_rI = np.linspace(-.01, 0.8, 100) # Compute nullclines Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) # Visualize plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_db10856b.py)*Example output:* Note that by definition along the blue line in the phase-plane spanned by $r_E, r_I$, $\displaystyle{\frac{dr_E(t)}{dt}} = 0$, therefore, it is called a nullcline. That is, the blue nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_E(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_E(t)}{dt}} < 0$.The same is true for the red line along which $\displaystyle{\frac{dr_I(t)}{dt}} = 0$. That is, the red nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_I(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_I(t)}{dt}} < 0$. Section 2.2: Vector field*Estimated timing to here from start of tutorial: 1 hour, 20 min*How can the phase plane and the nullcline curves help us understand the behavior of the Wilson-Cowan model? Click here for text recap of relevant part of video The activities of the $E$ and $I$ populations $r_E(t)$ and $r_I(t)$ at each time point $t$ correspond to a single point in the phase plane, with coordinates $(r_E(t),r_I(t))$. Therefore, the time-dependent trajectory of the system can be described as a continuous curve in the phase plane, and the tangent vector to the trajectory, which is defined as the vector $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$, indicates the direction towards which the activity is evolving and how fast is the activity changing along each axis. In fact, for each point $(E,I)$ in the phase plane, we can compute the tangent vector $\bigg{(}\displaystyle{\frac{dr_E}{dt},\frac{dr_I}{dt}}\bigg{)}$, which indicates the behavior of the system when it traverses that point. The map of tangent vectors in the phase plane is called **vector field**. The behavior of any trajectory in the phase plane is determined by i) the initial conditions $(r_E(0),r_I(0))$, and ii) the vector field $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$.In general, the value of the vector field at a particular point in the phase plane is represented by an arrow. The orientation and the size of the arrow reflect the direction and the norm of the vector, respectively. 
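As a quick illustration (a worked example added here, not part of the original tutorial), we can compute one such arrow by hand: pick a single point $(r_E, r_I)$ in the phase plane and evaluate the tangent vector there directly from Equation (1), using the helper function `F` and the parameters returned by `default_pars`.
###Code
# A small sketch (added for illustration): the tangent vector at one
# phase-plane point, computed directly from the Wilson-Cowan equations.
pars = default_pars()
rE_pt, rI_pt = 0.3, 0.2  # an arbitrary point (rE, rI) in the phase plane

drEdt_pt = (-rE_pt + F(pars['wEE'] * rE_pt - pars['wEI'] * rI_pt + pars['I_ext_E'],
                       pars['a_E'], pars['theta_E'])) / pars['tau_E']
drIdt_pt = (-rI_pt + F(pars['wIE'] * rE_pt - pars['wII'] * rI_pt + pars['I_ext_I'],
                       pars['a_I'], pars['theta_I'])) / pars['tau_I']

print(f'Tangent vector at (rE, rI) = ({rE_pt}, {rI_pt}):')
print(f'(drE/dt, drI/dt) = ({drEdt_pt:.4f}, {drIdt_pt:.4f})')
###Output
_____no_output_____
###Markdown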
Coding Exercise 2.2: Compute and plot the vector field $\displaystyle{\Big{(}\frac{dr_E}{dt}, \frac{dr_I}{dt} \Big{)}}$Note that\begin{align}\frac{dr_E}{dt} &= [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)]\frac{1}{\tau_E}\\\frac{dr_I}{dt} &= [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I)]\frac{1}{\tau_I} \end{align} ###Code def EIderivs(rE, rI, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Time derivatives for E/I variables (dE/dt, dI/dt).""" ###################################################################### # TODO for students: compute drEdt and drIdt and disable the error #raise NotImplementedError("Student exercise: compute the vector field") ###################################################################### # Compute the derivative of rE drEdt = (-rE + F(wEE * rE - wEI * rI + I_ext_E, a_E, theta_E)) / tau_E # Compute the derivative of rI drIdt = (-rI + F(wIE * rE - wII * rI + I_ext_I, a_I, theta_I)) / tau_I return drEdt, drIdt # Create vector field using EIderivs plot_complete_analysis(default_pars()) ###Output _____no_output_____ ###Markdown Tutorial 2: Wilson-Cowan Model**Week 2, Day 4: Dynamic Networks****By Neuromatch Academy**__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesIn the previous tutorial, you became familiar with a neuronal network consisting of only an excitatory population. Here, we extend the approach we used to include both excitatory and inhibitory neuronal populations in our network. A simple, yet powerful model to study the dynamics of two interacting populations of excitatory and inhibitory neurons, is the so-called **Wilson-Cowan** rate model, which will be the subject of this tutorial.The objectives of this tutorial are to:- Write the **Wilson-Cowan** equations for the firing rate dynamics of a 2D system composed of an excitatory (E) and an inhibitory (I) population of neurons- Simulate the dynamics of the system, i.e., Wilson-Cowan model.- Plot the frequency-current (F-I) curves for both populations (i.e., E and I).- Visualize and inspect the behavior of the system using **phase plane analysis**, **vector fields**, and **nullclines**.Bonus steps:- Find and plot the **fixed points** of the Wilson-Cowan model.- Investigate the stability of the Wilson-Cowan model by linearizing its dynamics and examining the **Jacobian matrix**.- Learn how the Wilson-Cowan model can reach an oscillatory state.Bonus steps (applications):- Visualize the behavior of an Inhibition-stabilized network.- Simulate working memory using the Wilson-Cowan model.\\Reference paper:_[Wilson H and Cowan J (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal 12](https://doi.org/10.1016/S0006-3495(72)86068-5)_ --- Setup ###Code # Imports import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt # root-finding algorithm # @title Figure Settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # @title Helper functions def default_pars(**kwargs): pars = {} # Excitatory parameters pars['tau_E'] = 1. 
# Timescale of the E population [ms] pars['a_E'] = 1.2 # Gain of the E population pars['theta_E'] = 2.8 # Threshold of the E population # Inhibitory parameters pars['tau_I'] = 2.0 # Timescale of the I population [ms] pars['a_I'] = 1.0 # Gain of the I population pars['theta_I'] = 4.0 # Threshold of the I population # Connection strength pars['wEE'] = 9. # E to E pars['wEI'] = 4. # I to E pars['wIE'] = 13. # E to I pars['wII'] = 11. # I to I # External input pars['I_ext_E'] = 0. pars['I_ext_I'] = 0. # simulation parameters pars['T'] = 50. # Total duration of simulation [ms] pars['dt'] = .1 # Simulation time step [ms] pars['rE_init'] = 0.2 # Initial value of E pars['rI_init'] = 0.2 # Initial value of I # External parameters if any for k in kwargs: pars[k] = kwargs[k] # Vector of discretized time points [ms] pars['range_t'] = np.arange(0, pars['T'], pars['dt']) return pars def F(x, a, theta): """ Population activation function, F-I curve Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: f : the population activation response f(x) for input x """ # add the expression of f = F(x) f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1 return f def dF(x, a, theta): """ Derivative of the population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: dFdx : Derivative of the population activation function. """ dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2 return dFdx def plot_FI_inverse(x, a, theta): f, ax = plt.subplots() ax.plot(x, F_inv(x, a=a, theta=theta)) ax.set(xlabel="$x$", ylabel="$F^{-1}(x)$") def plot_FI_EI(x, FI_exc, FI_inh): plt.figure() plt.plot(x, FI_exc, 'b', label='E population') plt.plot(x, FI_inh, 'r', label='I population') plt.legend(loc='lower right') plt.xlabel('x (a.u.)') plt.ylabel('F(x)') plt.show() def my_test_plot(t, rE1, rI1, rE2, rI2): plt.figure() ax1 = plt.subplot(211) ax1.plot(pars['range_t'], rE1, 'b', label='E population') ax1.plot(pars['range_t'], rI1, 'r', label='I population') ax1.set_ylabel('Activity') ax1.legend(loc='best') ax2 = plt.subplot(212, sharex=ax1, sharey=ax1) ax2.plot(pars['range_t'], rE2, 'b', label='E population') ax2.plot(pars['range_t'], rI2, 'r', label='I population') ax2.set_xlabel('t (ms)') ax2.set_ylabel('Activity') ax2.legend(loc='best') plt.tight_layout() plt.show() def plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI): plt.figure() plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.show() def my_plot_nullcline(pars): Exc_null_rE = np.linspace(-0.01, 0.96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, 0.8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') def my_plot_vector(pars, my_n_skip=2, myscale=5): EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = my_n_skip plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=myscale, facecolor='c') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectory(pars, mycolor, 
x_init, mylabel): pars = pars.copy() pars['rE_init'], pars['rI_init'] = x_init[0], x_init[1] rE_tj, rI_tj = simulate_wc(**pars) plt.plot(rE_tj, rI_tj, color=mycolor, label=mylabel) plt.plot(x_init[0], x_init[1], 'o', color=mycolor, ms=8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectories(pars, dx, n, mylabel): """ Solve for I along the E_grid from dE/dt = 0. Expects: pars : Parameter dictionary dx : increment of initial values n : n*n trjectories mylabel : label for legend Returns: figure of trajectory """ pars = pars.copy() for ie in range(n): for ii in range(n): pars['rE_init'], pars['rI_init'] = dx * ie, dx * ii rE_tj, rI_tj = simulate_wc(**pars) if (ie == n-1) & (ii == n-1): plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8, label=mylabel) else: plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def plot_complete_analysis(pars): plt.figure(figsize=(7.7, 6.)) # plot example trajectories my_plot_trajectories(pars, 0.2, 6, 'Sample trajectories \nfor different init. conditions') my_plot_trajectory(pars, 'orange', [0.6, 0.8], 'Sample trajectory for \nlow activity') my_plot_trajectory(pars, 'm', [0.6, 0.6], 'Sample trajectory for \nhigh activity') # plot nullclines my_plot_nullcline(pars) # plot vector field EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = 2 plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=5., facecolor='c') plt.legend(loc=[1.02, 0.57], handlelength=1) plt.show() def plot_fp(x_fp, position=(0.02, 0.1), rotation=0): plt.plot(x_fp[0], x_fp[1], 'ko', ms=8) plt.text(x_fp[0] + position[0], x_fp[1] + position[1], f'Fixed Point1=\n({x_fp[0]:.3f}, {x_fp[1]:.3f})', horizontalalignment='center', verticalalignment='bottom', rotation=rotation) ###Output _____no_output_____ ###Markdown The helper functions included:- Parameter dictionary: `default_pars(**kwargs)`. You can use: - `pars = default_pars()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. 
- `pars = default_pars(T=T_sim, dt=time_step)` to set a different simulation time and time step - After `pars = default_pars()`, use `par['New_para'] = value` to add a new parameter with its value - Pass to functions that accept individual parameters with `func(**pars)`- F-I curve: `F(x, a, theta)`- Derivative of the F-I curve: `dF(x, a, theta)`- Plotting utilities --- Section 1: Wilson-Cowan model of excitatory and inhibitory populations ###Code # @title Video 1: Phase analysis of the Wilson-Cowan E-I model from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="GCpQmh45crM", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Section 1.1: Mathematical description of the WC modelMany of the rich dynamics recorded in the brain are generated by the interaction of excitatory and inhibitory subtype neurons. Here, similar to what we did in the previous tutorial, we will model two coupled populations of E and I neurons (**Wilson-Cowan** model). We can write two coupled differential equations, each representing the dynamics of the excitatory or inhibitory population:\begin{align}\tau_E \frac{dr_E}{dt} &= -r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)\\\tau_I \frac{dr_I}{dt} &= -r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) \qquad (1)\end{align}$r_E(t)$ represents the average activation (or firing rate) of the excitatory population at time $t$, and $r_I(t)$ the activation (or firing rate) of the inhibitory population. The parameters $\tau_E$ and $\tau_I$ control the timescales of the dynamics of each population. Connection strengths are given by: $w_{EE}$ (E $\rightarrow$ E), $w_{EI}$ (I $\rightarrow$ E), $w_{IE}$ (E $\rightarrow$ I), and $w_{II}$ (I $\rightarrow$ I). The terms $w_{EI}$ and $w_{IE}$ represent connections from inhibitory to excitatory population and vice versa, respectively. The transfer functions (or F-I curves) $F_E(x;a_E,\theta_E)$ and $F_I(x;a_I,\theta_I)$ can be different for the excitatory and the inhibitory populations. Exercise 1: Plot out the F-I curves for the E and I populations Let's first plot out the F-I curves for the E and I populations using the function defined above with default parameter values. ###Code pars = default_pars() x = np.arange(0, 10, .1) print(pars['a_E'], pars['theta_E']) print(pars['a_I'], pars['theta_I']) ################################################################### # TODO for students: compute and plot the F-I curve here # # Note: aE, thetaE, aI and theta_I are in the dictionray 'pars' # ################################################################### # Compute the F-I curve of the excitatory population FI_exc = ... # Compute the F-I curve of the inhibitory population FI_inh = ... # Uncomment when you fill the (...) 
# plot_FI_EI(x, FI_exc, FI_inh) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_b3a0ec15.py)*Example output:* Section 1.2: Simulation scheme for the Wilson-Cowan modelEquation $1$ can be integrated numerically. Using the Euler method, the dynamics of E and I populations can be simulated on a time-grid of stepsize $\Delta t$. The updates for the activity of the excitatory and the inhibitory populations can be written as:\begin{align}r_E[k+1] &= r_E[k] + \Delta r_E[k]\\r_I[k+1] &= r_I[k] + \Delta r_I[k] \end{align}with the increments\begin{align}\Delta r_E[k] &= \frac{\Delta t}{\tau_E}[-r_E[k] + F_E(w_{EE}r_E[k] -w_{EI}r_I[k] + I^{\text{ext}}_E[k];a_E,\theta_E)]\\\Delta r_I[k] &= \frac{\Delta t}{\tau_I}[-r_I[k] + F_I(w_{IE}r_E[k] -w_{II}r_I[k] + I^{\text{ext}}_I[k];a_I,\theta_I)] \end{align} Exercise 2: Numerically integrate the Wilson-Cowan equations ###Code def simulate_wc(tau_E, a_E, theta_E, tau_I, a_I, theta_I, wEE, wEI, wIE, wII, I_ext_E, I_ext_I, rE_init, rI_init, dt, range_t, **other_pars): """ Simulate the Wilson-Cowan equations Args: Parameters of the Wilson-Cowan model Returns: rE, rI (arrays) : Activity of excitatory and inhibitory populations """ # Initialize activity arrays Lt = range_t.size rE = np.append(rE_init, np.zeros(Lt - 1)) rI = np.append(rI_init, np.zeros(Lt - 1)) I_ext_E = I_ext_E * np.ones(Lt) I_ext_I = I_ext_I * np.ones(Lt) # Simulate the Wilson-Cowan equations for k in range(Lt - 1): ######################################################################## # TODO for students: compute drE and drI and remove the error raise NotImplementedError("Student exercise: compute the change in E/I") ######################################################################## # Calculate the derivative of the E population drE = ... # Calculate the derivative of the I population drI = ... # Update using Euler's method rE[k + 1] = rE[k] + drE rI[k + 1] = rI[k] + drI return rE, rI pars = default_pars() # Here are two trajectories with close intial values # Uncomment these lines to test your function # rE1, rI1 = simulate_wc(**default_pars(rE_init=.32, rI_init=.15)) # rE2, rI2 = simulate_wc(**default_pars(rE_init=.33, rI_init=.15)) # my_test_plot(pars['range_t'], rE1, rI1, rE2, rI2) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_af0bd722.py)*Example output:* The two plots above show the temporal evolution of excitatory ($r_E$, blue) and inhibitory ($r_I$, red) activity for two different sets of initial conditions. Interactive Demo: population trajectories with different initial valuesIn this interactive demo, we will simulate the Wilson-Cowan model and plot the trajectories of each population. What happens to the E and I population trajectories with different initial conditions? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! 
def plot_EI_diffinitial(rE_init=0.0): pars = default_pars(rE_init=rE_init, rI_init=.15) rE, rI = simulate_wc(**pars) plt.figure() plt.plot(pars['range_t'], rE, 'b', label='E population') plt.plot(pars['range_t'], rI, 'r', label='I population') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() _ = widgets.interact(plot_EI_diffinitial, rE_init=(0.30, 0.35, .01)) ###Output _____no_output_____ ###Markdown Think!It is evident that the steady states of the neuronal response can be different when different initial states are chosen. Why is that? We will discuss this in the next section but try to think about it first. --- Section 2: Phase plane analysisJust like we used a graphical method to study the dynamics of a 1-D system in the previous tutorial, here we will learn a graphical approach called **phase plane analysis** to study the dynamics of a 2-D system like the Wilson-Cowan model. So far, we have plotted the activities of the two populations as a function of time, i.e., in the `Activity-t` plane, either the $(t, r_E(t))$ plane or the $(t, r_I(t))$ one. Instead, we can plot the two activities $r_E(t)$ and $r_I(t)$ against each other at any time point $t$. This characterization in the `rI-rE` plane $(r_I(t), r_E(t))$ is called the **phase plane**. Each line in the phase plane indicates how both $r_E$ and $r_I$ evolve with time. ###Code # @title Video 2: Nullclines and Vector Fields from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="V2SBAK2Xf8Y", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Interactive Demo: From the Activity - time plane to the **$r_I$ - $r_E$** phase planeIn this demo, we will visualize the system dynamics using both the `Activity-time` and the `(rE, rI)` phase plane. The circles indicate the activities at a given time $t$, while the lines represent the evolution of the system for the entire duration of the simulation.Move the time slider to better understand how the top plot relates to the bottom plot. Does the bottom plot have explicit information about time? What information does it give us? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! 
pars = default_pars(T=10, rE_init=0.6, rI_init=0.8) rE, rI = simulate_wc(**pars) def plot_activity_phase(n_t): plt.figure(figsize=(8, 5.5)) plt.subplot(211) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.plot(pars['range_t'][n_t], rE[n_t], 'bo') plt.plot(pars['range_t'][n_t], rI[n_t], 'ro') plt.axvline(pars['range_t'][n_t], 0, 1, color='k', ls='--') plt.xlabel('t (ms)', fontsize=14) plt.ylabel('Activity', fontsize=14) plt.legend(loc='best', fontsize=14) plt.subplot(212) plt.plot(rE, rI, 'k') plt.plot(rE[n_t], rI[n_t], 'ko') plt.xlabel(r'$r_E$', fontsize=18, color='b') plt.ylabel(r'$r_I$', fontsize=18, color='r') plt.tight_layout() plt.show() _ = widgets.interact(plot_activity_phase, n_t=(0, len(pars['range_t']) - 1, 1)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_222c9db1.py) Section 2.1: Nullclines of the Wilson-Cowan EquationsAn important concept in the phase plane analysis is the "nullcline" which is defined as the set of points in the phase plane where the activity of one population (but not necessarily the other) does not change.In other words, the $E$ and $I$ nullclines of Equation $(1)$ are defined as the points where $\displaystyle{\frac{dr_E}{dt}}=0$, for the excitatory nullcline, or $\displaystyle\frac{dr_I}{dt}=0$ for the inhibitory nullcline. That is:\begin{align}-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E) &= 0 \qquad (2)\\[1mm]-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) &= 0 \qquad (3)\end{align} Exercise 3: Compute the nullclines of the Wilson-Cowan modelIn the next exercise, we will compute and plot the nullclines of the E and I population. Along the nullcline of excitatory population Equation $2$, you can calculate the inhibitory activity by rewriting Equation $2$ into\begin{align}r_I = \frac{1}{w_{EI}}\big{[}w_{EE}r_E - F_E^{-1}(r_E; a_E,\theta_E) + I^{\text{ext}}_E \big{]}. \qquad(4)\end{align}where $F_E^{-1}(r_E; a_E,\theta_E)$ is the inverse of the excitatory transfer function (defined below). Equation $4$ defines the $r_E$ nullcline. Along the nullcline of inhibitory population Equation $3$, you can calculate the excitatory activity by rewriting Equation $3$ into \begin{align}r_E = \frac{1}{w_{IE}} \big{[} w_{II}r_I + F_I^{-1}(r_I;a_I,\theta_I) - I^{\text{ext}}_I \big{]}. \qquad (5) \end{align}shere $F_I^{-1}(r_I; a_I,\theta_I)$ is the inverse of the inhibitory transfer function (defined below). Equation $5$ defines the $I$ nullcline. Note that, when computing the nullclines with Equations 4-5, we also need to calculate the inverse of the transfer functions. 
\\The inverse of the sigmoid shaped **f-I** function that we have been using is:$$F^{-1}(x; a, \theta) = -\frac{1}{a} \ln \left[ \frac{1}{x + \displaystyle \frac{1}{1+\text{e}^{a\theta}}} - 1 \right] + \theta \qquad (6)$$The first step is to implement the inverse transfer function: ###Code def F_inv(x, a, theta): """ Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: F_inverse : value of the inverse function """ ######################################################################### # TODO for students: compute F_inverse raise NotImplementedError("Student exercise: compute the inverse of F(x)") ######################################################################### # Calculate Finverse (ln(x) can be calculated as np.log(x)) F_inverse = ... return F_inverse pars = default_pars() x = np.linspace(1e-6, 1, 100) # Uncomment the next line to test your function # plot_FI_inverse(x, a=1, theta=3) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_937a4040.py)*Example output:* Now you can compute the nullclines, using Equations 4-5: ###Code def get_E_nullcline(rE, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars): """ Solve for rI along the rE from drE/dt = 0. Args: rE : response of excitatory population a_E, theta_E, wEE, wEI, I_ext_E : Wilson-Cowan excitatory parameters Other parameters are ignored Returns: rI : values of inhibitory population along the nullcline on the rE """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the E nullcline") ######################################################################### # calculate rI for E nullclines on rI rI = ... return rI def get_I_nullcline(rI, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """ Solve for E along the rI from dI/dt = 0. Args: rI : response of inhibitory population a_I, theta_I, wIE, wII, I_ext_I : Wilson-Cowan inhibitory parameters Other parameters are ignored Returns: rE : values of the excitatory population along the nullcline on the rI """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the I nullcline") ######################################################################### # calculate rE for I nullclines on rI rE = ... return rE pars = default_pars() Exc_null_rE = np.linspace(-0.01, 0.96, 100) Inh_null_rI = np.linspace(-.01, 0.8, 100) # Uncomment these lines to test your functions # Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) # Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) # plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_2366ea57.py)*Example output:* Note that by definition along the blue line in the phase-plane spanned by $r_E, r_I$, $\displaystyle{\frac{dr_E(t)}{dt}} = 0$, therefore, it is called a nullcline. 
That is, the blue nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_E(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_E(t)}{dt}} < 0$.The same is true for the red line along which $\displaystyle{\frac{dr_I(t)}{dt}} = 0$. That is, the red nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_I(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_I(t)}{dt}} < 0$. Section 2.2: Vector fieldHow can the phase plane and the nullcline curves help us understand the behavior of the Wilson-Cowan model? The activities of the $E$ and $I$ populations $r_E(t)$ and $r_I(t)$ at each time point $t$ correspond to a single point in the phase plane, with coordinates $(r_E(t),r_I(t))$. Therefore, the time-dependent trajectory of the system can be described as a continuous curve in the phase plane, and the tangent vector to the trajectory, which is defined as the vector $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$, indicates the direction towards which the activity is evolving and how fast is the activity changing along each axis. In fact, for each point $(E,I)$ in the phase plane, we can compute the tangent vector $\bigg{(}\displaystyle{\frac{dr_E}{dt},\frac{dr_I}{dt}}\bigg{)}$, which indicates the behavior of the system when it traverses that point. The map of tangent vectors in the phase plane is called **vector field**. The behavior of any trajectory in the phase plane is determined by i) the initial conditions $(r_E(0),r_I(0))$, and ii) the vector field $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$.In general, the value of the vector field at a particular point in the phase plane is represented by an arrow. The orientation and the size of the arrow reflect the direction and the norm of the vector, respectively. Exercise 4: Compute and plot the vector field $\displaystyle{\Big{(}\frac{dr_E}{dt}, \frac{dr_I}{dt} \Big{)}}$Note that\begin{align}\frac{dr_E}{dt} &= [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)]\frac{1}{\tau_E}\\\frac{dr_I}{dt} &= [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I)]\frac{1}{\tau_I} \end{align} ###Code def EIderivs(rE, rI, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Time derivatives for E/I variables (dE/dt, dI/dt).""" ###################################################################### # TODO for students: compute drEdt and drIdt and disable the error raise NotImplementedError("Student exercise: compute the vector field") ###################################################################### # Compute the derivative of rE drEdt = ... # Compute the derivative of rI drIdt = ... return drEdt, drIdt # Uncomment below to test your function # plot_complete_analysis(default_pars()) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_5a629797.py)*Example output:* The last phase plane plot shows us that: - Trajectories seem to follow the direction of the vector field- Different trajectories eventually always reach one of two points depending on the initial conditions. - The two points where the trajectories converge are the intersection of the two nullcline curves. Think! 
There are, in total, three intersection points, meaning that the system has three fixed points.

- One of the fixed points (the one in the middle) is never the final state of a trajectory. Why is that?
- Why do the arrows tend to get smaller as they approach the fixed points?

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_3d37729b.py)

---

Summary

Congratulations! You have finished the second day of the last week of the Neuromatch Academy! Here, you learned how to simulate a rate-based model consisting of excitatory and inhibitory populations of neurons.

In the last tutorial on dynamical neuronal networks you learned to:

- Implement and simulate a 2D system composed of an E and an I population of neurons using the **Wilson-Cowan** model
- Plot the frequency-current (F-I) curves for both populations
- Examine the behavior of the system using **phase plane analysis**, **vector fields**, and **nullclines**.

Do you have more time? Have you finished early? We have more fun material for you!

Below are some more advanced concepts on dynamical systems:

- You will learn how to find the fixed points of such a system, and to investigate its stability by linearizing its dynamics and examining the **Jacobian matrix**.
- You will identify conditions under which the Wilson-Cowan model can exhibit oscillations.

If you need even more, there are two applications of the Wilson-Cowan model:

- Visualization of an Inhibition-stabilized network
- Simulation of working memory

---

Bonus 1: Fixed points, stability analysis, and limit cycles in the Wilson-Cowan model
###Code
# @title Video 3: Fixed points and their stability
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
      def __init__(self, id, page=1, width=400, height=300, **kwargs):
          self.id=id
          src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
          super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id="", width=854, height=480, fs=1)
  print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="jIx26iQ69ps", width=854, height=480, fs=1, rel=0)
  print('Video available at https://youtube.com/watch?v=' + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')

display(out)
###Output
_____no_output_____
###Markdown
Fixed Points of the E/I system

Clearly, the intersection points of the two nullcline curves are the fixed points of the Wilson-Cowan model in Equation $(1)$. In the next exercise, we will find the coordinates of all fixed points for a given set of parameters.

We'll make use of two functions, similar to ones we saw in the previous tutorial, which use a root-finding algorithm to find the fixed points of the system with excitatory and inhibitory populations.
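Before running that cell, here is a minimal toy sketch (an added illustration, not part of the original tutorial) of what `opt.root` does: given a vector-valued function, it searches for an input at which every component is zero, starting from an initial guess. The function `my_fp` in the next cell applies exactly this idea to Equations (2)-(3).
###Code
# A toy sketch (added for illustration): opt.root finds a zero of a
# vector-valued function, here a simple system with a root at (1, 2).
import numpy as np
import scipy.optimize as opt  # already imported in the Setup section above

def toy_system(x):
  return np.array([x[0] - 1., x[1] - 2.])

x_root = opt.root(toy_system, np.array([0., 0.])).x
print(x_root)  # should be close to [1., 2.]
###Output
_____no_output_____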
###Code # @markdown *Execute the cell to define `my_fp` and `check_fp`* def my_fp(pars, rE_init, rI_init): """ Use opt.root function to solve Equations (2)-(3) from initial values """ tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E'] tau_I, a_I, theta_I = pars['tau_I'], pars['a_I'], pars['theta_I'] wEE, wEI = pars['wEE'], pars['wEI'] wIE, wII = pars['wIE'], pars['wII'] I_ext_E, I_ext_I = pars['I_ext_E'], pars['I_ext_I'] # define the right hand of wilson-cowan equations def my_WCr(x): rE, rI = x drEdt = (-rE + F(wEE * rE - wEI * rI + I_ext_E, a_E, theta_E)) / tau_E drIdt = (-rI + F(wIE * rE - wII * rI + I_ext_I, a_I, theta_I)) / tau_I y = np.array([drEdt, drIdt]) return y x0 = np.array([rE_init, rI_init]) x_fp = opt.root(my_WCr, x0).x return x_fp def check_fp(pars, x_fp, mytol=1e-6): """ Verify (drE/dt)^2 + (drI/dt)^2< mytol Args: pars : Parameter dictionary fp : value of fixed point mytol : tolerance, default as 10^{-6} Returns : Whether it is a correct fixed point: True/False """ drEdt, drIdt = EIderivs(x_fp[0], x_fp[1], **pars) return drEdt**2 + drIdt**2 < mytol ###Output _____no_output_____ ###Markdown Exercise 5: Find the fixed points of the Wilson-Cowan modelFrom the above nullclines, we notice that the system features three fixed points with the parameters we used. To find their coordinates, we need to choose proper initial value to give to the `opt.root` function inside of the function `my_fp` we just defined, since the algorithm can only find fixed points in the vicinity of the initial value. In this exercise, you will use the function `my_fp` to find each of the fixed points by varying the initial values. Note that you can choose the values near the intersections of the nullclines as the initial values to calculate the fixed points. ###Code pars = default_pars() ###################################################################### # TODO: Provide initial values to calculate the fixed points # Check if x_fp's are the correct with the function check_fp(x_fp) # Hint: vary different initial values to find the correct fixed points # ###################################################################### # my_plot_nullcline(pars) # Find the first fixed point # x_fp_1 = my_fp(pars, ..., ...) # if check_fp(pars, x_fp_1): # plot_fp(x_fp_1) # Find the second fixed point # x_fp_2 = my_fp(pars, ..., ...) # if check_fp(pars, x_fp_2): # plot_fp(x_fp_2) # Find the third fixed point # x_fp_3 = my_fp(pars, ..., ...) # if check_fp(pars, x_fp_3): # plot_fp(x_fp_3) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_0dd7ba5a.py)*Example output:* Stability of a fixed point and eigenvalues of the Jacobian MatrixFirst, let's first rewrite the system $1$ as:\begin{align}&\frac{dr_E}{dt} = G_E(r_E,r_I)\\[0.5mm]&\frac{dr_I}{dt} = G_I(r_E,r_I)\end{align}where\begin{align}&G_E(r_E,r_I) = \frac{1}{\tau_E} [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a,\theta)]\\[1mm]&G_I(r_E,r_I) = \frac{1}{\tau_I} [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a,\theta)]\end{align}By definition, $\displaystyle\frac{dr_E}{dt}=0$ and $\displaystyle\frac{dr_I}{dt}=0$ at each fixed point. Therefore, if the initial state is exactly at the fixed point, the state of the system will not change as time evolves. However, if the initial state deviates slightly from the fixed point, there are two possibilitiesthe trajectory will be attracted back to the 1. 
The trajectory will be attracted back to the fixed point2. The trajectory will diverge from the fixed point. These two possibilities define the type of fixed point, i.e., stable or unstable. Similar to the 1D system studied in the previous tutorial, the stability of a fixed point $(r_E^*, r_I^*)$ can be determined by linearizing the dynamics of the system (can you figure out how?). The linearization will yield a matrix of first-order derivatives called the Jacobian matrix: \begin{equation} J= \left[ {\begin{array}{cc} \displaystyle{\frac{\partial}{\partial r_E}}G_E(r_E^*, r_I^*) & \displaystyle{\frac{\partial}{\partial r_I}}G_E(r_E^*, r_I^*)\\[1mm] \displaystyle\frac{\partial}{\partial r_E} G_I(r_E^*, r_I^*) & \displaystyle\frac{\partial}{\partial r_I}G_I(r_E^*, r_I^*) \\ \end{array} } \right] \quad (7)\end{equation}\\The eigenvalues of the Jacobian matrix calculated at the fixed point will determine whether it is a stable or unstable fixed point.\\We can now compute the derivatives needed to build the Jacobian matrix. Using the chain and product rules the derivatives for the excitatory population are given by:\\\begin{align}&\frac{\partial}{\partial r_E} G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-1 + w_{EE} F_E'(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E;\alpha_E, \theta_E)] \\[1mm]&\frac{\partial}{\partial r_I} G_E(r_E^*, r_I^*)= \frac{1}{\tau_E} [-w_{EI} F_E'(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E;\alpha_E, \theta_E)]\end{align}\\The same applies to the inhibitory population. Exercise 6: Compute the Jacobian Matrix for the Wilson-Cowan modelHere, you can use `dF(x,a,theta)` defined in the `Helper functions` to calculate the derivative of the F-I curve. ###Code def get_eig_Jacobian(fp, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Compute eigenvalues of the Wilson-Cowan Jacobian matrix at fixed point.""" # Initialization rE, rI = fp J = np.zeros((2, 2)) ########################################################################### # TODO for students: compute J and disable the error raise NotImplementedError("Student excercise: compute the Jacobian matrix") ########################################################################### # Compute the four elements of the Jacobian matrix J[0, 0] = ... J[0, 1] = ... J[1, 0] = ... J[1, 1] = ... # Compute and return the eigenvalues evals = np.linalg.eig(J)[0] return evals # Uncomment below to test your function when you get the correct fixed point # eig_1 = get_eig_Jacobian(x_fp_1, **pars) # eig_2 = get_eig_Jacobian(x_fp_2, **pars) # eig_3 = get_eig_Jacobian(x_fp_3, **pars) # print(eig_1, 'Stable point') # print(eig_2, 'Unstable point') # print(eig_3, 'Stable point') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_e83cfc05.py) As is evident, the stable fixed points correspond to the negative eigenvalues, while unstable point corresponds to at least one positive eigenvalue. The sign of the eigenvalues is determined by the connectivity (interaction) between excitatory and inhibitory populations. Below we investigate the effect of $w_{EE}$ on the nullclines and the eigenvalues of the dynamical system. \* _Critical change is referred to as **pitchfork bifurcation**_. Effect of `wEE` on the nullclines and the eigenvalues ###Code # @title # @markdown Make sure you execute this cell to see the plot! 
eig_1_M = [] eig_2_M = [] eig_3_M = [] pars = default_pars() wEE_grid = np.linspace(6, 10, 40) my_thre = 7.9 for wEE in wEE_grid: x_fp_1 = [0., 0.] x_fp_2 = [.4, .1] x_fp_3 = [.8, .1] pars['wEE'] = wEE if wEE < my_thre: x_fp_1 = my_fp(pars, x_fp_1[0], x_fp_1[1]) eig_1 = get_eig_Jacobian(x_fp_1, **pars) eig_1_M.append(np.max(np.real(eig_1))) else: x_fp_1 = my_fp(pars, x_fp_1[0], x_fp_1[1]) eig_1 = get_eig_Jacobian(x_fp_1, **pars) eig_1_M.append(np.max(np.real(eig_1))) x_fp_2 = my_fp(pars, x_fp_2[0], x_fp_2[1]) eig_2 = get_eig_Jacobian(x_fp_2, **pars) eig_2_M.append(np.max(np.real(eig_2))) x_fp_3 = my_fp(pars, x_fp_3[0], x_fp_3[1]) eig_3 = get_eig_Jacobian(x_fp_3, **pars) eig_3_M.append(np.max(np.real(eig_3))) eig_1_M = np.array(eig_1_M) eig_2_M = np.array(eig_2_M) eig_3_M = np.array(eig_3_M) plt.figure(figsize=(8, 5.5)) plt.plot(wEE_grid, eig_1_M, 'ko', alpha=0.5) plt.plot(wEE_grid[wEE_grid >= my_thre], eig_2_M, 'bo', alpha=0.5) plt.plot(wEE_grid[wEE_grid >= my_thre], eig_3_M, 'ro', alpha=0.5) plt.xlabel(r'$w_{\mathrm{EE}}$') plt.ylabel('maximum real part of eigenvalue') plt.show() ###Output _____no_output_____ ###Markdown Interactive Demo: Nullclines position in the phase plane changes with parameter valuesIn this interactive widget, we will explore how the nullclines move for different values of the parameter $w_{EE}$. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def plot_nullcline_diffwEE(wEE): """ plot nullclines for different values of wEE """ pars = default_pars(wEE=wEE) # plot the E, I nullclines Exc_null_rE = np.linspace(-0.01, .96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, .8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.figure(figsize=(12, 5.5)) plt.subplot(121) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.subplot(222) pars['rE_init'], pars['rI_init'] = 0.2, 0.2 rE, rI = simulate_wc(**pars) plt.plot(pars['range_t'], rE, 'b', label='E population', clip_on=False) plt.plot(pars['range_t'], rI, 'r', label='I population', clip_on=False) plt.ylabel('Activity') plt.legend(loc='best') plt.ylim(-0.05, 1.05) plt.title('E/I activity\nfor different initial conditions', fontweight='bold') plt.subplot(224) pars['rE_init'], pars['rI_init'] = 0.4, 0.1 rE, rI = simulate_wc(**pars) plt.plot(pars['range_t'], rE, 'b', label='E population', clip_on=False) plt.plot(pars['range_t'], rI, 'r', label='I population', clip_on=False) plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.ylim(-0.05, 1.05) plt.tight_layout() plt.show() _ = widgets.interact(plot_nullcline_diffwEE, wEE=(6., 10., .01)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_d4eb0391.py) We can also investigate the effect of different $w_{EI}$, $w_{IE}$, $w_{II}$, $\tau_{E}$, $\tau_{I}$, and $I_{E}^{\text{ext}}$ on the stability of fixed points. In addition, we can also consider the perturbation of the parameters of the gain curve $F(\cdot)$. Limit cycle - OscillationsFor some values of interaction terms ($w_{EE}, w_{IE}, w_{EI}, w_{II}$ the eigenvalues can become complex. When at least one pair of eigenvalues is complex, oscillations arise. 
The stability of oscillations is determined by the real part of the eigenvalues (+ve real part oscillations will grow, -ve real part oscillations will die out). The size of the complex part determines the frequency of oscillations. For instance, if we use a different set of parameters, $w_{EE}=6.4$, $w_{EI}=4.8$, $w_{IE}=6.$, $w_{II}=1.2$, and $I_{E}^{\text{ext}}=0.8$, then we shall observe that the E and I population activity start to oscillate! Please execute the cell below to check the oscillatory behavior. ###Code # @title # @markdown Make sure you execute this cell to see the oscillations! pars = default_pars(T=100.) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 pars['rE_init'], pars['rI_init'] = 0.25, 0.25 rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() ###Output _____no_output_____ ###Markdown Exercise 7: Plot the phase planeWe can also understand the oscillations of the population behavior using the phase plane. By plotting a set of trajectories with different initial states, we can see that these trajectories will move in a circle instead of converging to a fixed point. This circle is called "limit cycle" and shows the periodic oscillations of the $E$ and $I$ population behavior under some conditions.Try to plot the phase plane using the previously defined functions. ###Code pars = default_pars(T=100.) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 plt.figure(figsize=(7, 5.5)) my_plot_nullcline(pars) ############################################################################### # TODO for students: plot phase plane: nullclines, trajectories, fixed point # ############################################################################### # Find the correct fixed point # x_fp_1 = my_fp(pars, ..., ...) # if check_fp(pars, x_fp_1): # plot_fp(x_fp_1, position=(0, 0), rotation=40) my_plot_trajectories(pars, 0.2, 3, 'Sample trajectories \nwith different initial values') my_plot_vector(pars) plt.legend(loc=[1.01, 0.7]) plt.xlim(-0.05, 1.01) plt.ylim(-0.05, 0.65) plt.show() ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_03c5c8dd.py)*Example output:* Interactive Demo: Limit cycle and oscillations.From the above examples, the change of model parameters changes the shape of the nullclines and, accordingly, the behavior of the $E$ and $I$ populations from steady fixed points to oscillations. However, the shape of the nullclines is unable to fully determine the behavior of the network. The vector field also matters. To demonstrate this, here, we will investigate the effect of time constants on the population behavior. By changing the inhibitory time constant $\tau_I$, the nullclines do not change, but the network behavior changes substantially from steady state to oscillations with different frequencies. Such a dramatic change in the system behavior is referred to as a **bifurcation**. \\Please execute the code below to check this out. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def time_constant_effect(tau_i=0.5): pars = default_pars(T=100.) 
  pars['wEE'], pars['wEI'] = 6.4, 4.8
  pars['wIE'], pars['wII'] = 6.0, 1.2
  pars['I_ext_E'] = 0.8
  pars['tau_I'] = tau_i

  Exc_null_rE = np.linspace(0.0, .9, 100)
  Inh_null_rI = np.linspace(0.0, .6, 100)
  Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars)
  Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars)

  plt.figure(figsize=(12.5, 5.5))
  plt.subplot(121)  # nullclines
  plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline', zorder=2)
  plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline', zorder=2)
  plt.xlabel(r'$r_E$')
  plt.ylabel(r'$r_I$')

  # fixed point
  x_fp_1 = my_fp(pars, 0.5, 0.5)
  plt.plot(x_fp_1[0], x_fp_1[1], 'ko', zorder=2)

  eig_1 = get_eig_Jacobian(x_fp_1, **pars)

  # trajectories
  for ie in range(5):
    for ii in range(5):
      pars['rE_init'], pars['rI_init'] = 0.1 * ie, 0.1 * ii
      rE_tj, rI_tj = simulate_wc(**pars)
      plt.plot(rE_tj, rI_tj, 'k', alpha=0.3, zorder=1)

  # vector field
  EI_grid_E = np.linspace(0., 1.0, 20)
  EI_grid_I = np.linspace(0., 0.6, 20)
  rE, rI = np.meshgrid(EI_grid_E, EI_grid_I)
  drEdt, drIdt = EIderivs(rE, rI, **pars)
  n_skip = 2
  plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip],
             drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip],
             angles='xy', scale_units='xy', scale=10, facecolor='c')
  plt.title(r'$\tau_I=$'+'%.1f ms' % tau_i)

  plt.subplot(122)  # sample E/I trajectories
  pars['rE_init'], pars['rI_init'] = 0.25, 0.25
  rE, rI = simulate_wc(**pars)
  plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$')
  plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$')
  plt.xlabel('t (ms)')
  plt.ylabel('Activity')
  plt.title(r'$\tau_I=$'+'%.1f ms' % tau_i)
  plt.legend(loc='best')

  plt.tight_layout()
  plt.show()

_ = widgets.interact(time_constant_effect, tau_i=(0.2, 3, .1))
###Output
_____no_output_____
###Markdown
Both $\tau_E$ and $\tau_I$ feature in the Jacobian of the two-population network (Equation 7). Here it seems that, by increasing $\tau_I$, the eigenvalues corresponding to the stable fixed point become complex.

Intuitively, when $\tau_I$ is smaller, inhibitory activity changes faster than excitatory activity. Once inhibition exceeds a certain value, the strong inhibition suppresses the excitatory population, but that in turn means the inhibitory population receives less input (through the excitatory connection). So inhibition decreases rapidly. This, in turn, lets excitation recover -- and so on.
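To look at this claim numerically (a small added sketch; it assumes the exercise functions `my_fp` and `get_eig_Jacobian` defined above have been completed), we can print the eigenvalues of the Jacobian at the fixed point for a small and a large inhibitory time constant and inspect how their real and imaginary parts change.
###Code
# A small sketch (added for illustration; requires the completed my_fp and
# get_eig_Jacobian from the exercises above): eigenvalues of the Jacobian
# at the fixed point for two values of tau_I.
pars = default_pars(T=100.)
pars['wEE'], pars['wEI'] = 6.4, 4.8
pars['wIE'], pars['wII'] = 6.0, 1.2
pars['I_ext_E'] = 0.8

for tau_i in [0.5, 2.0]:
  pars['tau_I'] = tau_i
  x_fp = my_fp(pars, 0.5, 0.5)            # same fixed point used in the widget above
  evals = get_eig_Jacobian(x_fp, **pars)  # compare real and imaginary parts
  print(f'tau_I = {tau_i:.1f} ms -> eigenvalues: {evals}')
###Output
_____no_output_____
###Markdown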
--- Bonus 2: Inhibition-stabilized network (ISN)As described above, one can obtain the linear approximation around the fixed point as \begin{equation} \frac{d}{dr} \vec{R}= \left[ {\begin{array}{cc} \displaystyle{\frac{\partial G_E}{\partial r_E}} & \displaystyle{\frac{\partial G_E}{\partial r_I}}\\[1mm] \displaystyle\frac{\partial G_I}{\partial r_E} & \displaystyle\frac{\partial G_I}{\partial r_I} \\ \end{array} } \right] \vec{R},\end{equation}\\where $\vec{R} = [r_E, r_I]^{\rm T}$ is the vector of the E/I activity.Let's direct our attention to the excitatory subpopulation which follows:\\\begin{equation}\frac{dr_E}{dt} = \frac{\partial G_E}{\partial r_E}\cdot r_E + \frac{\partial G_E}{\partial r_I} \cdot r_I\end{equation}\\Recall that, around fixed point $(r_E^*, r_I^*)$:\\\begin{align}&\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-1 + w_{EE} F'_{E}(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E; \alpha_E, \theta_E)] \qquad (8)\\[1mm]&\frac{\partial}{\partial r_I}G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-w_{EI} F'_{E}(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E; \alpha_E, \theta_E)] \qquad (9)\\[1mm]&\frac{\partial}{\partial r_E}G_I(r_E^*, r_I^*) = \frac{1}{\tau_I} [w_{IE} F'_{I}(w_{IE}r_E^* -w_{II}r_I^* + I^{\text{ext}}_I; \alpha_I, \theta_I)] \qquad (10)\\[1mm]&\frac{\partial}{\partial r_I}G_I(r_E^*, r_I^*) = \frac{1}{\tau_I} [-1-w_{II} F'_{I}(w_{IE}r_E^* -w_{II}r_I^* + I^{\text{ext}}_I; \alpha_I, \theta_I)] \qquad (11)\end{align} \\From Equation. (8), it is clear that $\displaystyle{\frac{\partial G_E}{\partial r_I}}$ is negative since the $\displaystyle{\frac{dF}{dx}}$ is always positive. It can be understood by that the recurrent inhibition from the inhibitory activity ($I$) can reduce the excitatory ($E$) activity. However, as described above, $\displaystyle{\frac{\partial G_E}{\partial r_E}}$ has negative terms related to the "leak" effect, and positive term related to the recurrent excitation. Therefore, it leads to two different regimes:- $\displaystyle{\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*)}<0$, **noninhibition-stabilizednetwork (non-ISN) regime**- $\displaystyle{\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*)}>0$, **inhibition-stabilizednetwork (ISN) regime** Exercise 8: Compute $\displaystyle{\frac{\partial G_E}{\partial r_E}}$Implemet the function to calculate the $\displaystyle{\frac{\partial G_E}{\partial r_E}}$ for the default parameters, and the parameters of the limit cycle case. ###Code def get_dGdE(fp, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars): """ Simulate the Wilson-Cowan equations Args: fp : fixed point (E, I), array Other arguments are parameters of the Wilson-Cowan model Returns: J : the 2x2 Jacobian matrix """ rE, rI = fp ########################################################################## # TODO for students: compute dGdrE and disable the error raise NotImplementedError("Student excercise: compute the dG/dE, Eq. (13)") ########################################################################## # Calculate the J[0,0] dGdrE = ... 
return dGdrE # Uncomment below to test your function pars = default_pars() x_fp_1 = my_fp(pars, 0.1, 0.1) x_fp_2 = my_fp(pars, 0.3, 0.3) x_fp_3 = my_fp(pars, 0.8, 0.6) # dGdrE1 = get_dGdE(x_fp_1, **pars) # dGdrE2 = get_dGdE(x_fp_2, **pars) # dGdrE3 = get_dGdE(x_fp_3, **pars) print(f'For the default case:') # print(f'dG/drE(fp1) = {dGdrE1:.3f}') # print(f'dG/drE(fp2) = {dGdrE2:.3f}') # print(f'dG/drE(fp3) = {dGdrE3:.3f}') print('\n') pars = default_pars(wEE=6.4, wEI=4.8, wIE=6.0, wII=1.2, I_ext_E=0.8) x_fp_lc = my_fp(pars, 0.8, 0.8) # dGdrE_lc = get_dGdE(x_fp_lc, **pars) print('For the limit cycle case:') # print(f'dG/drE(fp_lc) = {dGdrE_lc:.3f}') ###Output _____no_output_____ ###Markdown **SAMPLE OUTPUT**```For the default case:dG/drE(fp1) = -0.650dG/drE(fp2) = 1.519dG/drE(fp3) = -0.706For the limit cycle case:dG/drE(fp_lc) = 0.837``` [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_1ff7a08c.py) Nullcline analysis of the ISNRecall that the E nullcline follows\\\begin{align}r_E = F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E). \end{align}\\That is, the firing rate $r_E$ can be a function of $r_I$. Let's take the derivative of $r_E$ over $r_I$, and obtain\\\begin{align}&\frac{dr_E}{dr_I} = F_E' \cdot (w_{EE}\frac{dr_E}{dr_I} -w_{EI}) \iff \\&(1-F_E'w_{EE})\frac{dr_E}{dr_I} = -F_E' w_{EI} \iff \\&\frac{dr_E}{dr_I} = \frac{F_E' w_{EI}}{F_E'w_{EE}-1}.\end{align}\\That is, in the phase plane `rI-rE`-plane, we can obtain the slope along the E nullcline as\\$$\frac{dr_I}{dr_E} = \frac{F_E'w_{EE}-1}{F_E' w_{EI}} \qquad (12)$$Similarly, we can obtain the slope along the I nullcline as \\$$\frac{dr_I}{dr_E} = \frac{F_I'w_{IE}}{F_I' w_{II}+1} \qquad (13)$$\\Then, we can find that $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm I-nullcline} >0$ in Equation (13).\\However, in Equation (12), the sign of $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}$ depends on the sign of $(F_E'w_{EE}-1)$. Note that, $(F_E'w_{EE}-1)$ is the same as what we show above (Equation (8)). Therefore, we can have the following results:- $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}<0$, **noninhibition-stabilizednetwork (non-ISN) regime**- $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}>0$, **inhibition-stabilizednetwork (ISN) regime**\\In addition, it is important to point out the following two conclusions: \\**Conclusion 1:** The stability of a fixed point can determine the relationship between the slopes Equations (12) and (13). 
As discussed above, the fixed point is stable when the Jacobian matrix ($J$ in Equation (7)) has two eigenvalues with a negative real part, which indicates a positive determinant of $J$, i.e., $\text{det}(J)>0$.From the Jacobian matrix definition and from Equations (8-11), we can obtain:$ J= \left[ {\begin{array}{cc} \displaystyle{\frac{1}{\tau_E}(w_{EE}F_E'-1)} & \displaystyle{-\frac{1}{\tau_E}w_{EI}F_E'}\\[1mm] \displaystyle {\frac{1}{\tau_I}w_{IE}F_I'}& \displaystyle {\frac{1}{\tau_I}(-w_{II}F_I'-1)} \\ \end{array} } \right] $\\Note that, if we let \\$ T= \left[ {\begin{array}{cc} \displaystyle{\tau_E} & \displaystyle{0}\\[1mm] \displaystyle 0& \displaystyle \tau_I \\ \end{array} } \right] $, $ F= \left[ {\begin{array}{cc} \displaystyle{F_E'} & \displaystyle{0}\\[1mm] \displaystyle 0& \displaystyle F_I' \\ \end{array} } \right] $, and $ W= \left[ {\begin{array}{cc} \displaystyle{w_{EE}} & \displaystyle{-w_{EI}}\\[1mm] \displaystyle w_{IE}& \displaystyle -w_{II} \\ \end{array} } \right] $\\then, using matrix notation, $J=T^{-1}(F W - I)$ where $I$ is the identity matrix, i.e., $I = \begin{bmatrix} 1 & 0 \\0 & 1 \end{bmatrix}.$ \\Therefore, $\det{(J)}=\det{(T^{-1}(F W - I))}=(\det{(T^{-1})})(\det{(F W - I)}).$Since $\det{(T^{-1})}>0$, as time constants are positive by definition, the sign of $\det{(J)}$ is the same as the sign of $\det{(F W - I)}$, and so$$\det{(FW - I)} = (F_E' w_{EI})(F_I'w_{IE}) - (F_I' w_{II} + 1)(F_E'w_{EE} - 1) > 0.$$\\Then, combining this with Equations (12) and (13), we can obtain$$\frac{\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm I-nullcline}}{\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}} > 1. $$Therefore, at the stable fixed point, the I nullcline has a steeper slope than the E nullcline. **Conclusion 2:** Effect of adding input to the inhibitory population.When we add an input $\delta I^{\rm ext}_I$ to the inhibitory population, we find that the E nullcline (Equation (4)) stays the same, while the I nullcline undergoes a pure leftward shift: the original I nullcline equation,\\\begin{equation}r_I = F_I(w_{IE}r_E-w_{II}r_I + I^{\text{ext}}_I ; \alpha_I, \theta_I)\end{equation}\\remains true if we take $I^{\text{ext}}_I \rightarrow I^{\text{ext}}_I +\delta I^{\rm ext}_I$ and $r_E\rightarrow r_E'=r_E-\frac{\delta I^{\rm ext}_I}{w_{IE}}$ to obtain\\\begin{equation}r_I = F_I(w_{IE}r_E'-w_{II}r_I + I^{\text{ext}}_I +\delta I^{\rm ext}_I; \alpha_I, \theta_I)\end{equation}\\Putting these points together, we obtain the phase plane pictures shown below. After adding input to the inhibitory population, it can be seen in the trajectories above and the phase plane below that, in an **ISN**, $r_I$ will increase first but then decay to the new fixed point, in which both $r_I$ and $r_E$ are decreased compared to the original fixed point. However, by adding $\delta I^{\rm ext}_I$ into a **non-ISN**, $r_I$ will increase while $r_E$ will decrease. Interactive Demo: Nullclines of Example **ISN** and **non-ISN**In this interactive widget, we inject excitatory ($I^{\text{ext}}_I>0$) or inhibitory ($I^{\text{ext}}_I<0$) drive into the inhibitory population when the system is at its equilibrium (with parameters $w_{EE}=6.4$, $w_{EI}=4.8$, $w_{IE}=6.$, $w_{II}=1.2$, $I_{E}^{\text{ext}}=0.8$, $\tau_I = 0.8$, and $I^{\text{ext}}_I=0$). How does the firing rate of the $I$ population change with excitatory vs. inhibitory drive into the inhibitory population? ###Code # @title # @markdown Make sure you execute this cell to enable the widget!
pars = default_pars(T=50., dt=0.1)
pars['wEE'], pars['wEI'] = 6.4, 4.8
pars['wIE'], pars['wII'] = 6.0, 1.2
pars['I_ext_E'] = 0.8
pars['tau_I'] = 0.8

def ISN_I_perturb(dI=0.1):
  Lt = len(pars['range_t'])
  pars['I_ext_I'] = np.zeros(Lt)
  pars['I_ext_I'][int(Lt / 2):] = dI

  pars['rE_init'], pars['rI_init'] = 0.6, 0.26
  rE, rI = simulate_wc(**pars)

  plt.figure(figsize=(8, 1.5))
  plt.plot(pars['range_t'], pars['I_ext_I'], 'k')
  plt.xlabel('t (ms)')
  plt.ylabel(r'$I_I^{\mathrm{ext}}$')
  plt.ylim(pars['I_ext_I'].min() - 0.01, pars['I_ext_I'].max() + 0.01)
  plt.show()

  plt.figure(figsize=(8, 4.5))
  plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$')
  plt.plot(pars['range_t'], rE[int(Lt / 2) - 1] * np.ones(Lt), 'b--')
  plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$')
  plt.plot(pars['range_t'], rI[int(Lt / 2) - 1] * np.ones(Lt), 'r--')
  plt.ylim(0, 0.8)
  plt.xlabel('t (ms)')
  plt.ylabel('Activity')
  plt.legend(loc='best')
  plt.show()

_ = widgets.interact(ISN_I_perturb, dI=(-0.2, 0.21, .05))
###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_cec4906e.py) --- Bonus 3: Fixed point and working memory The input into neurons measured in experiments is often very noisy ([link](http://www.scholarpedia.org/article/Stochastic_dynamical_systems)). Here, the noisy synaptic input current is modeled as an Ornstein-Uhlenbeck (OU) process, which has been discussed several times in the previous tutorials. ###Code # @markdown Make sure you execute this cell to enable the function my_OU and plot the input current!

def my_OU(pars, sig, myseed=False):
  """
  Expects:
  pars       : parameter dictionary
  sig        : noise amplitude
  myseed     : random seed. int or boolean

  Returns:
  I          : Ornstein-Uhlenbeck input current
  """

  # Retrieve simulation parameters
  dt, range_t = pars['dt'], pars['range_t']
  Lt = range_t.size
  tau_ou = pars['tau_ou']  # [ms]

  # set random seed
  if myseed:
    np.random.seed(seed=myseed)
  else:
    np.random.seed()

  # Initialize
  noise = np.random.randn(Lt)
  I_ou = np.zeros(Lt)
  I_ou[0] = noise[0] * sig

  # generate OU
  for it in range(Lt-1):
    I_ou[it+1] = (I_ou[it] + dt / tau_ou * (0. - I_ou[it])
                  + np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1])

  return I_ou


pars = default_pars(T=50)
pars['tau_ou'] = 1.  # [ms]
sig_ou = 0.1
I_ou = my_OU(pars, sig=sig_ou, myseed=2020)
plt.figure(figsize=(8, 5.5))
plt.plot(pars['range_t'], I_ou, 'b')
plt.xlabel('Time (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$')
plt.show()
###Output _____no_output_____ ###Markdown With the default parameters, the system fluctuates around a resting state with the noisy input. ###Code # @markdown Execute this cell to plot activity with noisy input current

pars = default_pars(T=100)
pars['tau_ou'] = 1.  # [ms]
sig_ou = 0.1
pars['I_ext_E'] = my_OU(pars, sig=sig_ou, myseed=20201)
pars['I_ext_I'] = my_OU(pars, sig=sig_ou, myseed=20202)
pars['rE_init'], pars['rI_init'] = 0.1, 0.1

rE, rI = simulate_wc(**pars)
plt.figure(figsize=(8, 5.5))
ax = plt.subplot(111)
ax.plot(pars['range_t'], rE, 'b', label='E population')
ax.plot(pars['range_t'], rI, 'r', label='I population')
ax.set_xlabel('t (ms)')
ax.set_ylabel('Activity')
ax.legend(loc='best')
plt.show()
###Output _____no_output_____ ###Markdown Interactive Demo: Short pulse induced persistent activityThen, let's use a brief 10-ms positive current to the E population when the system is at its equilibrium.
When this amplitude (SE below) is sufficiently large, a persistent activity is produced that outlasts the transient input. What is the firing rate of the persistent activity, and what is the critical input strength? Try to understand the phenomena from the above phase-plane analysis. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def my_inject(pars, t_start, t_lag=10.): """ Expects: pars : parameter dictionary t_start : pulse starts [ms] t_lag : pulse lasts [ms] Returns: I : extra pulse time """ # Retrieve simulation parameters dt, range_t = pars['dt'], pars['range_t'] Lt = range_t.size # Initialize I = np.zeros(Lt) # pulse timing N_start = int(t_start / dt) N_lag = int(t_lag / dt) I[N_start:N_start + N_lag] = 1. return I pars = default_pars(T=100) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 pars['I_ext_I'] = my_OU(pars, sig=sig_ou, myseed=2021) pars['rE_init'], pars['rI_init'] = 0.1, 0.1 # pulse I_pulse = my_inject(pars, t_start=20., t_lag=10.) L_pulse = sum(I_pulse > 0.) def WC_with_pulse(SE=0.): pars['I_ext_E'] = my_OU(pars, sig=sig_ou, myseed=2022) pars['I_ext_E'] += SE * I_pulse rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) ax = plt.subplot(111) ax.plot(pars['range_t'], rE, 'b', label='E population') ax.plot(pars['range_t'], rI, 'r', label='I population') ax.plot(pars['range_t'][I_pulse > 0.], 1.0*np.ones(L_pulse), 'r', lw=3.) ax.text(25, 1.05, 'stimulus on', horizontalalignment='center', verticalalignment='bottom') ax.set_ylim(-0.03, 1.2) ax.set_xlabel('t (ms)') ax.set_ylabel('Activity') ax.legend(loc='best') plt.show() _ = widgets.interact(WC_with_pulse, SE=(0.0, 1.0, .05)) ###Output _____no_output_____ ###Markdown &nbsp; Tutorial 2: Wilson-Cowan Model**Week 2, Day 4: Dynamic Networks****By Neuromatch Academy**__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom, Siddharth Suresh **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial Objectives*Estimated timing of tutorial: 1 hour, 35 minutes*In the previous tutorial, you became familiar with a neuronal network consisting of only an excitatory population. Here, we extend the approach we used to include both excitatory and inhibitory neuronal populations in our network. 
A simple, yet powerful model to study the dynamics of two interacting populations of excitatory and inhibitory neurons, is the so-called **Wilson-Cowan** rate model, which will be the subject of this tutorial.The objectives of this tutorial are to:- Write the **Wilson-Cowan** equations for the firing rate dynamics of a 2D system composed of an excitatory (E) and an inhibitory (I) population of neurons- Simulate the dynamics of the system, i.e., Wilson-Cowan model.- Plot the frequency-current (F-I) curves for both populations (i.e., E and I).- Visualize and inspect the behavior of the system using **phase plane analysis**, **vector fields**, and **nullclines**.Bonus steps:- Find and plot the **fixed points** of the Wilson-Cowan model.- Investigate the stability of the Wilson-Cowan model by linearizing its dynamics and examining the **Jacobian matrix**.- Learn how the Wilson-Cowan model can reach an oscillatory state.Bonus steps (applications):- Visualize the behavior of an Inhibition-stabilized network.- Simulate working memory using the Wilson-Cowan model.Reference paper:Wilson H and Cowan J (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal 12, doi: [10.1016/S0006-3495(72)86068-5](https://doi.org/10.1016/S0006-3495(72)86068-5). ###Code # @title Tutorial slides # @markdown These are the slides for the videos in all tutorials today from IPython.display import IFrame IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/nvuty/?direct%26mode=render%26action=download%26mode=render", width=854, height=480) ###Output _____no_output_____ ###Markdown --- Setup ###Code # Imports import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt # root-finding algorithm # @title Figure Settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # @title Plotting Functions def plot_FI_inverse(x, a, theta): f, ax = plt.subplots() ax.plot(x, F_inv(x, a=a, theta=theta)) ax.set(xlabel="$x$", ylabel="$F^{-1}(x)$") def plot_FI_EI(x, FI_exc, FI_inh): plt.figure() plt.plot(x, FI_exc, 'b', label='E population') plt.plot(x, FI_inh, 'r', label='I population') plt.legend(loc='lower right') plt.xlabel('x (a.u.)') plt.ylabel('F(x)') plt.show() def my_test_plot(t, rE1, rI1, rE2, rI2): plt.figure() ax1 = plt.subplot(211) ax1.plot(pars['range_t'], rE1, 'b', label='E population') ax1.plot(pars['range_t'], rI1, 'r', label='I population') ax1.set_ylabel('Activity') ax1.legend(loc='best') ax2 = plt.subplot(212, sharex=ax1, sharey=ax1) ax2.plot(pars['range_t'], rE2, 'b', label='E population') ax2.plot(pars['range_t'], rI2, 'r', label='I population') ax2.set_xlabel('t (ms)') ax2.set_ylabel('Activity') ax2.legend(loc='best') plt.tight_layout() plt.show() def plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI): plt.figure() plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.show() def my_plot_nullcline(pars): Exc_null_rE = np.linspace(-0.01, 0.96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, 0.8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') 
plt.ylabel(r'$r_I$') plt.legend(loc='best') def my_plot_vector(pars, my_n_skip=2, myscale=5): EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = my_n_skip plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=myscale, facecolor='c') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectory(pars, mycolor, x_init, mylabel): pars = pars.copy() pars['rE_init'], pars['rI_init'] = x_init[0], x_init[1] rE_tj, rI_tj = simulate_wc(**pars) plt.plot(rE_tj, rI_tj, color=mycolor, label=mylabel) plt.plot(x_init[0], x_init[1], 'o', color=mycolor, ms=8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectories(pars, dx, n, mylabel): """ Solve for I along the E_grid from dE/dt = 0. Expects: pars : Parameter dictionary dx : increment of initial values n : n*n trjectories mylabel : label for legend Returns: figure of trajectory """ pars = pars.copy() for ie in range(n): for ii in range(n): pars['rE_init'], pars['rI_init'] = dx * ie, dx * ii rE_tj, rI_tj = simulate_wc(**pars) if (ie == n-1) & (ii == n-1): plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8, label=mylabel) else: plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def plot_complete_analysis(pars): plt.figure(figsize=(7.7, 6.)) # plot example trajectories my_plot_trajectories(pars, 0.2, 6, 'Sample trajectories \nfor different init. conditions') my_plot_trajectory(pars, 'orange', [0.6, 0.8], 'Sample trajectory for \nlow activity') my_plot_trajectory(pars, 'm', [0.6, 0.6], 'Sample trajectory for \nhigh activity') # plot nullclines my_plot_nullcline(pars) # plot vector field EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = 2 plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=5., facecolor='c') plt.legend(loc=[1.02, 0.57], handlelength=1) plt.show() def plot_fp(x_fp, position=(0.02, 0.1), rotation=0): plt.plot(x_fp[0], x_fp[1], 'ko', ms=8) plt.text(x_fp[0] + position[0], x_fp[1] + position[1], f'Fixed Point1=\n({x_fp[0]:.3f}, {x_fp[1]:.3f})', horizontalalignment='center', verticalalignment='bottom', rotation=rotation) # @title Helper Functions def default_pars(**kwargs): pars = {} # Excitatory parameters pars['tau_E'] = 1. # Timescale of the E population [ms] pars['a_E'] = 1.2 # Gain of the E population pars['theta_E'] = 2.8 # Threshold of the E population # Inhibitory parameters pars['tau_I'] = 2.0 # Timescale of the I population [ms] pars['a_I'] = 1.0 # Gain of the I population pars['theta_I'] = 4.0 # Threshold of the I population # Connection strength pars['wEE'] = 9. # E to E pars['wEI'] = 4. # I to E pars['wIE'] = 13. # E to I pars['wII'] = 11. # I to I # External input pars['I_ext_E'] = 0. pars['I_ext_I'] = 0. # simulation parameters pars['T'] = 50. 
# Total duration of simulation [ms] pars['dt'] = .1 # Simulation time step [ms] pars['rE_init'] = 0.2 # Initial value of E pars['rI_init'] = 0.2 # Initial value of I # External parameters if any for k in kwargs: pars[k] = kwargs[k] # Vector of discretized time points [ms] pars['range_t'] = np.arange(0, pars['T'], pars['dt']) return pars def F(x, a, theta): """ Population activation function, F-I curve Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: f : the population activation response f(x) for input x """ # add the expression of f = F(x) f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1 return f def dF(x, a, theta): """ Derivative of the population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: dFdx : Derivative of the population activation function. """ dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2 return dFdx ###Output _____no_output_____ ###Markdown The helper functions included:- Parameter dictionary: `default_pars(**kwargs)`. You can use: - `pars = default_pars()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars(T=T_sim, dt=time_step)` to set a different simulation time and time step - After `pars = default_pars()`, use `par['New_para'] = value` to add a new parameter with its value - Pass to functions that accept individual parameters with `func(**pars)`- F-I curve: `F(x, a, theta)`- Derivative of the F-I curve: `dF(x, a, theta)` --- Section 1: Wilson-Cowan model of excitatory and inhibitory populations ###Code # @title Video 1: Phase analysis of the Wilson-Cowan E-I model from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1CD4y1m7dK", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="GCpQmh45crM", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown This video explains how to model a network with interacting populations of excitatory and inhibitory neurons (the Wilson-Cowan model). It shows how to solve the network activity vs. time and introduces the phase plane in two dimensions. Section 1.1: Mathematical description of the WC model*Estimated timing to here from start of tutorial: 12 min* Click here for text recap of relevant part of video Many of the rich dynamics recorded in the brain are generated by the interaction of excitatory and inhibitory subtype neurons. Here, similar to what we did in the previous tutorial, we will model two coupled populations of E and I neurons (**Wilson-Cowan** model). 
We can write two coupled differential equations, each representing the dynamics of the excitatory or inhibitory population:\begin{align}\tau_E \frac{dr_E}{dt} &= -r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)\\\tau_I \frac{dr_I}{dt} &= -r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) \qquad (1)\end{align}$r_E(t)$ represents the average activation (or firing rate) of the excitatory population at time $t$, and $r_I(t)$ the activation (or firing rate) of the inhibitory population. The parameters $\tau_E$ and $\tau_I$ control the timescales of the dynamics of each population. Connection strengths are given by: $w_{EE}$ (E $\rightarrow$ E), $w_{EI}$ (I $\rightarrow$ E), $w_{IE}$ (E $\rightarrow$ I), and $w_{II}$ (I $\rightarrow$ I). The terms $w_{EI}$ and $w_{IE}$ represent connections from inhibitory to excitatory population and vice versa, respectively. The transfer functions (or F-I curves) $F_E(x;a_E,\theta_E)$ and $F_I(x;a_I,\theta_I)$ can be different for the excitatory and the inhibitory populations. Coding Exercise 1.1: Plot out the F-I curves for the E and I populations Let's first plot out the F-I curves for the E and I populations using the helper function `F` with default parameter values. ###Code help(F) pars = default_pars() x = np.arange(0, 10, .1) print(pars['a_E'], pars['theta_E']) print(pars['a_I'], pars['theta_I']) ################################################################### # TODO for students: compute and plot the F-I curve here # # Note: aE, thetaE, aI and theta_I are in the dictionray 'pars' # raise NotImplementedError('student exercise: compute F-I curves of excitatory and inhibitory populations') ################################################################### # Compute the F-I curve of the excitatory population FI_exc = ... # Compute the F-I curve of the inhibitory population FI_inh = ... # Visualize plot_FI_EI(x, FI_exc, FI_inh) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_043dd600.py)*Example output:* Section 1.2: Simulation scheme for the Wilson-Cowan model*Estimated timing to here from start of tutorial: 20 min*Once again, we can integrate our equations numerically. Using the Euler method, the dynamics of E and I populations can be simulated on a time-grid of stepsize $\Delta t$. The updates for the activity of the excitatory and the inhibitory populations can be written as:\begin{align}r_E[k+1] &= r_E[k] + \Delta r_E[k]\\r_I[k+1] &= r_I[k] + \Delta r_I[k] \end{align}with the increments\begin{align}\Delta r_E[k] &= \frac{\Delta t}{\tau_E}[-r_E[k] + F_E(w_{EE}r_E[k] -w_{EI}r_I[k] + I^{\text{ext}}_E[k];a_E,\theta_E)]\\\Delta r_I[k] &= \frac{\Delta t}{\tau_I}[-r_I[k] + F_I(w_{IE}r_E[k] -w_{II}r_I[k] + I^{\text{ext}}_I[k];a_I,\theta_I)] \end{align} Coding Exercise 1.2: Numerically integrate the Wilson-Cowan equationsWe will implemenent this numerical simulation of our equations and visualize two simulations with similar initial points. 
###Code def simulate_wc(tau_E, a_E, theta_E, tau_I, a_I, theta_I, wEE, wEI, wIE, wII, I_ext_E, I_ext_I, rE_init, rI_init, dt, range_t, **other_pars): """ Simulate the Wilson-Cowan equations Args: Parameters of the Wilson-Cowan model Returns: rE, rI (arrays) : Activity of excitatory and inhibitory populations """ # Initialize activity arrays Lt = range_t.size rE = np.append(rE_init, np.zeros(Lt - 1)) rI = np.append(rI_init, np.zeros(Lt - 1)) I_ext_E = I_ext_E * np.ones(Lt) I_ext_I = I_ext_I * np.ones(Lt) # Simulate the Wilson-Cowan equations for k in range(Lt - 1): ######################################################################## # TODO for students: compute drE and drI and remove the error raise NotImplementedError("Student exercise: compute the change in E/I") ######################################################################## # Calculate the derivative of the E population drE = ... # Calculate the derivative of the I population drI = ... # Update using Euler's method rE[k + 1] = rE[k] + drE rI[k + 1] = rI[k] + drI return rE, rI pars = default_pars() # Simulate first trajectory rE1, rI1 = simulate_wc(**default_pars(rE_init=.32, rI_init=.15)) # Simulate second trajectory rE2, rI2 = simulate_wc(**default_pars(rE_init=.33, rI_init=.15)) # Visualize my_test_plot(pars['range_t'], rE1, rI1, rE2, rI2) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_15eff812.py)*Example output:* The two plots above show the temporal evolution of excitatory ($r_E$, blue) and inhibitory ($r_I$, red) activity for two different sets of initial conditions. Interactive Demo 1.2: population trajectories with different initial valuesIn this interactive demo, we will simulate the Wilson-Cowan model and plot the trajectories of each population. We change the initial activity of the excitatory population.What happens to the E and I population trajectories with different initial conditions? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def plot_EI_diffinitial(rE_init=0.0): pars = default_pars(rE_init=rE_init, rI_init=.15) rE, rI = simulate_wc(**pars) plt.figure() plt.plot(pars['range_t'], rE, 'b', label='E population') plt.plot(pars['range_t'], rI, 'r', label='I population') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() _ = widgets.interact(plot_EI_diffinitial, rE_init=(0.30, 0.35, .01)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_50331264.py) Think! 1.2It is evident that the steady states of the neuronal response can be different when different initial states are chosen. Why is that? We will discuss this in the next section but try to think about it first. --- Section 2: Phase plane analysis*Estimated timing to here from start of tutorial: 45 min*Just like we used a graphical method to study the dynamics of a 1-D system in the previous tutorial, here we will learn a graphical approach called **phase plane analysis** to study the dynamics of a 2-D system like the Wilson-Cowan model. 
You have seen this before in the [pre-reqs calculus day](https://compneuro.neuromatch.io/tutorials/W0D4_Calculus/student/W0D4_Tutorial3.htmlsection-3-2-phase-plane-plot-and-nullcline) and on the [Linear Systems day](https://compneuro.neuromatch.io/tutorials/W2D2_LinearSystems/student/W2D2_Tutorial1.htmlsection-4-stream-plots)So far, we have plotted the activities of the two populations as a function of time, i.e., in the `Activity-t` plane, either the $(t, r_E(t))$ plane or the $(t, r_I(t))$ one. Instead, we can plot the two activities $r_E(t)$ and $r_I(t)$ against each other at any time point $t$. This characterization in the `rI-rE` plane $(r_I(t), r_E(t))$ is called the **phase plane**. Each line in the phase plane indicates how both $r_E$ and $r_I$ evolve with time. ###Code # @title Video 2: Nullclines and Vector Fields from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV15k4y1m7Kt", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="V2SBAK2Xf8Y", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Interactive Demo 2: From the Activity - time plane to the **$r_I$ - $r_E$** phase planeIn this demo, we will visualize the system dynamics using both the `Activity-time` and the `(rE, rI)` phase plane. The circles indicate the activities at a given time $t$, while the lines represent the evolution of the system for the entire duration of the simulation.Move the time slider to better understand how the top plot relates to the bottom plot. Does the bottom plot have explicit information about time? What information does it give us? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! 
pars = default_pars(T=10, rE_init=0.6, rI_init=0.8) rE, rI = simulate_wc(**pars) def plot_activity_phase(n_t): plt.figure(figsize=(8, 5.5)) plt.subplot(211) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.plot(pars['range_t'][n_t], rE[n_t], 'bo') plt.plot(pars['range_t'][n_t], rI[n_t], 'ro') plt.axvline(pars['range_t'][n_t], 0, 1, color='k', ls='--') plt.xlabel('t (ms)', fontsize=14) plt.ylabel('Activity', fontsize=14) plt.legend(loc='best', fontsize=14) plt.subplot(212) plt.plot(rE, rI, 'k') plt.plot(rE[n_t], rI[n_t], 'ko') plt.xlabel(r'$r_E$', fontsize=18, color='b') plt.ylabel(r'$r_I$', fontsize=18, color='r') plt.tight_layout() plt.show() _ = widgets.interact(plot_activity_phase, n_t=(0, len(pars['range_t']) - 1, 1)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_5d1fcb72.py) Section 2.1: Nullclines of the Wilson-Cowan Equations*Estimated timing to here from start of tutorial: 1 hour, 3 min*An important concept in the phase plane analysis is the "nullcline" which is defined as the set of points in the phase plane where the activity of one population (but not necessarily the other) does not change.In other words, the $E$ and $I$ nullclines of Equation $(1)$ are defined as the points where $\displaystyle{\frac{dr_E}{dt}}=0$, for the excitatory nullcline, or $\displaystyle\frac{dr_I}{dt}=0$ for the inhibitory nullcline. That is:\begin{align}-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E) &= 0 \qquad (2)\\[1mm]-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) &= 0 \qquad (3)\end{align} Coding Exercise 2.1: Compute the nullclines of the Wilson-Cowan modelIn the next exercise, we will compute and plot the nullclines of the E and I population. Along the nullcline of excitatory population Equation $2$, you can calculate the inhibitory activity by rewriting Equation $2$ into\begin{align}r_I = \frac{1}{w_{EI}}\big{[}w_{EE}r_E - F_E^{-1}(r_E; a_E,\theta_E) + I^{\text{ext}}_E \big{]}. \qquad(4)\end{align}where $F_E^{-1}(r_E; a_E,\theta_E)$ is the inverse of the excitatory transfer function (defined below). Equation $4$ defines the $r_E$ nullcline. Along the nullcline of inhibitory population Equation $3$, you can calculate the excitatory activity by rewriting Equation $3$ into \begin{align}r_E = \frac{1}{w_{IE}} \big{[} w_{II}r_I + F_I^{-1}(r_I;a_I,\theta_I) - I^{\text{ext}}_I \big{]} \qquad (5)\end{align}shere $F_I^{-1}(r_I; a_I,\theta_I)$ is the inverse of the inhibitory transfer function (defined below). Equation $5$ defines the $I$ nullcline. 
Note that, when computing the nullclines with Equations 4-5, we also need to calculate the inverse of the transfer functions.The inverse of the sigmoid shaped **f-I** function that we have been using is:$$F^{-1}(x; a, \theta) = -\frac{1}{a} \ln \left[ \frac{1}{x + \displaystyle \frac{1}{1+\text{e}^{a\theta}}} - 1 \right] + \theta \qquad (6)$$The first step is to implement the inverse transfer function: ###Code def F_inv(x, a, theta): """ Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: F_inverse : value of the inverse function """ ######################################################################### # TODO for students: compute F_inverse raise NotImplementedError("Student exercise: compute the inverse of F(x)") ######################################################################### # Calculate Finverse (ln(x) can be calculated as np.log(x)) F_inverse = ... return F_inverse # Set parameters pars = default_pars() x = np.linspace(1e-6, 1, 100) # Get inverse and visualize plot_FI_inverse(x, a=1, theta=3) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_f3500f59.py)*Example output:* Now you can compute the nullclines, using Equations 4-5 (repeated here for ease of access):\begin{align}r_I = \frac{1}{w_{EI}}\big{[}w_{EE}r_E - F_E^{-1}(r_E; a_E,\theta_E) + I^{\text{ext}}_E \big{]}. \qquad(4)\end{align}\begin{align}r_E = \frac{1}{w_{IE}} \big{[} w_{II}r_I + F_I^{-1}(r_I;a_I,\theta_I) - I^{\text{ext}}_I \big{]}. \qquad (5) \end{align} ###Code def get_E_nullcline(rE, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars): """ Solve for rI along the rE from drE/dt = 0. Args: rE : response of excitatory population a_E, theta_E, wEE, wEI, I_ext_E : Wilson-Cowan excitatory parameters Other parameters are ignored Returns: rI : values of inhibitory population along the nullcline on the rE """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the E nullcline") ######################################################################### # calculate rI for E nullclines on rI rI = ... return rI def get_I_nullcline(rI, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """ Solve for E along the rI from dI/dt = 0. Args: rI : response of inhibitory population a_I, theta_I, wIE, wII, I_ext_I : Wilson-Cowan inhibitory parameters Other parameters are ignored Returns: rE : values of the excitatory population along the nullcline on the rI """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the I nullcline") ######################################################################### # calculate rE for I nullclines on rI rE = ... 
return rE # Set parameters pars = default_pars() Exc_null_rE = np.linspace(-0.01, 0.96, 100) Inh_null_rI = np.linspace(-.01, 0.8, 100) # Compute nullclines Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) # Visualize plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_db10856b.py)*Example output:* Note that by definition along the blue line in the phase-plane spanned by $r_E, r_I$, $\displaystyle{\frac{dr_E(t)}{dt}} = 0$, therefore, it is called a nullcline. That is, the blue nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_E(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_E(t)}{dt}} < 0$.The same is true for the red line along which $\displaystyle{\frac{dr_I(t)}{dt}} = 0$. That is, the red nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_I(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_I(t)}{dt}} < 0$. Section 2.2: Vector field*Estimated timing to here from start of tutorial: 1 hour, 20 min*How can the phase plane and the nullcline curves help us understand the behavior of the Wilson-Cowan model? Click here for text recap of relevant part of video The activities of the $E$ and $I$ populations $r_E(t)$ and $r_I(t)$ at each time point $t$ correspond to a single point in the phase plane, with coordinates $(r_E(t),r_I(t))$. Therefore, the time-dependent trajectory of the system can be described as a continuous curve in the phase plane, and the tangent vector to the trajectory, which is defined as the vector $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$, indicates the direction towards which the activity is evolving and how fast is the activity changing along each axis. In fact, for each point $(E,I)$ in the phase plane, we can compute the tangent vector $\bigg{(}\displaystyle{\frac{dr_E}{dt},\frac{dr_I}{dt}}\bigg{)}$, which indicates the behavior of the system when it traverses that point. The map of tangent vectors in the phase plane is called **vector field**. The behavior of any trajectory in the phase plane is determined by i) the initial conditions $(r_E(0),r_I(0))$, and ii) the vector field $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$.In general, the value of the vector field at a particular point in the phase plane is represented by an arrow. The orientation and the size of the arrow reflect the direction and the norm of the vector, respectively. 
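To make the quiver mechanics concrete before the exercise, here is a minimal, self-contained sketch of how a grid of tangent vectors is turned into a vector-field picture with `plt.quiver` (the same call used by `plot_complete_analysis`). The toy dynamics below are an arbitrary damped linear system chosen purely for illustration; they are not the Wilson-Cowan equations, so the sketch does not give away the exercise.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy dynamics (illustrative assumption, NOT the Wilson-Cowan equations):
# a damped rotation in the (x, y) plane
def toy_derivs(x, y):
  dxdt = -y - 0.5 * x   # x-component of the tangent vector at (x, y)
  dydt = x - 0.5 * y    # y-component of the tangent vector at (x, y)
  return dxdt, dydt

# Evaluate the tangent vector on a grid of phase-plane points
grid = np.linspace(-1., 1., 15)
x, y = np.meshgrid(grid, grid)
dxdt, dydt = toy_derivs(x, y)

# Each arrow points in the direction the state moves; its length reflects the speed
plt.quiver(x, y, dxdt, dydt, angles='xy', scale_units='xy', scale=5., facecolor='c')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Toy vector field drawn with plt.quiver')
plt.show()
```

In the exercise below, `EIderivs` plays the role of `toy_derivs`, with $(r_E, r_I)$ in place of $(x, y)$.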
Coding Exercise 2.2: Compute and plot the vector field $\displaystyle{\Big{(}\frac{dr_E}{dt}, \frac{dr_I}{dt} \Big{)}}$Note that\begin{align}\frac{dr_E}{dt} &= [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)]\frac{1}{\tau_E}\\\frac{dr_I}{dt} &= [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I)]\frac{1}{\tau_I} \end{align} ###Code def EIderivs(rE, rI, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Time derivatives for E/I variables (dE/dt, dI/dt).""" ###################################################################### # TODO for students: compute drEdt and drIdt and disable the error raise NotImplementedError("Student exercise: compute the vector field") ###################################################################### # Compute the derivative of rE drEdt = ... # Compute the derivative of rI drIdt = ... return drEdt, drIdt # Create vector field using EIderivs plot_complete_analysis(default_pars()) ###Output _____no_output_____ ###Markdown Tutorial 2: Wilson-Cowan Model**Week 2, Day 4: Dynamic Networks****By Neuromatch Academy**__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom --- Tutorial ObjectivesIn the previous tutorial, you became familiar with a neuronal network consisting of only an excitatory population. Here, we extend the approach we used to include both excitatory and inhibitory neuronal populations in our network. A simple, yet powerful model to study the dynamics of two interacting populations of excitatory and inhibitory neurons, is the so-called **Wilson-Cowan** rate model, which will be the subject of this tutorial.The objectives of this tutorial are to:- Write the **Wilson-Cowan** equations for the firing rate dynamics of a 2D system composed of an excitatory (E) and an inhibitory (I) population of neurons- Simulate the dynamics of the system, i.e., Wilson-Cowan model.- Plot the frequency-current (F-I) curves for both populations (i.e., E and I).- Visualize and inspect the behavior of the system using **phase plane analysis**, **vector fields**, and **nullclines**.Bonus steps:- Find and plot the **fixed points** of the Wilson-Cowan model.- Investigate the stability of the Wilson-Cowan model by linearizing its dynamics and examining the **Jacobian matrix**.- Learn how the Wilson-Cowan model can reach an oscillatory state.Bonus steps (applications):- Visualize the behavior of an Inhibition-stabilized network.- Simulate working memory using the Wilson-Cowan model.\\Reference paper:_[Wilson H and Cowan J (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal 12](https://doi.org/10.1016/S0006-3495(72)86068-5)_ --- Setup ###Code # Imports import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt # root-finding algorithm # @title Figure Settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # @title Helper functions def default_pars(**kwargs): pars = {} # Excitatory parameters pars['tau_E'] = 1. 
# Timescale of the E population [ms] pars['a_E'] = 1.2 # Gain of the E population pars['theta_E'] = 2.8 # Threshold of the E population # Inhibitory parameters pars['tau_I'] = 2.0 # Timescale of the I population [ms] pars['a_I'] = 1.0 # Gain of the I population pars['theta_I'] = 4.0 # Threshold of the I population # Connection strength pars['wEE'] = 9. # E to E pars['wEI'] = 4. # I to E pars['wIE'] = 13. # E to I pars['wII'] = 11. # I to I # External input pars['I_ext_E'] = 0. pars['I_ext_I'] = 0. # simulation parameters pars['T'] = 50. # Total duration of simulation [ms] pars['dt'] = .1 # Simulation time step [ms] pars['rE_init'] = 0.2 # Initial value of E pars['rI_init'] = 0.2 # Initial value of I # External parameters if any for k in kwargs: pars[k] = kwargs[k] # Vector of discretized time points [ms] pars['range_t'] = np.arange(0, pars['T'], pars['dt']) return pars def F(x, a, theta): """ Population activation function, F-I curve Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: f : the population activation response f(x) for input x """ # add the expression of f = F(x) f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1 return f def dF(x, a, theta): """ Derivative of the population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: dFdx : Derivative of the population activation function. """ dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2 return dFdx def plot_FI_inverse(x, a, theta): f, ax = plt.subplots() ax.plot(x, F_inv(x, a=a, theta=theta)) ax.set(xlabel="$x$", ylabel="$F^{-1}(x)$") def plot_FI_EI(x, FI_exc, FI_inh): plt.figure() plt.plot(x, FI_exc, 'b', label='E population') plt.plot(x, FI_inh, 'r', label='I population') plt.legend(loc='lower right') plt.xlabel('x (a.u.)') plt.ylabel('F(x)') plt.show() def my_test_plot(t, rE1, rI1, rE2, rI2): plt.figure() ax1 = plt.subplot(211) ax1.plot(pars['range_t'], rE1, 'b', label='E population') ax1.plot(pars['range_t'], rI1, 'r', label='I population') ax1.set_ylabel('Activity') ax1.legend(loc='best') ax2 = plt.subplot(212, sharex=ax1, sharey=ax1) ax2.plot(pars['range_t'], rE2, 'b', label='E population') ax2.plot(pars['range_t'], rI2, 'r', label='I population') ax2.set_xlabel('t (ms)') ax2.set_ylabel('Activity') ax2.legend(loc='best') plt.tight_layout() plt.show() def plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI): plt.figure() plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.show() def my_plot_nullcline(pars): Exc_null_rE = np.linspace(-0.01, 0.96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, 0.8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') def my_plot_vector(pars, my_n_skip=2, myscale=5): EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = my_n_skip plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=myscale, facecolor='c') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectory(pars, mycolor, 
x_init, mylabel): pars = pars.copy() pars['rE_init'], pars['rI_init'] = x_init[0], x_init[1] rE_tj, rI_tj = simulate_wc(**pars) plt.plot(rE_tj, rI_tj, color=mycolor, label=mylabel) plt.plot(x_init[0], x_init[1], 'o', color=mycolor, ms=8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectories(pars, dx, n, mylabel): """ Solve for I along the E_grid from dE/dt = 0. Expects: pars : Parameter dictionary dx : increment of initial values n : n*n trjectories mylabel : label for legend Returns: figure of trajectory """ pars = pars.copy() for ie in range(n): for ii in range(n): pars['rE_init'], pars['rI_init'] = dx * ie, dx * ii rE_tj, rI_tj = simulate_wc(**pars) if (ie == n-1) & (ii == n-1): plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8, label=mylabel) else: plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def plot_complete_analysis(pars): plt.figure(figsize=(7.7, 6.)) # plot example trajectories my_plot_trajectories(pars, 0.2, 6, 'Sample trajectories \nfor different init. conditions') my_plot_trajectory(pars, 'orange', [0.6, 0.8], 'Sample trajectory for \nlow activity') my_plot_trajectory(pars, 'm', [0.6, 0.6], 'Sample trajectory for \nhigh activity') # plot nullclines my_plot_nullcline(pars) # plot vector field EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = 2 plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=5., facecolor='c') plt.legend(loc=[1.02, 0.57], handlelength=1) plt.show() def plot_fp(x_fp, position=(0.02, 0.1), rotation=0): plt.plot(x_fp[0], x_fp[1], 'ko', ms=8) plt.text(x_fp[0] + position[0], x_fp[1] + position[1], f'Fixed Point1=\n({x_fp[0]:.3f}, {x_fp[1]:.3f})', horizontalalignment='center', verticalalignment='bottom', rotation=rotation) ###Output _____no_output_____ ###Markdown The helper functions included:- Parameter dictionary: `default_pars(**kwargs)`. You can use: - `pars = default_pars()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars(T=T_sim, dt=time_step)` to set a different simulation time and time step - After `pars = default_pars()`, use `par['New_para'] = value` to add a new parameter with its value - Pass to functions that accept individual parameters with `func(**pars)`- F-I curve: `F(x, a, theta)`- Derivative of the F-I curve: `dF(x, a, theta)`- Plotting utilities --- Section 1: Wilson-Cowan model of excitatory and inhibitory populations ###Code # @title Video 1: Phase analysis of the Wilson-Cowan E-I model from IPython.display import YouTubeVideo video = YouTubeVideo(id="GCpQmh45crM", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ###Output _____no_output_____ ###Markdown Section 1.1: Mathematical description of the WC modelMany of the rich dynamics recorded in the brain are generated by the interaction of excitatory and inhibitory subtype neurons. Here, similar to what we did in the previous tutorial, we will model two coupled populations of E and I neurons (**Wilson-Cowan** model). 
We can write two coupled differential equations, each representing the dynamics of the excitatory or inhibitory population:\begin{align}\tau_E \frac{dr_E}{dt} &= -r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)\\\tau_I \frac{dr_I}{dt} &= -r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) \qquad (1)\end{align}$r_E(t)$ represents the average activation (or firing rate) of the excitatory population at time $t$, and $r_I(t)$ the activation (or firing rate) of the inhibitory population. The parameters $\tau_E$ and $\tau_I$ control the timescales of the dynamics of each population. Connection strengths are given by: $w_{EE}$ (E $\rightarrow$ E), $w_{EI}$ (I $\rightarrow$ E), $w_{IE}$ (E $\rightarrow$ I), and $w_{II}$ (I $\rightarrow$ I). The terms $w_{EI}$ and $w_{IE}$ represent connections from inhibitory to excitatory population and vice versa, respectively. The transfer functions (or F-I curves) $F_E(x;a_E,\theta_E)$ and $F_I(x;a_I,\theta_I)$ can be different for the excitatory and the inhibitory populations. Exercise 1: Plot out the F-I curves for the E and I populations Let's first plot out the F-I curves for the E and I populations using the function defined above with default parameter values. ###Code pars = default_pars() x = np.arange(0, 10, .1) print(pars['a_E'], pars['theta_E']) print(pars['a_I'], pars['theta_I']) ################################################################### # TODO for students: compute and plot the F-I curve here # # Note: aE, thetaE, aI and theta_I are in the dictionray 'pars' # ################################################################### # Compute the F-I curve of the excitatory population FI_exc = ... # Compute the F-I curve of the inhibitory population FI_inh = ... # Uncomment when you fill the (...) # plot_FI_EI(x, FI_exc, FI_inh) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_b3a0ec15.py)*Example output:* Section 1.2: Simulation scheme for the Wilson-Cowan modelEquation $1$ can be integrated numerically. Using the Euler method, the dynamics of E and I populations can be simulated on a time-grid of stepsize $\Delta t$. 
The updates for the activity of the excitatory and the inhibitory populations can be written as:\begin{align}r_E[k+1] &= r_E[k] + \Delta r_E[k]\\r_I[k+1] &= r_I[k] + \Delta r_I[k] \end{align}with the increments\begin{align}\Delta r_E[k] &= \frac{\Delta t}{\tau_E}[-r_E[k] + F_E(w_{EE}r_E[k] -w_{EI}r_I[k] + I^{\text{ext}}_E[k];a_E,\theta_E)]\\\Delta r_I[k] &= \frac{\Delta t}{\tau_I}[-r_I[k] + F_I(w_{IE}r_E[k] -w_{II}r_I[k] + I^{\text{ext}}_I[k];a_I,\theta_I)] \end{align} Exercise 2: Numerically integrate the Wilson-Cowan equations ###Code def simulate_wc(tau_E, a_E, theta_E, tau_I, a_I, theta_I, wEE, wEI, wIE, wII, I_ext_E, I_ext_I, rE_init, rI_init, dt, range_t, **other_pars): """ Simulate the Wilson-Cowan equations Args: Parameters of the Wilson-Cowan model Returns: rE, rI (arrays) : Activity of excitatory and inhibitory populations """ # Initialize activity arrays Lt = range_t.size rE = np.append(rE_init, np.zeros(Lt - 1)) rI = np.append(rI_init, np.zeros(Lt - 1)) I_ext_E = I_ext_E * np.ones(Lt) I_ext_I = I_ext_I * np.ones(Lt) # Simulate the Wilson-Cowan equations for k in range(Lt - 1): ######################################################################## # TODO for students: compute drE and drI and remove the error raise NotImplementedError("Student exercise: compute the change in E/I") ######################################################################## # Calculate the derivative of the E population drE = ... # Calculate the derivative of the I population drI = ... # Update using Euler's method rE[k + 1] = rE[k] + drE rI[k + 1] = rI[k] + drI return rE, rI pars = default_pars() # Here are two trajectories with close intial values # Uncomment these lines to test your function # rE1, rI1 = simulate_wc(**default_pars(rE_init=.32, rI_init=.15)) # rE2, rI2 = simulate_wc(**default_pars(rE_init=.33, rI_init=.15)) # my_test_plot(pars['range_t'], rE1, rI1, rE2, rI2) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_af0bd722.py)*Example output:* The two plots above show the temporal evolution of excitatory ($r_E$, blue) and inhibitory ($r_I$, red) activity for two different sets of initial conditions. Interactive Demo: population trajectories with different initial valuesIn this interactive demo, we will simulate the Wilson-Cowan model and plot the trajectories of each population. What happens to the E and I population trajectories with different initial conditions? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def plot_EI_diffinitial(rE_init=0.0): pars = default_pars(rE_init=rE_init, rI_init=.15) rE, rI = simulate_wc(**pars) plt.figure() plt.plot(pars['range_t'], rE, 'b', label='E population') plt.plot(pars['range_t'], rI, 'r', label='I population') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() _ = widgets.interact(plot_EI_diffinitial, rE_init=(0.30, 0.35, .01)) ###Output _____no_output_____ ###Markdown Think!It is evident that the steady states of the neuronal response can be different when different initial states are chosen. Why is that? We will discuss this in the next section but try to think about it first. 
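One concrete way to probe this question is to sweep the initial condition and record where each trajectory ends up. The short sketch below is only a suggestion: it assumes you have already completed `simulate_wc` from Exercise 2 (otherwise the `NotImplementedError` will be raised) and reuses `default_pars` from the helper functions.

```python
import numpy as np

# Sweep the initial excitatory rate and report the state reached at the end
# of the simulation (assumes simulate_wc has been completed above)
for rE_init in np.linspace(0.30, 0.35, 6):
  pars = default_pars(rE_init=rE_init, rI_init=0.15)
  rE, rI = simulate_wc(**pars)
  # The last simulated values approximate the steady state that was reached
  print(f"rE_init = {rE_init:.2f} -> final (rE, rI) = ({rE[-1]:.3f}, {rI[-1]:.3f})")
```

You should find that nearby initial conditions can settle into clearly different final activity levels, which is exactly what the phase plane analysis in the next section will explain.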
--- Section 2: Phase plane analysisJust like we used a graphical method to study the dynamics of a 1-D system in the previous tutorial, here we will learn a graphical approach called **phase plane analysis** to study the dynamics of a 2-D system like the Wilson-Cowan model. So far, we have plotted the activities of the two populations as a function of time, i.e., in the `Activity-t` plane, either the $(t, r_E(t))$ plane or the $(t, r_I(t))$ one. Instead, we can plot the two activities $r_E(t)$ and $r_I(t)$ against each other at any time point $t$. This characterization in the `rI-rE` plane $(r_I(t), r_E(t))$ is called the **phase plane**. Each line in the phase plane indicates how both $r_E$ and $r_I$ evolve with time. ###Code # @title Video 2: Nullclines and Vector Fields from IPython.display import YouTubeVideo video = YouTubeVideo(id="V2SBAK2Xf8Y", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ###Output _____no_output_____ ###Markdown Interactive Demo: From the Activity - time plane to the **$r_I$ - $r_E$** phase planeIn this demo, we will visualize the system dynamics using both the `Activity-time` and the `(rE, rI)` phase plane. The circles indicate the activities at a given time $t$, while the lines represent the evolution of the system for the entire duration of the simulation.Move the time slider to better understand how the top plot relates to the bottom plot. Does the bottom plot have explicit information about time? What information does it give us? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! pars = default_pars(T=10, rE_init=0.6, rI_init=0.8) rE, rI = simulate_wc(**pars) def plot_activity_phase(n_t): plt.figure(figsize=(8, 5.5)) plt.subplot(211) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.plot(pars['range_t'][n_t], rE[n_t], 'bo') plt.plot(pars['range_t'][n_t], rI[n_t], 'ro') plt.axvline(pars['range_t'][n_t], 0, 1, color='k', ls='--') plt.xlabel('t (ms)', fontsize=14) plt.ylabel('Activity', fontsize=14) plt.legend(loc='best', fontsize=14) plt.subplot(212) plt.plot(rE, rI, 'k') plt.plot(rE[n_t], rI[n_t], 'ko') plt.xlabel(r'$r_E$', fontsize=18, color='b') plt.ylabel(r'$r_I$', fontsize=18, color='r') plt.tight_layout() plt.show() _ = widgets.interact(plot_activity_phase, n_t=(0, len(pars['range_t']) - 1, 1)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_222c9db1.py) Section 2.1: Nullclines of the Wilson-Cowan EquationsAn important concept in the phase plane analysis is the "nullcline" which is defined as the set of points in the phase plane where the activity of one population (but not necessarily the other) does not change.In other words, the $E$ and $I$ nullclines of Equation $(1)$ are defined as the points where $\displaystyle{\frac{dr_E}{dt}}=0$, for the excitatory nullcline, or $\displaystyle\frac{dr_I}{dt}=0$ for the inhibitory nullcline. That is:\begin{align}-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E) &= 0 \qquad (2)\\[1mm]-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) &= 0 \qquad (3)\end{align} Exercise 3: Compute the nullclines of the Wilson-Cowan modelIn the next exercise, we will compute and plot the nullclines of the E and I population. 
Along the nullcline of excitatory population Equation $2$, you can calculate the inhibitory activity by rewriting Equation $2$ into\begin{align}r_I = \frac{1}{w_{EI}}\big{[}w_{EE}r_E - F_E^{-1}(r_E; a_E,\theta_E) + I^{\text{ext}}_E \big{]}. \qquad(4)\end{align}where $F_E^{-1}(r_E; a_E,\theta_E)$ is the inverse of the excitatory transfer function (defined below). Equation $4$ defines the $r_E$ nullcline. Along the nullcline of inhibitory population Equation $3$, you can calculate the excitatory activity by rewriting Equation $3$ into \begin{align}r_E = \frac{1}{w_{IE}} \big{[} w_{II}r_I + F_I^{-1}(r_I;a_I,\theta_I) - I^{\text{ext}}_I \big{]}. \qquad (5) \end{align}shere $F_I^{-1}(r_I; a_I,\theta_I)$ is the inverse of the inhibitory transfer function (defined below). Equation $5$ defines the $I$ nullcline. Note that, when computing the nullclines with Equations 4-5, we also need to calculate the inverse of the transfer functions. \\The inverse of the sigmoid shaped **f-I** function that we have been using is:$$F^{-1}(x; a, \theta) = -\frac{1}{a} \ln \left[ \frac{1}{x + \displaystyle \frac{1}{1+\text{e}^{a\theta}}} - 1 \right] + \theta \qquad (6)$$The first step is to implement the inverse transfer function: ###Code def F_inv(x, a, theta): """ Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: F_inverse : value of the inverse function """ ######################################################################### # TODO for students: compute F_inverse raise NotImplementedError("Student exercise: compute the inverse of F(x)") ######################################################################### # Calculate Finverse (ln(x) can be calculated as np.log(x)) F_inverse = ... return F_inverse pars = default_pars() x = np.linspace(1e-6, 1, 100) # Uncomment the next line to test your function # plot_FI_inverse(x, a=1, theta=3) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_937a4040.py)*Example output:* Now you can compute the nullclines, using Equations 4-5: ###Code def get_E_nullcline(rE, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars): """ Solve for rI along the rE from drE/dt = 0. Args: rE : response of excitatory population a_E, theta_E, wEE, wEI, I_ext_E : Wilson-Cowan excitatory parameters Other parameters are ignored Returns: rI : values of inhibitory population along the nullcline on the rE """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the E nullcline") ######################################################################### # calculate rI for E nullclines on rI rI = ... return rI def get_I_nullcline(rI, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """ Solve for E along the rI from dI/dt = 0. 
  Args:
    rI    : response of inhibitory population
    a_I, theta_I, wIE, wII, I_ext_I : Wilson-Cowan inhibitory parameters
    Other parameters are ignored

  Returns:
    rE    : values of the excitatory population along the nullcline on the rI
  """
  #########################################################################
  # TODO for students: compute rE for the I nullcline and disable the error
  raise NotImplementedError("Student exercise: compute the I nullcline")
  #########################################################################

  # calculate rE along the I nullcline
  rE = ...

  return rE


pars = default_pars()
Exc_null_rE = np.linspace(-0.01, 0.96, 100)
Inh_null_rI = np.linspace(-.01, 0.8, 100)

# Uncomment these lines to test your functions
# Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars)
# Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars)
# plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI)

###Output
_____no_output_____

###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_2366ea57.py)

*Example output:*

Note that by definition along the blue line in the phase-plane spanned by $r_E, r_I$, $\displaystyle{\frac{dr_E(t)}{dt}} = 0$; therefore, it is called a nullcline. That is, the blue nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_E(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_E(t)}{dt}} < 0$.

The same is true for the red line along which $\displaystyle{\frac{dr_I(t)}{dt}} = 0$. That is, the red nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_I(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_I(t)}{dt}} < 0$.

Section 2.2: Vector fieldHow can the phase plane and the nullcline curves help us understand the behavior of the Wilson-Cowan model? The activities of the $E$ and $I$ populations $r_E(t)$ and $r_I(t)$ at each time point $t$ correspond to a single point in the phase plane, with coordinates $(r_E(t),r_I(t))$. Therefore, the time-dependent trajectory of the system can be described as a continuous curve in the phase plane, and the tangent vector to the trajectory, which is defined as the vector $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$, indicates the direction towards which the activity is evolving and how fast the activity is changing along each axis. In fact, for each point $(E,I)$ in the phase plane, we can compute the tangent vector $\bigg{(}\displaystyle{\frac{dr_E}{dt},\frac{dr_I}{dt}}\bigg{)}$, which indicates the behavior of the system when it traverses that point. The map of tangent vectors in the phase plane is called the **vector field**. The behavior of any trajectory in the phase plane is determined by i) the initial conditions $(r_E(0),r_I(0))$, and ii) the vector field $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$.

In general, the value of the vector field at a particular point in the phase plane is represented by an arrow. The orientation and the size of the arrow reflect the direction and the norm of the vector, respectively.
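Before implementing the Wilson-Cowan vector field in the next exercise, it may help to see how a vector field is drawn for a simpler system. The short sketch below is an illustrative addition (not part of the original exercise): it plots the vector field of a toy linear 2-D system, $dx/dt = -x$, $dy/dt = -2y$, with `plt.quiver`. The Wilson-Cowan case works the same way once $dr_E/dt$ and $dr_I/dt$ are evaluated on a grid.

###Code
# Illustrative sketch (added for clarity, not part of the original exercise):
# vector field of a toy linear system dx/dt = -x, dy/dt = -2y
import numpy as np
import matplotlib.pyplot as plt

grid = np.linspace(-1., 1., 15)
x, y = np.meshgrid(grid, grid)
dxdt = -x       # x-component of the tangent vector at each grid point
dydt = -2. * y  # y-component of the tangent vector at each grid point

plt.figure(figsize=(5, 5))
plt.quiver(x, y, dxdt, dydt, angles='xy', scale_units='xy', scale=10, facecolor='c')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Toy vector field: arrows point toward the stable fixed point (0, 0)')
plt.show()

###Output
_____no_output_____

###Markdown
In the exercise below, `plt.quiver` is used in the same way on a grid of $(r_E, r_I)$ values once `EIderivs` returns the two derivatives.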
Exercise 4: Compute and plot the vector field $\displaystyle{\Big{(}\frac{dr_E}{dt}, \frac{dr_I}{dt} \Big{)}}$Note that\begin{align}\frac{dr_E}{dt} &= [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)]\frac{1}{\tau_E}\\\frac{dr_I}{dt} &= [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I)]\frac{1}{\tau_I} \end{align} ###Code def EIderivs(rE, rI, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Time derivatives for E/I variables (dE/dt, dI/dt).""" ###################################################################### # TODO for students: compute drEdt and drIdt and disable the error raise NotImplementedError("Student exercise: compute the vector field") ###################################################################### # Compute the derivative of rE drEdt = ... # Compute the derivative of rI drIdt = ... return drEdt, drIdt # Uncomment below to test your function # plot_complete_analysis(default_pars()) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_5a629797.py)*Example output:* The last phase plane plot shows us that: - Trajectories seem to follow the direction of the vector field- Different trajectories eventually always reach one of two points depending on the initial conditions. - The two points where the trajectories converge are the intersection of the two nullcline curves. Think! There are, in total, three intersection points, meaning that the system has three fixed points.- One of the fixed points (the one in the middle) is never the final state of a trajectory. Why is that? - Why the arrows tend to get smaller as they approach the fixed points? [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_3d37729b.py) --- SummaryCongratulations! You have finished the second day of the last week of the neuromatch academy! Here, you learned how to simulate a rate based model consisting of excitatory and inhibitory population of neurons.In the last tutorial on dynamical neuronal networks you learned to:- Implement and simulate a 2D system composed of an E and an I population of neurons using the **Wilson-Cowan** model- Plot the frequency-current (F-I) curves for both populations- Examine the behavior of the system using phase **plane analysis**, **vector fields**, and **nullclines**.Do you have more time? Have you finished early? 
We have more fun material for you!Below are some, more advanced concepts on dynamical systems:- You will learn how to find the fixed points on such a system, and to investigate its stability by linearizing its dynamics and examining the **Jacobian matrix**.- You will see identify conditions under which the Wilson-Cowan model can exhibit oscillations.If you need even more, there are two applications of the Wilson-Cowan model:- Visualization of an Inhibition-stabilized network- Simulation of working memory --- Bonus 1: Fixed points, stability analysis, and limit cycles in the Wilson-Cowan model ###Code # @title Video 3: Fixed points and their stability from IPython.display import YouTubeVideo video = YouTubeVideo(id="jIx26iQ69ps", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ###Output _____no_output_____ ###Markdown Fixed Points of the E/I systemClearly, the intersection points of the two nullcline curves are the fixed points of the Wilson-Cowan model in Equation $(1)$. In the next exercise, we will find the coordinate of all fixed points for a given set of parameters.We'll make use of two functions, similar to ones we saw in the previous tutorial, which use a root-finding algorithm to find the fixed points of the system with Excitatory and Inhibitory populations. ###Code # @markdown *Execute the cell to define `my_fp` and `check_fp`* def my_fp(pars, rE_init, rI_init): """ Use opt.root function to solve Equations (2)-(3) from initial values """ tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E'] tau_I, a_I, theta_I = pars['tau_I'], pars['a_I'], pars['theta_I'] wEE, wEI = pars['wEE'], pars['wEI'] wIE, wII = pars['wIE'], pars['wII'] I_ext_E, I_ext_I = pars['I_ext_E'], pars['I_ext_I'] # define the right hand of wilson-cowan equations def my_WCr(x): rE, rI = x drEdt = (-rE + F(wEE * rE - wEI * rI + I_ext_E, a_E, theta_E)) / tau_E drIdt = (-rI + F(wIE * rE - wII * rI + I_ext_I, a_I, theta_I)) / tau_I y = np.array([drEdt, drIdt]) return y x0 = np.array([rE_init, rI_init]) x_fp = opt.root(my_WCr, x0).x return x_fp def check_fp(pars, x_fp, mytol=1e-6): """ Verify (drE/dt)^2 + (drI/dt)^2< mytol Args: pars : Parameter dictionary fp : value of fixed point mytol : tolerance, default as 10^{-6} Returns : Whether it is a correct fixed point: True/False """ drEdt, drIdt = EIderivs(x_fp[0], x_fp[1], **pars) return drEdt**2 + drIdt**2 < mytol ###Output _____no_output_____ ###Markdown Exercise 5: Find the fixed points of the Wilson-Cowan modelFrom the above nullclines, we notice that the system features three fixed points with the parameters we used. To find their coordinates, we need to choose proper initial value to give to the `opt.root` function inside of the function `my_fp` we just defined, since the algorithm can only find fixed points in the vicinity of the initial value. In this exercise, you will use the function `my_fp` to find each of the fixed points by varying the initial values. Note that you can choose the values near the intersections of the nullclines as the initial values to calculate the fixed points. 
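As a quick illustration of why the initial values matter (the toy system below is an added example for clarity, not part of the original exercise), `opt.root` typically returns the root whose basin of attraction contains the starting point:

###Code
# Illustrative sketch (added for clarity): opt.root returns different roots
# of the same system depending on the initial guess.
import numpy as np
import scipy.optimize as opt

def toy_system(z):
  x, y = z
  # this toy system has two roots, (+1, 0) and (-1, 0)
  return np.array([x**2 - 1., y])

root_a = opt.root(toy_system, np.array([0.5, 0.2])).x   # guess near +1
root_b = opt.root(toy_system, np.array([-0.5, 0.2])).x  # guess near -1
print(root_a, root_b)

###Output
_____no_output_____

###Markdown
The same logic applies to `my_fp`: choose initial values close to each nullcline intersection to recover each of the three fixed points.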
###Code
pars = default_pars()

######################################################################
# TODO: Provide initial values to calculate the fixed points
# Check if the x_fp's are correct with the function check_fp(pars, x_fp)
# Hint: vary different initial values to find the correct fixed points
######################################################################

# my_plot_nullcline(pars)

# Find the first fixed point
# x_fp_1 = my_fp(pars, ..., ...)
# if check_fp(pars, x_fp_1):
#   plot_fp(x_fp_1)

# Find the second fixed point
# x_fp_2 = my_fp(pars, ..., ...)
# if check_fp(pars, x_fp_2):
#   plot_fp(x_fp_2)

# Find the third fixed point
# x_fp_3 = my_fp(pars, ..., ...)
# if check_fp(pars, x_fp_3):
#   plot_fp(x_fp_3)

###Output
_____no_output_____

###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_0dd7ba5a.py)

*Example output:*

Stability of a fixed point and eigenvalues of the Jacobian MatrixFirst, let's rewrite system $(1)$ as:\begin{align}&\frac{dr_E}{dt} = G_E(r_E,r_I)\\[0.5mm]&\frac{dr_I}{dt} = G_I(r_E,r_I)\end{align}where\begin{align}&G_E(r_E,r_I) = \frac{1}{\tau_E} [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a,\theta)]\\[1mm]&G_I(r_E,r_I) = \frac{1}{\tau_I} [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a,\theta)]\end{align}By definition, $\displaystyle\frac{dr_E}{dt}=0$ and $\displaystyle\frac{dr_I}{dt}=0$ at each fixed point. Therefore, if the initial state is exactly at the fixed point, the state of the system will not change as time evolves. However, if the initial state deviates slightly from the fixed point, there are two possibilities: (1) the trajectory will be attracted back to the fixed point, or (2) the trajectory will diverge from the fixed point. These two possibilities define the type of fixed point, i.e., stable or unstable. Similar to the 1D system studied in the previous tutorial, the stability of a fixed point $(r_E^*, r_I^*)$ can be determined by linearizing the dynamics of the system (can you figure out how?). The linearization will yield a matrix of first-order derivatives called the Jacobian matrix: \begin{equation} J= \left[ {\begin{array}{cc} \displaystyle{\frac{\partial}{\partial r_E}}G_E(r_E^*, r_I^*) & \displaystyle{\frac{\partial}{\partial r_I}}G_E(r_E^*, r_I^*)\\[1mm] \displaystyle\frac{\partial}{\partial r_E} G_I(r_E^*, r_I^*) & \displaystyle\frac{\partial}{\partial r_I}G_I(r_E^*, r_I^*) \\ \end{array} } \right] \quad (7)\end{equation}\\The eigenvalues of the Jacobian matrix calculated at the fixed point will determine whether it is a stable or unstable fixed point.\\We can now compute the derivatives needed to build the Jacobian matrix. Using the chain and product rules, the derivatives for the excitatory population are given by:\\\begin{align}&\frac{\partial}{\partial r_E} G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-1 + w_{EE} F_E'(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E;\alpha_E, \theta_E)] \\[1mm]&\frac{\partial}{\partial r_I} G_E(r_E^*, r_I^*)= \frac{1}{\tau_E} [-w_{EI} F_E'(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E;\alpha_E, \theta_E)]\end{align}\\The same applies to the inhibitory population. Exercise 6: Compute the Jacobian Matrix for the Wilson-Cowan modelHere, you can use `dF(x,a,theta)` defined in the `Helper functions` to calculate the derivative of the F-I curve.
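Before completing the exercise, here is a small reminder of how the eigenvalues' real parts relate to stability (the two matrices below are arbitrary illustrative examples added for clarity, not Wilson-Cowan Jacobians):

###Code
# Illustrative sketch (added for clarity): eigenvalue signs and stability
# for two arbitrary 2x2 matrices.
import numpy as np

J_stable = np.array([[-1., 0.5],
                     [0.3, -2.]])    # both eigenvalues have negative real part
J_unstable = np.array([[0.5, 1.0],
                       [0.0, -1.]])  # one eigenvalue is positive

print(np.linalg.eig(J_stable)[0])    # perturbations decay: stable fixed point
print(np.linalg.eig(J_unstable)[0])  # perturbations grow along one direction: unstable

###Output
_____no_output_____

###Markdown
In the exercise, the same `np.linalg.eig` call is applied to the Jacobian evaluated at each Wilson-Cowan fixed point.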
###Code def get_eig_Jacobian(fp, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Compute eigenvalues of the Wilson-Cowan Jacobian matrix at fixed point.""" # Initialization rE, rI = fp J = np.zeros((2, 2)) ########################################################################### # TODO for students: compute J and disable the error raise NotImplementedError("Student excercise: compute the Jacobian matrix") ########################################################################### # Compute the four elements of the Jacobian matrix J[0, 0] = ... J[0, 1] = ... J[1, 0] = ... J[1, 1] = ... # Compute and return the eigenvalues evals = np.linalg.eig(J)[0] return evals # Uncomment below to test your function when you get the correct fixed point # eig_1 = get_eig_Jacobian(x_fp_1, **pars) # eig_2 = get_eig_Jacobian(x_fp_2, **pars) # eig_3 = get_eig_Jacobian(x_fp_3, **pars) # print(eig_1, 'Stable point') # print(eig_2, 'Unstable point') # print(eig_3, 'Stable point') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_e83cfc05.py) As is evident, the stable fixed points correspond to the negative eigenvalues, while unstable point corresponds to at least one positive eigenvalue. The sign of the eigenvalues is determined by the connectivity (interaction) between excitatory and inhibitory populations. Below we investigate the effect of $w_{EE}$ on the nullclines and the eigenvalues of the dynamical system. \* _Critical change is referred to as **pitchfork bifurcation**_. Effect of `wEE` on the nullclines and the eigenvalues ###Code # @title # @markdown Make sure you execute this cell to see the plot! eig_1_M = [] eig_2_M = [] eig_3_M = [] pars = default_pars() wEE_grid = np.linspace(6, 10, 40) my_thre = 7.9 for wEE in wEE_grid: x_fp_1 = [0., 0.] x_fp_2 = [.4, .1] x_fp_3 = [.8, .1] pars['wEE'] = wEE if wEE < my_thre: x_fp_1 = my_fp(pars, x_fp_1[0], x_fp_1[1]) eig_1 = get_eig_Jacobian(x_fp_1, **pars) eig_1_M.append(np.max(np.real(eig_1))) else: x_fp_1 = my_fp(pars, x_fp_1[0], x_fp_1[1]) eig_1 = get_eig_Jacobian(x_fp_1, **pars) eig_1_M.append(np.max(np.real(eig_1))) x_fp_2 = my_fp(pars, x_fp_2[0], x_fp_2[1]) eig_2 = get_eig_Jacobian(x_fp_2, **pars) eig_2_M.append(np.max(np.real(eig_2))) x_fp_3 = my_fp(pars, x_fp_3[0], x_fp_3[1]) eig_3 = get_eig_Jacobian(x_fp_3, **pars) eig_3_M.append(np.max(np.real(eig_3))) eig_1_M = np.array(eig_1_M) eig_2_M = np.array(eig_2_M) eig_3_M = np.array(eig_3_M) plt.figure(figsize=(8, 5.5)) plt.plot(wEE_grid, eig_1_M, 'ko', alpha=0.5) plt.plot(wEE_grid[wEE_grid >= my_thre], eig_2_M, 'bo', alpha=0.5) plt.plot(wEE_grid[wEE_grid >= my_thre], eig_3_M, 'ro', alpha=0.5) plt.xlabel(r'$w_{\mathrm{EE}}$') plt.ylabel('maximum real part of eigenvalue') plt.show() ###Output _____no_output_____ ###Markdown Interactive Demo: Nullclines position in the phase plane changes with parameter valuesIn this interactive widget, we will explore how the nullclines move for different values of the parameter $w_{EE}$. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! 
def plot_nullcline_diffwEE(wEE): """ plot nullclines for different values of wEE """ pars = default_pars(wEE=wEE) # plot the E, I nullclines Exc_null_rE = np.linspace(-0.01, .96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, .8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.figure(figsize=(12, 5.5)) plt.subplot(121) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.subplot(222) pars['rE_init'], pars['rI_init'] = 0.2, 0.2 rE, rI = simulate_wc(**pars) plt.plot(pars['range_t'], rE, 'b', label='E population', clip_on=False) plt.plot(pars['range_t'], rI, 'r', label='I population', clip_on=False) plt.ylabel('Activity') plt.legend(loc='best') plt.ylim(-0.05, 1.05) plt.title('E/I activity\nfor different initial conditions', fontweight='bold') plt.subplot(224) pars['rE_init'], pars['rI_init'] = 0.4, 0.1 rE, rI = simulate_wc(**pars) plt.plot(pars['range_t'], rE, 'b', label='E population', clip_on=False) plt.plot(pars['range_t'], rI, 'r', label='I population', clip_on=False) plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.ylim(-0.05, 1.05) plt.tight_layout() plt.show() _ = widgets.interact(plot_nullcline_diffwEE, wEE=(6., 10., .01)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_d4eb0391.py) We can also investigate the effect of different $w_{EI}$, $w_{IE}$, $w_{II}$, $\tau_{E}$, $\tau_{I}$, and $I_{E}^{\text{ext}}$ on the stability of fixed points. In addition, we can also consider the perturbation of the parameters of the gain curve $F(\cdot)$. Limit cycle - OscillationsFor some values of interaction terms ($w_{EE}, w_{IE}, w_{EI}, w_{II}$ the eigenvalues can become complex. When at least one pair of eigenvalues is complex, oscillations arise. The stability of oscillations is determined by the real part of the eigenvalues (+ve real part oscillations will grow, -ve real part oscillations will die out). The size of the complex part determines the frequency of oscillations. For instance, if we use a different set of parameters, $w_{EE}=6.4$, $w_{EI}=4.8$, $w_{IE}=6.$, $w_{II}=1.2$, and $I_{E}^{\text{ext}}=0.8$, then we shall observe that the E and I population activity start to oscillate! Please execute the cell below to check the oscillatory behavior. ###Code # @title # @markdown Make sure you execute this cell to see the oscillations! pars = default_pars(T=100.) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 pars['rE_init'], pars['rI_init'] = 0.25, 0.25 rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() ###Output _____no_output_____ ###Markdown Exercise 7: Plot the phase planeWe can also understand the oscillations of the population behavior using the phase plane. By plotting a set of trajectories with different initial states, we can see that these trajectories will move in a circle instead of converging to a fixed point. 
This circle is called "limit cycle" and shows the periodic oscillations of the $E$ and $I$ population behavior under some conditions.Try to plot the phase plane using the previously defined functions. ###Code pars = default_pars(T=100.) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 plt.figure(figsize=(7, 5.5)) my_plot_nullcline(pars) ############################################################################### # TODO for students: plot phase plane: nullclines, trajectories, fixed point # ############################################################################### # Find the correct fixed point # x_fp_1 = my_fp(pars, ..., ...) # if check_fp(pars, x_fp_1): # plot_fp(x_fp_1, position=(0, 0), rotation=40) my_plot_trajectories(pars, 0.2, 3, 'Sample trajectories \nwith different initial values') my_plot_vector(pars) plt.legend(loc=[1.01, 0.7]) plt.xlim(-0.05, 1.01) plt.ylim(-0.05, 0.65) plt.show() ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_03c5c8dd.py)*Example output:* Interactive Demo: Limit cycle and oscillations.From the above examples, the change of model parameters changes the shape of the nullclines and, accordingly, the behavior of the $E$ and $I$ populations from steady fixed points to oscillations. However, the shape of the nullclines is unable to fully determine the behavior of the network. The vector field also matters. To demonstrate this, here, we will investigate the effect of time constants on the population behavior. By changing the inhibitory time constant $\tau_I$, the nullclines do not change, but the network behavior changes substantially from steady state to oscillations with different frequencies. Such a dramatic change in the system behavior is referred to as a **bifurcation**. \\Please execute the code below to check this out. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def time_constant_effect(tau_i=0.5): pars = default_pars(T=100.) 
pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 pars['tau_I'] = tau_i Exc_null_rE = np.linspace(0.0, .9, 100) Inh_null_rI = np.linspace(0.0, .6, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.figure(figsize=(12.5, 5.5)) plt.subplot(121) # nullclines plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline', zorder=2) plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline', zorder=2) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') # fixed point x_fp_1 = my_fp(pars, 0.5, 0.5) plt.plot(x_fp_1[0], x_fp_1[1], 'ko', zorder=2) eig_1 = get_eig_Jacobian(x_fp_1, **pars) # trajectories for ie in range(5): for ii in range(5): pars['rE_init'], pars['rI_init'] = 0.1 * ie, 0.1 * ii rE_tj, rI_tj = simulate_wc(**pars) plt.plot(rE_tj, rI_tj, 'k', alpha=0.3, zorder=1) # vector field EI_grid_E = np.linspace(0., 1.0, 20) EI_grid_I = np.linspace(0., 0.6, 20) rE, rI = np.meshgrid(EI_grid_E, EI_grid_I) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = 2 plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=10, facecolor='c') plt.title(r'$\tau_I=$'+'%.1f ms' % tau_i) plt.subplot(122) # sample E/I trajectories pars['rE_init'], pars['rI_init'] = 0.25, 0.25 rE, rI = simulate_wc(**pars) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.title(r'$\tau_I=$'+'%.1f ms' % tau_i) plt.legend(loc='best') plt.tight_layout() plt.show() _ = widgets.interact(time_constant_effect, tau_i=(0.2, 3, .1)) ###Output _____no_output_____ ###Markdown Both $\tau_E$ and $\tau_I$ feature in the Jacobian of the two population network (eq 7). So here is seems that the by increasing $\tau_I$ the eigenvalues corresponding to the stable fixed point are becoming complex.Intuitively, when $\tau_I$ is smaller, inhibitory activity changes faster than excitatory activity. As inhibition exceeds above a certain value, high inhibition inhibits excitatory population but that in turns means that inhibitory population gets smaller input (from the exc. connection). So inhibition decreases rapidly. But this means that excitation recovers -- and so on ... 
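To check this intuition numerically, the small sketch below (an added illustration that assumes `my_fp` and the Exercise 6 function `get_eig_Jacobian` are already defined and completed) sweeps $\tau_I$ for the oscillatory parameter set and prints the eigenvalues of the Jacobian at the fixed point; they should become complex and, for large enough $\tau_I$, acquire a positive real part (the oscillatory regime shown above).

###Code
# Illustrative sketch (added for clarity): eigenvalues at the fixed point
# as tau_I increases. Assumes my_fp and get_eig_Jacobian are defined above.
pars = default_pars(T=100.)
pars['wEE'], pars['wEI'] = 6.4, 4.8
pars['wIE'], pars['wII'] = 6.0, 1.2
pars['I_ext_E'] = 0.8

for tau_I in [0.5, 1.0, 1.5, 2.0]:
  pars['tau_I'] = tau_I
  x_fp = my_fp(pars, 0.5, 0.5)  # the nullclines, hence the fixed point, do not depend on tau_I
  evals = get_eig_Jacobian(x_fp, **pars)
  print(f"tau_I = {tau_I:.1f} ms: eigenvalues = {evals}")

###Output
_____no_output_____

###Markdown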
--- Bonus 2: Inhibition-stabilized network (ISN)As described above, one can obtain the linear approximation around the fixed point as \begin{equation} \frac{d}{dt} \vec{R}= \left[ {\begin{array}{cc} \displaystyle{\frac{\partial G_E}{\partial r_E}} & \displaystyle{\frac{\partial G_E}{\partial r_I}}\\[1mm] \displaystyle\frac{\partial G_I}{\partial r_E} & \displaystyle\frac{\partial G_I}{\partial r_I} \\ \end{array} } \right] \vec{R},\end{equation}\\where $\vec{R} = [r_E, r_I]^{\rm T}$ is the vector of the E/I activity.

Let's direct our attention to the excitatory subpopulation, which follows:\\\begin{equation}\frac{dr_E}{dt} = \frac{\partial G_E}{\partial r_E}\cdot r_E + \frac{\partial G_E}{\partial r_I} \cdot r_I\end{equation}\\Recall that, around the fixed point $(r_E^*, r_I^*)$:\\\begin{align}&\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-1 + w_{EE} F'_{E}(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E; \alpha_E, \theta_E)] \qquad (8)\\[1mm]&\frac{\partial}{\partial r_I}G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-w_{EI} F'_{E}(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E; \alpha_E, \theta_E)] \qquad (9)\\[1mm]&\frac{\partial}{\partial r_E}G_I(r_E^*, r_I^*) = \frac{1}{\tau_I} [w_{IE} F'_{I}(w_{IE}r_E^* -w_{II}r_I^* + I^{\text{ext}}_I; \alpha_I, \theta_I)] \qquad (10)\\[1mm]&\frac{\partial}{\partial r_I}G_I(r_E^*, r_I^*) = \frac{1}{\tau_I} [-1-w_{II} F'_{I}(w_{IE}r_E^* -w_{II}r_I^* + I^{\text{ext}}_I; \alpha_I, \theta_I)] \qquad (11)\end{align} \\From Equation (9), it is clear that $\displaystyle{\frac{\partial G_E}{\partial r_I}}$ is negative, since $\displaystyle{\frac{dF}{dx}}$ is always positive. This reflects the fact that recurrent inhibition from the inhibitory population ($I$) reduces the excitatory ($E$) activity. However, as described above, $\displaystyle{\frac{\partial G_E}{\partial r_E}}$ has a negative term related to the "leak" effect and a positive term related to recurrent excitation. This leads to two different regimes:

- $\displaystyle{\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*)}<0$, **non-inhibition-stabilized network (non-ISN) regime**
- $\displaystyle{\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*)}>0$, **inhibition-stabilized network (ISN) regime**

Exercise 8: Compute $\displaystyle{\frac{\partial G_E}{\partial r_E}}$Implement the function to calculate $\displaystyle{\frac{\partial G_E}{\partial r_E}}$ for the default parameters and for the parameters of the limit cycle case.

###Code
def get_dGdE(fp, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars):
  """
  Compute dG_E/drE at a fixed point of the Wilson-Cowan equations

  Args:
    fp : fixed point (E, I), array
    Other arguments are parameters of the Wilson-Cowan model

  Returns:
    dGdrE : derivative of G_E with respect to rE at the fixed point
            (the J[0, 0] entry of the Jacobian)
  """
  rE, rI = fp

  ##########################################################################
  # TODO for students: compute dGdrE and disable the error
  raise NotImplementedError("Student exercise: compute the dG/dE, Eq. (8)")
  ##########################################################################

  # Calculate the J[0, 0] entry
  dGdrE = ...
  return dGdrE


# Uncomment below to test your function
pars = default_pars()
x_fp_1 = my_fp(pars, 0.1, 0.1)
x_fp_2 = my_fp(pars, 0.3, 0.3)
x_fp_3 = my_fp(pars, 0.8, 0.6)

# dGdrE1 = get_dGdE(x_fp_1, **pars)
# dGdrE2 = get_dGdE(x_fp_2, **pars)
# dGdrE3 = get_dGdE(x_fp_3, **pars)

print(f'For the default case:')
# print(f'dG/drE(fp1) = {dGdrE1:.3f}')
# print(f'dG/drE(fp2) = {dGdrE2:.3f}')
# print(f'dG/drE(fp3) = {dGdrE3:.3f}')

print('\n')

pars = default_pars(wEE=6.4, wEI=4.8, wIE=6.0, wII=1.2, I_ext_E=0.8)
x_fp_lc = my_fp(pars, 0.8, 0.8)

# dGdrE_lc = get_dGdE(x_fp_lc, **pars)

print('For the limit cycle case:')
# print(f'dG/drE(fp_lc) = {dGdrE_lc:.3f}')

###Output
_____no_output_____

###Markdown
**SAMPLE OUTPUT**

```
For the default case:
dG/drE(fp1) = -0.650
dG/drE(fp2) = 1.519
dG/drE(fp3) = -0.706


For the limit cycle case:
dG/drE(fp_lc) = 0.837
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_1ff7a08c.py)

Nullcline analysis of the ISNRecall that the E nullcline follows\\\begin{align}r_E = F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E). \end{align}\\That is, the firing rate $r_E$ can be seen as a function of $r_I$. Let's take the derivative of $r_E$ with respect to $r_I$ and obtain\\\begin{align}&\frac{dr_E}{dr_I} = F_E' \cdot (w_{EE}\frac{dr_E}{dr_I} -w_{EI}) \iff \\&(1-F_E'w_{EE})\frac{dr_E}{dr_I} = -F_E' w_{EI} \iff \\&\frac{dr_E}{dr_I} = \frac{F_E' w_{EI}}{F_E'w_{EE}-1}.\end{align}\\That is, in the `rI-rE` phase plane, we can obtain the slope along the E nullcline as\\$$\frac{dr_I}{dr_E} = \frac{F_E'w_{EE}-1}{F_E' w_{EI}} \qquad (12)$$Similarly, we can obtain the slope along the I nullcline as \\$$\frac{dr_I}{dr_E} = \frac{F_I'w_{IE}}{F_I' w_{II}+1} \qquad (13)$$\\Then, we can see from Equation (13) that $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm I-nullcline} >0$.\\However, in Equation (12), the sign of $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}$ depends on the sign of $(F_E'w_{EE}-1)$. Note that $(F_E'w_{EE}-1)$ has the same sign as $\displaystyle{\frac{\partial}{\partial r_E}G_E}$ in Equation (8) (they differ only by the positive factor $1/\tau_E$). Therefore, we have the following results:

- $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}<0$, **non-inhibition-stabilized network (non-ISN) regime**
- $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}>0$, **inhibition-stabilized network (ISN) regime**

\\In addition, it is important to point out the following two conclusions: \\**Conclusion 1:** The stability of a fixed point determines the relationship between the slopes in Equations (12) and (13).
As discussed above, the fixed point is stable when the Jacobian matrix ($J$ in Equation (7)) has two eigenvalues with a negative real part, which implies a positive determinant of $J$, i.e., $\text{det}(J)>0$.

From the Jacobian matrix definition and from Equations (8-11), we can obtain:$ J= \left[ {\begin{array}{cc} \displaystyle{\frac{1}{\tau_E}(w_{EE}F_E'-1)} & \displaystyle{-\frac{1}{\tau_E}w_{EI}F_E'}\\[1mm] \displaystyle {\frac{1}{\tau_I}w_{IE}F_I'}& \displaystyle {\frac{1}{\tau_I}(-w_{II}F_I'-1)} \\ \end{array} } \right] $\\Note that, if we let \\$ T= \left[ {\begin{array}{cc} \displaystyle{\tau_E} & \displaystyle{0}\\[1mm] \displaystyle 0& \displaystyle \tau_I \\ \end{array} } \right] $, $ F= \left[ {\begin{array}{cc} \displaystyle{F_E'} & \displaystyle{0}\\[1mm] \displaystyle 0& \displaystyle F_I' \\ \end{array} } \right] $, and $ W= \left[ {\begin{array}{cc} \displaystyle{w_{EE}} & \displaystyle{-w_{EI}}\\[1mm] \displaystyle w_{IE}& \displaystyle -w_{II} \\ \end{array} } \right] $\\then, using matrix notation, $J=T^{-1}(F W - I)$, where $I$ is the identity matrix, i.e., $I = \begin{bmatrix} 1 & 0 \\0 & 1 \end{bmatrix}.$ \\Therefore, $\det{(J)}=\det{(T^{-1}(F W - I))}=(\det{(T^{-1})})(\det{(F W - I)}).$

Since $\det{(T^{-1})}>0$, as time constants are positive by definition, the sign of $\det{(J)}$ is the same as the sign of $\det{(F W - I)}$, and so$$\det{(FW - I)} = (F_E' w_{EI})(F_I'w_{IE}) - (F_I' w_{II} + 1)(F_E'w_{EE} - 1) > 0.$$\\Then, combining this with Equations (12) and (13), we can obtain$$\frac{\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm I-nullcline}}{\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}} > 1. $$Therefore, at a stable fixed point, the I nullcline has a steeper slope than the E nullcline.

**Conclusion 2:** Effect of adding input to the inhibitory population.

When adding an input $\delta I^{\rm ext}_I$ to the inhibitory population, we find that the E nullcline (Equation (4)) stays the same, while the I nullcline undergoes a pure leftward shift: the original I nullcline equation,\\\begin{equation}r_I = F_I(w_{IE}r_E-w_{II}r_I + I^{\text{ext}}_I ; \alpha_I, \theta_I)\end{equation}\\remains true if we take $I^{\text{ext}}_I \rightarrow I^{\text{ext}}_I +\delta I^{\rm ext}_I$ and $r_E\rightarrow r_E'=r_E-\frac{\delta I^{\rm ext}_I}{w_{IE}}$ to obtain\\\begin{equation}r_I = F_I(w_{IE}r_E'-w_{II}r_I + I^{\text{ext}}_I +\delta I^{\rm ext}_I; \alpha_I, \theta_I)\end{equation}\\Putting these points together, we obtain the phase plane pictures shown below. After adding input to the inhibitory population, it can be seen in the trajectories above and the phase plane below that, in an **ISN**, $r_I$ will increase first but then decay to the new fixed point, in which both $r_I$ and $r_E$ are decreased compared to the original fixed point. However, by adding $\delta I^{\rm ext}_I$ into a **non-ISN**, $r_I$ will increase while $r_E$ will decrease.

Interactive Demo: Nullclines of Example **ISN** and **non-ISN**In this interactive widget, we inject excitatory ($I^{\text{ext}}_I>0$) or inhibitory ($I^{\text{ext}}_I<0$) drive into the inhibitory population when the system is at its equilibrium (with parameters $w_{EE}=6.4$, $w_{EI}=4.8$, $w_{IE}=6.$, $w_{II}=1.2$, $I_{E}^{\text{ext}}=0.8$, $\tau_I = 0.8$, and $I^{\text{ext}}_I=0$). How does the firing rate of the $I$ population change with excitatory vs. inhibitory drive into the inhibitory population?

###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
pars = default_pars(T=50., dt=0.1) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 pars['tau_I'] = 0.8 def ISN_I_perturb(dI=0.1): Lt = len(pars['range_t']) pars['I_ext_I'] = np.zeros(Lt) pars['I_ext_I'][int(Lt / 2):] = dI pars['rE_init'], pars['rI_init'] = 0.6, 0.26 rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 1.5)) plt.plot(pars['range_t'], pars['I_ext_I'], 'k') plt.xlabel('t (ms)') plt.ylabel(r'$I_I^{\mathrm{ext}}$') plt.ylim(pars['I_ext_I'].min() - 0.01, pars['I_ext_I'].max() + 0.01) plt.show() plt.figure(figsize=(8, 4.5)) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rE[int(Lt / 2) - 1] * np.ones(Lt), 'b--') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.plot(pars['range_t'], rI[int(Lt / 2) - 1] * np.ones(Lt), 'r--') plt.ylim(0, 0.8) plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() _ = widgets.interact(ISN_I_perturb, dI=(-0.2, 0.21, .05)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_cec4906e.py) --- Bonus 3: Fixed point and working memory The input into the neurons measured in the experiment is often very noisy ([links](http://www.scholarpedia.org/article/Stochastic_dynamical_systems)). Here, the noisy synaptic input current is modeled as an Ornstein-Uhlenbeck (OU)process, which has been discussed several times in the previous tutorials. ###Code # @markdown Make sure you execute this cell to enable the function my_OU and plot the input current! def my_OU(pars, sig, myseed=False): """ Expects: pars : parameter dictionary sig : noise amplitute myseed : random seed. int or boolean Returns: I : Ornstein-Uhlenbeck input current """ # Retrieve simulation parameters dt, range_t = pars['dt'], pars['range_t'] Lt = range_t.size tau_ou = pars['tau_ou'] # [ms] # set random seed if myseed: np.random.seed(seed=myseed) else: np.random.seed() # Initialize noise = np.random.randn(Lt) I_ou = np.zeros(Lt) I_ou[0] = noise[0] * sig # generate OU for it in range(Lt-1): I_ou[it+1] = (I_ou[it] + dt / tau_ou * (0. - I_ou[it]) + np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1]) return I_ou pars = default_pars(T=50) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 I_ou = my_OU(pars, sig=sig_ou, myseed=2020) plt.figure(figsize=(8, 5.5)) plt.plot(pars['range_t'], I_ou, 'b') plt.xlabel('Time (ms)') plt.ylabel(r'$I_{\mathrm{OU}}$') plt.show() ###Output _____no_output_____ ###Markdown With the default parameters, the system fluctuates around a resting state with the noisy input. ###Code # @markdown Execute this cell to plot activity with noisy input current pars = default_pars(T=100) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 pars['I_ext_E'] = my_OU(pars, sig=sig_ou, myseed=20201) pars['I_ext_I'] = my_OU(pars, sig=sig_ou, myseed=20202) pars['rE_init'], pars['rI_init'] = 0.1, 0.1 rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) ax = plt.subplot(111) ax.plot(pars['range_t'], rE, 'b', label='E population') ax.plot(pars['range_t'], rI, 'r', label='I population') ax.set_xlabel('t (ms)') ax.set_ylabel('Activity') ax.legend(loc='best') plt.show() ###Output _____no_output_____ ###Markdown Interactive Demo: Short pulse induced persistent activityThen, let's use a brief 10-ms positive current to the E population when the system is at its equilibrium. 
When this amplitude (SE below) is sufficiently large, a persistent activity is produced that outlasts the transient input. What is the firing rate of the persistent activity, and what is the critical input strength? Try to understand the phenomena from the above phase-plane analysis. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def my_inject(pars, t_start, t_lag=10.): """ Expects: pars : parameter dictionary t_start : pulse starts [ms] t_lag : pulse lasts [ms] Returns: I : extra pulse time """ # Retrieve simulation parameters dt, range_t = pars['dt'], pars['range_t'] Lt = range_t.size # Initialize I = np.zeros(Lt) # pulse timing N_start = int(t_start / dt) N_lag = int(t_lag / dt) I[N_start:N_start + N_lag] = 1. return I pars = default_pars(T=100) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 pars['I_ext_I'] = my_OU(pars, sig=sig_ou, myseed=2021) pars['rE_init'], pars['rI_init'] = 0.1, 0.1 # pulse I_pulse = my_inject(pars, t_start=20., t_lag=10.) L_pulse = sum(I_pulse > 0.) def WC_with_pulse(SE=0.): pars['I_ext_E'] = my_OU(pars, sig=sig_ou, myseed=2022) pars['I_ext_E'] += SE * I_pulse rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) ax = plt.subplot(111) ax.plot(pars['range_t'], rE, 'b', label='E population') ax.plot(pars['range_t'], rI, 'r', label='I population') ax.plot(pars['range_t'][I_pulse > 0.], 1.0*np.ones(L_pulse), 'r', lw=3.) ax.text(25, 1.05, 'stimulus on', horizontalalignment='center', verticalalignment='bottom') ax.set_ylim(-0.03, 1.2) ax.set_xlabel('t (ms)') ax.set_ylabel('Activity') ax.legend(loc='best') plt.show() _ = widgets.interact(WC_with_pulse, SE=(0.0, 1.0, .05)) ###Output _____no_output_____ ###Markdown [![Kaggle](https://kaggle.com/static/images/open-in-kaggle.svg)](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb) Tutorial 2: Wilson-Cowan Model**Week 2, Day 4: Dynamic Networks****By Neuromatch Academy**__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesIn the previous tutorial, you became familiar with a neuronal network consisting of only an excitatory population. Here, we extend the approach we used to include both excitatory and inhibitory neuronal populations in our network. 
A simple, yet powerful model to study the dynamics of two interacting populations of excitatory and inhibitory neurons, is the so-called **Wilson-Cowan** rate model, which will be the subject of this tutorial.The objectives of this tutorial are to:- Write the **Wilson-Cowan** equations for the firing rate dynamics of a 2D system composed of an excitatory (E) and an inhibitory (I) population of neurons- Simulate the dynamics of the system, i.e., Wilson-Cowan model.- Plot the frequency-current (F-I) curves for both populations (i.e., E and I).- Visualize and inspect the behavior of the system using **phase plane analysis**, **vector fields**, and **nullclines**.Bonus steps:- Find and plot the **fixed points** of the Wilson-Cowan model.- Investigate the stability of the Wilson-Cowan model by linearizing its dynamics and examining the **Jacobian matrix**.- Learn how the Wilson-Cowan model can reach an oscillatory state.Bonus steps (applications):- Visualize the behavior of an Inhibition-stabilized network.- Simulate working memory using the Wilson-Cowan model.\\Reference paper:_[Wilson H and Cowan J (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal 12](https://doi.org/10.1016/S0006-3495(72)86068-5)_ --- Setup ###Code # Imports import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt # root-finding algorithm # @title Figure Settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # @title Helper functions def default_pars(**kwargs): pars = {} # Excitatory parameters pars['tau_E'] = 1. # Timescale of the E population [ms] pars['a_E'] = 1.2 # Gain of the E population pars['theta_E'] = 2.8 # Threshold of the E population # Inhibitory parameters pars['tau_I'] = 2.0 # Timescale of the I population [ms] pars['a_I'] = 1.0 # Gain of the I population pars['theta_I'] = 4.0 # Threshold of the I population # Connection strength pars['wEE'] = 9. # E to E pars['wEI'] = 4. # I to E pars['wIE'] = 13. # E to I pars['wII'] = 11. # I to I # External input pars['I_ext_E'] = 0. pars['I_ext_I'] = 0. # simulation parameters pars['T'] = 50. # Total duration of simulation [ms] pars['dt'] = .1 # Simulation time step [ms] pars['rE_init'] = 0.2 # Initial value of E pars['rI_init'] = 0.2 # Initial value of I # External parameters if any for k in kwargs: pars[k] = kwargs[k] # Vector of discretized time points [ms] pars['range_t'] = np.arange(0, pars['T'], pars['dt']) return pars def F(x, a, theta): """ Population activation function, F-I curve Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: f : the population activation response f(x) for input x """ # add the expression of f = F(x) f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1 return f def dF(x, a, theta): """ Derivative of the population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: dFdx : Derivative of the population activation function. 
""" dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2 return dFdx def plot_FI_inverse(x, a, theta): f, ax = plt.subplots() ax.plot(x, F_inv(x, a=a, theta=theta)) ax.set(xlabel="$x$", ylabel="$F^{-1}(x)$") def plot_FI_EI(x, FI_exc, FI_inh): plt.figure() plt.plot(x, FI_exc, 'b', label='E population') plt.plot(x, FI_inh, 'r', label='I population') plt.legend(loc='lower right') plt.xlabel('x (a.u.)') plt.ylabel('F(x)') plt.show() def my_test_plot(t, rE1, rI1, rE2, rI2): plt.figure() ax1 = plt.subplot(211) ax1.plot(pars['range_t'], rE1, 'b', label='E population') ax1.plot(pars['range_t'], rI1, 'r', label='I population') ax1.set_ylabel('Activity') ax1.legend(loc='best') ax2 = plt.subplot(212, sharex=ax1, sharey=ax1) ax2.plot(pars['range_t'], rE2, 'b', label='E population') ax2.plot(pars['range_t'], rI2, 'r', label='I population') ax2.set_xlabel('t (ms)') ax2.set_ylabel('Activity') ax2.legend(loc='best') plt.tight_layout() plt.show() def plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI): plt.figure() plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.show() def my_plot_nullcline(pars): Exc_null_rE = np.linspace(-0.01, 0.96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, 0.8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') def my_plot_vector(pars, my_n_skip=2, myscale=5): EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = my_n_skip plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=myscale, facecolor='c') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectory(pars, mycolor, x_init, mylabel): pars = pars.copy() pars['rE_init'], pars['rI_init'] = x_init[0], x_init[1] rE_tj, rI_tj = simulate_wc(**pars) plt.plot(rE_tj, rI_tj, color=mycolor, label=mylabel) plt.plot(x_init[0], x_init[1], 'o', color=mycolor, ms=8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectories(pars, dx, n, mylabel): """ Solve for I along the E_grid from dE/dt = 0. Expects: pars : Parameter dictionary dx : increment of initial values n : n*n trjectories mylabel : label for legend Returns: figure of trajectory """ pars = pars.copy() for ie in range(n): for ii in range(n): pars['rE_init'], pars['rI_init'] = dx * ie, dx * ii rE_tj, rI_tj = simulate_wc(**pars) if (ie == n-1) & (ii == n-1): plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8, label=mylabel) else: plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def plot_complete_analysis(pars): plt.figure(figsize=(7.7, 6.)) # plot example trajectories my_plot_trajectories(pars, 0.2, 6, 'Sample trajectories \nfor different init. 
conditions') my_plot_trajectory(pars, 'orange', [0.6, 0.8], 'Sample trajectory for \nlow activity') my_plot_trajectory(pars, 'm', [0.6, 0.6], 'Sample trajectory for \nhigh activity') # plot nullclines my_plot_nullcline(pars) # plot vector field EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = 2 plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=5., facecolor='c') plt.legend(loc=[1.02, 0.57], handlelength=1) plt.show() def plot_fp(x_fp, position=(0.02, 0.1), rotation=0): plt.plot(x_fp[0], x_fp[1], 'ko', ms=8) plt.text(x_fp[0] + position[0], x_fp[1] + position[1], f'Fixed Point1=\n({x_fp[0]:.3f}, {x_fp[1]:.3f})', horizontalalignment='center', verticalalignment='bottom', rotation=rotation) ###Output _____no_output_____ ###Markdown The helper functions included:- Parameter dictionary: `default_pars(**kwargs)`. You can use: - `pars = default_pars()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars(T=T_sim, dt=time_step)` to set a different simulation time and time step - After `pars = default_pars()`, use `par['New_para'] = value` to add a new parameter with its value - Pass to functions that accept individual parameters with `func(**pars)`- F-I curve: `F(x, a, theta)`- Derivative of the F-I curve: `dF(x, a, theta)`- Plotting utilities --- Section 1: Wilson-Cowan model of excitatory and inhibitory populations ###Code # @title Video 1: Phase analysis of the Wilson-Cowan E-I model from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="GCpQmh45crM", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Section 1.1: Mathematical description of the WC modelMany of the rich dynamics recorded in the brain are generated by the interaction of excitatory and inhibitory subtype neurons. Here, similar to what we did in the previous tutorial, we will model two coupled populations of E and I neurons (**Wilson-Cowan** model). We can write two coupled differential equations, each representing the dynamics of the excitatory or inhibitory population:\begin{align}\tau_E \frac{dr_E}{dt} &= -r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)\\\tau_I \frac{dr_I}{dt} &= -r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) \qquad (1)\end{align}$r_E(t)$ represents the average activation (or firing rate) of the excitatory population at time $t$, and $r_I(t)$ the activation (or firing rate) of the inhibitory population. The parameters $\tau_E$ and $\tau_I$ control the timescales of the dynamics of each population. 
Connection strengths are given by: $w_{EE}$ (E $\rightarrow$ E), $w_{EI}$ (I $\rightarrow$ E), $w_{IE}$ (E $\rightarrow$ I), and $w_{II}$ (I $\rightarrow$ I). The terms $w_{EI}$ and $w_{IE}$ represent connections from inhibitory to excitatory population and vice versa, respectively. The transfer functions (or F-I curves) $F_E(x;a_E,\theta_E)$ and $F_I(x;a_I,\theta_I)$ can be different for the excitatory and the inhibitory populations. Exercise 1: Plot out the F-I curves for the E and I populations Let's first plot out the F-I curves for the E and I populations using the function defined above with default parameter values. ###Code pars = default_pars() x = np.arange(0, 10, .1) print(pars['a_E'], pars['theta_E']) print(pars['a_I'], pars['theta_I']) ################################################################### # TODO for students: compute and plot the F-I curve here # # Note: aE, thetaE, aI and theta_I are in the dictionray 'pars' # ################################################################### # Compute the F-I curve of the excitatory population FI_exc = ... # Compute the F-I curve of the inhibitory population FI_inh = ... # Uncomment when you fill the (...) # plot_FI_EI(x, FI_exc, FI_inh) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_b3a0ec15.py)*Example output:* Section 1.2: Simulation scheme for the Wilson-Cowan modelEquation $1$ can be integrated numerically. Using the Euler method, the dynamics of E and I populations can be simulated on a time-grid of stepsize $\Delta t$. The updates for the activity of the excitatory and the inhibitory populations can be written as:\begin{align}r_E[k+1] &= r_E[k] + \Delta r_E[k]\\r_I[k+1] &= r_I[k] + \Delta r_I[k] \end{align}with the increments\begin{align}\Delta r_E[k] &= \frac{\Delta t}{\tau_E}[-r_E[k] + F_E(w_{EE}r_E[k] -w_{EI}r_I[k] + I^{\text{ext}}_E[k];a_E,\theta_E)]\\\Delta r_I[k] &= \frac{\Delta t}{\tau_I}[-r_I[k] + F_I(w_{IE}r_E[k] -w_{II}r_I[k] + I^{\text{ext}}_I[k];a_I,\theta_I)] \end{align} Exercise 2: Numerically integrate the Wilson-Cowan equations ###Code def simulate_wc(tau_E, a_E, theta_E, tau_I, a_I, theta_I, wEE, wEI, wIE, wII, I_ext_E, I_ext_I, rE_init, rI_init, dt, range_t, **other_pars): """ Simulate the Wilson-Cowan equations Args: Parameters of the Wilson-Cowan model Returns: rE, rI (arrays) : Activity of excitatory and inhibitory populations """ # Initialize activity arrays Lt = range_t.size rE = np.append(rE_init, np.zeros(Lt - 1)) rI = np.append(rI_init, np.zeros(Lt - 1)) I_ext_E = I_ext_E * np.ones(Lt) I_ext_I = I_ext_I * np.ones(Lt) # Simulate the Wilson-Cowan equations for k in range(Lt - 1): ######################################################################## # TODO for students: compute drE and drI and remove the error raise NotImplementedError("Student exercise: compute the change in E/I") ######################################################################## # Calculate the derivative of the E population drE = ... # Calculate the derivative of the I population drI = ... 
# Update using Euler's method rE[k + 1] = rE[k] + drE rI[k + 1] = rI[k] + drI return rE, rI pars = default_pars() # Here are two trajectories with close intial values # Uncomment these lines to test your function # rE1, rI1 = simulate_wc(**default_pars(rE_init=.32, rI_init=.15)) # rE2, rI2 = simulate_wc(**default_pars(rE_init=.33, rI_init=.15)) # my_test_plot(pars['range_t'], rE1, rI1, rE2, rI2) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_af0bd722.py)*Example output:* The two plots above show the temporal evolution of excitatory ($r_E$, blue) and inhibitory ($r_I$, red) activity for two different sets of initial conditions. Interactive Demo: population trajectories with different initial valuesIn this interactive demo, we will simulate the Wilson-Cowan model and plot the trajectories of each population. What happens to the E and I population trajectories with different initial conditions? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def plot_EI_diffinitial(rE_init=0.0): pars = default_pars(rE_init=rE_init, rI_init=.15) rE, rI = simulate_wc(**pars) plt.figure() plt.plot(pars['range_t'], rE, 'b', label='E population') plt.plot(pars['range_t'], rI, 'r', label='I population') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() _ = widgets.interact(plot_EI_diffinitial, rE_init=(0.30, 0.35, .01)) ###Output _____no_output_____ ###Markdown Think!It is evident that the steady states of the neuronal response can be different when different initial states are chosen. Why is that? We will discuss this in the next section but try to think about it first. --- Section 2: Phase plane analysisJust like we used a graphical method to study the dynamics of a 1-D system in the previous tutorial, here we will learn a graphical approach called **phase plane analysis** to study the dynamics of a 2-D system like the Wilson-Cowan model. So far, we have plotted the activities of the two populations as a function of time, i.e., in the `Activity-t` plane, either the $(t, r_E(t))$ plane or the $(t, r_I(t))$ one. Instead, we can plot the two activities $r_E(t)$ and $r_I(t)$ against each other at any time point $t$. This characterization in the `rI-rE` plane $(r_I(t), r_E(t))$ is called the **phase plane**. Each line in the phase plane indicates how both $r_E$ and $r_I$ evolve with time. 
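As a quick static illustration (an added example; the interactive demo below shows the same idea with a time slider, and it assumes `simulate_wc` from the exercise above has been completed), a single simulated trajectory can be drawn directly in the phase plane:

###Code
# Illustrative sketch (added for clarity): one trajectory in the (rE, rI) phase plane.
pars = default_pars(T=10, rE_init=0.6, rI_init=0.8)
rE, rI = simulate_wc(**pars)

plt.figure(figsize=(5, 5))
plt.plot(rE, rI, 'k')                 # the whole trajectory; time is implicit
plt.plot(rE[0], rI[0], 'go', label='start')
plt.plot(rE[-1], rI[-1], 'ko', label='end')
plt.xlabel(r'$r_E$')
plt.ylabel(r'$r_I$')
plt.legend(loc='best')
plt.show()

###Output
_____no_output_____

###Markdown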
###Code # @title Video 2: Nullclines and Vector Fields from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="V2SBAK2Xf8Y", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Interactive Demo: From the Activity - time plane to the **$r_I$ - $r_E$** phase planeIn this demo, we will visualize the system dynamics using both the `Activity-time` and the `(rE, rI)` phase plane. The circles indicate the activities at a given time $t$, while the lines represent the evolution of the system for the entire duration of the simulation.Move the time slider to better understand how the top plot relates to the bottom plot. Does the bottom plot have explicit information about time? What information does it give us? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! pars = default_pars(T=10, rE_init=0.6, rI_init=0.8) rE, rI = simulate_wc(**pars) def plot_activity_phase(n_t): plt.figure(figsize=(8, 5.5)) plt.subplot(211) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.plot(pars['range_t'][n_t], rE[n_t], 'bo') plt.plot(pars['range_t'][n_t], rI[n_t], 'ro') plt.axvline(pars['range_t'][n_t], 0, 1, color='k', ls='--') plt.xlabel('t (ms)', fontsize=14) plt.ylabel('Activity', fontsize=14) plt.legend(loc='best', fontsize=14) plt.subplot(212) plt.plot(rE, rI, 'k') plt.plot(rE[n_t], rI[n_t], 'ko') plt.xlabel(r'$r_E$', fontsize=18, color='b') plt.ylabel(r'$r_I$', fontsize=18, color='r') plt.tight_layout() plt.show() _ = widgets.interact(plot_activity_phase, n_t=(0, len(pars['range_t']) - 1, 1)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_222c9db1.py) Section 2.1: Nullclines of the Wilson-Cowan EquationsAn important concept in the phase plane analysis is the "nullcline" which is defined as the set of points in the phase plane where the activity of one population (but not necessarily the other) does not change.In other words, the $E$ and $I$ nullclines of Equation $(1)$ are defined as the points where $\displaystyle{\frac{dr_E}{dt}}=0$, for the excitatory nullcline, or $\displaystyle\frac{dr_I}{dt}=0$ for the inhibitory nullcline. That is:\begin{align}-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E) &= 0 \qquad (2)\\[1mm]-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) &= 0 \qquad (3)\end{align} Exercise 3: Compute the nullclines of the Wilson-Cowan modelIn the next exercise, we will compute and plot the nullclines of the E and I population. 
Along the nullcline of excitatory population Equation $2$, you can calculate the inhibitory activity by rewriting Equation $2$ into\begin{align}r_I = \frac{1}{w_{EI}}\big{[}w_{EE}r_E - F_E^{-1}(r_E; a_E,\theta_E) + I^{\text{ext}}_E \big{]}. \qquad(4)\end{align}where $F_E^{-1}(r_E; a_E,\theta_E)$ is the inverse of the excitatory transfer function (defined below). Equation $4$ defines the $r_E$ nullcline. Along the nullcline of inhibitory population Equation $3$, you can calculate the excitatory activity by rewriting Equation $3$ into \begin{align}r_E = \frac{1}{w_{IE}} \big{[} w_{II}r_I + F_I^{-1}(r_I;a_I,\theta_I) - I^{\text{ext}}_I \big{]}. \qquad (5) \end{align}shere $F_I^{-1}(r_I; a_I,\theta_I)$ is the inverse of the inhibitory transfer function (defined below). Equation $5$ defines the $I$ nullcline. Note that, when computing the nullclines with Equations 4-5, we also need to calculate the inverse of the transfer functions. \\The inverse of the sigmoid shaped **f-I** function that we have been using is:$$F^{-1}(x; a, \theta) = -\frac{1}{a} \ln \left[ \frac{1}{x + \displaystyle \frac{1}{1+\text{e}^{a\theta}}} - 1 \right] + \theta \qquad (6)$$The first step is to implement the inverse transfer function: ###Code def F_inv(x, a, theta): """ Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: F_inverse : value of the inverse function """ ######################################################################### # TODO for students: compute F_inverse raise NotImplementedError("Student exercise: compute the inverse of F(x)") ######################################################################### # Calculate Finverse (ln(x) can be calculated as np.log(x)) F_inverse = ... return F_inverse pars = default_pars() x = np.linspace(1e-6, 1, 100) # Uncomment the next line to test your function # plot_FI_inverse(x, a=1, theta=3) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_937a4040.py)*Example output:* Now you can compute the nullclines, using Equations 4-5: ###Code def get_E_nullcline(rE, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars): """ Solve for rI along the rE from drE/dt = 0. Args: rE : response of excitatory population a_E, theta_E, wEE, wEI, I_ext_E : Wilson-Cowan excitatory parameters Other parameters are ignored Returns: rI : values of inhibitory population along the nullcline on the rE """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the E nullcline") ######################################################################### # calculate rI for E nullclines on rI rI = ... return rI def get_I_nullcline(rI, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """ Solve for E along the rI from dI/dt = 0. 
Args: rI : response of inhibitory population a_I, theta_I, wIE, wII, I_ext_I : Wilson-Cowan inhibitory parameters Other parameters are ignored Returns: rE : values of the excitatory population along the nullcline on the rI """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the I nullcline") ######################################################################### # calculate rE for I nullclines on rI rE = ... return rE pars = default_pars() Exc_null_rE = np.linspace(-0.01, 0.96, 100) Inh_null_rI = np.linspace(-.01, 0.8, 100) # Uncomment these lines to test your functions # Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) # Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) # plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_2366ea57.py)*Example output:* Note that by definition along the blue line in the phase-plane spanned by $r_E, r_I$, $\displaystyle{\frac{dr_E(t)}{dt}} = 0$, therefore, it is called a nullcline. That is, the blue nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_E(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_E(t)}{dt}} < 0$.The same is true for the red line along which $\displaystyle{\frac{dr_I(t)}{dt}} = 0$. That is, the red nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_I(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_I(t)}{dt}} < 0$. Section 2.2: Vector fieldHow can the phase plane and the nullcline curves help us understand the behavior of the Wilson-Cowan model? The activities of the $E$ and $I$ populations $r_E(t)$ and $r_I(t)$ at each time point $t$ correspond to a single point in the phase plane, with coordinates $(r_E(t),r_I(t))$. Therefore, the time-dependent trajectory of the system can be described as a continuous curve in the phase plane, and the tangent vector to the trajectory, which is defined as the vector $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$, indicates the direction towards which the activity is evolving and how fast is the activity changing along each axis. In fact, for each point $(E,I)$ in the phase plane, we can compute the tangent vector $\bigg{(}\displaystyle{\frac{dr_E}{dt},\frac{dr_I}{dt}}\bigg{)}$, which indicates the behavior of the system when it traverses that point. The map of tangent vectors in the phase plane is called **vector field**. The behavior of any trajectory in the phase plane is determined by i) the initial conditions $(r_E(0),r_I(0))$, and ii) the vector field $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$.In general, the value of the vector field at a particular point in the phase plane is represented by an arrow. The orientation and the size of the arrow reflect the direction and the norm of the vector, respectively. 
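To keep the plotting mechanics separate from the modeling, here is a minimal sketch of how such a map of arrows is rendered with `plt.quiver`. The field below is an arbitrary toy field, not the Wilson-Cowan vector field (which you will compute in the next exercise), so nothing in the upcoming exercise is given away.
###Code
# Minimal sketch: rendering a toy 2-D vector field with plt.quiver.
# The field (dx, dy) here is arbitrary; the Wilson-Cowan field is computed
# in the next exercise.
xx, yy = np.meshgrid(np.linspace(0, 1, 15), np.linspace(0, 1, 15))
dx = -(yy - 0.5)   # toy horizontal component of each arrow
dy = (xx - 0.5)    # toy vertical component of each arrow

plt.figure(figsize=(5, 5))
plt.quiver(xx, yy, dx, dy, angles='xy', facecolor='c')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
###Output
_____no_output_____
###Markdown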
Exercise 4: Compute and plot the vector field $\displaystyle{\Big{(}\frac{dr_E}{dt}, \frac{dr_I}{dt} \Big{)}}$Note that\begin{align}\frac{dr_E}{dt} &= [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)]\frac{1}{\tau_E}\\\frac{dr_I}{dt} &= [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I)]\frac{1}{\tau_I} \end{align} ###Code def EIderivs(rE, rI, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Time derivatives for E/I variables (dE/dt, dI/dt).""" ###################################################################### # TODO for students: compute drEdt and drIdt and disable the error raise NotImplementedError("Student exercise: compute the vector field") ###################################################################### # Compute the derivative of rE drEdt = ... # Compute the derivative of rI drIdt = ... return drEdt, drIdt # Uncomment below to test your function # plot_complete_analysis(default_pars()) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_5a629797.py)*Example output:* The last phase plane plot shows us that: - Trajectories seem to follow the direction of the vector field- Different trajectories eventually always reach one of two points depending on the initial conditions. - The two points where the trajectories converge are the intersection of the two nullcline curves. Think! There are, in total, three intersection points, meaning that the system has three fixed points.- One of the fixed points (the one in the middle) is never the final state of a trajectory. Why is that? - Why the arrows tend to get smaller as they approach the fixed points? [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_3d37729b.py) --- SummaryCongratulations! You have finished the second day of the last week of the neuromatch academy! Here, you learned how to simulate a rate based model consisting of excitatory and inhibitory population of neurons.In the last tutorial on dynamical neuronal networks you learned to:- Implement and simulate a 2D system composed of an E and an I population of neurons using the **Wilson-Cowan** model- Plot the frequency-current (F-I) curves for both populations- Examine the behavior of the system using phase **plane analysis**, **vector fields**, and **nullclines**.Do you have more time? Have you finished early? 
We have more fun material for you!Below are some, more advanced concepts on dynamical systems:- You will learn how to find the fixed points on such a system, and to investigate its stability by linearizing its dynamics and examining the **Jacobian matrix**.- You will see identify conditions under which the Wilson-Cowan model can exhibit oscillations.If you need even more, there are two applications of the Wilson-Cowan model:- Visualization of an Inhibition-stabilized network- Simulation of working memory --- Bonus 1: Fixed points, stability analysis, and limit cycles in the Wilson-Cowan model ###Code # @title Video 3: Fixed points and their stability from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="jIx26iQ69ps", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Fixed Points of the E/I systemClearly, the intersection points of the two nullcline curves are the fixed points of the Wilson-Cowan model in Equation $(1)$. In the next exercise, we will find the coordinate of all fixed points for a given set of parameters.We'll make use of two functions, similar to ones we saw in the previous tutorial, which use a root-finding algorithm to find the fixed points of the system with Excitatory and Inhibitory populations. ###Code # @markdown *Execute the cell to define `my_fp` and `check_fp`* def my_fp(pars, rE_init, rI_init): """ Use opt.root function to solve Equations (2)-(3) from initial values """ tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E'] tau_I, a_I, theta_I = pars['tau_I'], pars['a_I'], pars['theta_I'] wEE, wEI = pars['wEE'], pars['wEI'] wIE, wII = pars['wIE'], pars['wII'] I_ext_E, I_ext_I = pars['I_ext_E'], pars['I_ext_I'] # define the right hand of wilson-cowan equations def my_WCr(x): rE, rI = x drEdt = (-rE + F(wEE * rE - wEI * rI + I_ext_E, a_E, theta_E)) / tau_E drIdt = (-rI + F(wIE * rE - wII * rI + I_ext_I, a_I, theta_I)) / tau_I y = np.array([drEdt, drIdt]) return y x0 = np.array([rE_init, rI_init]) x_fp = opt.root(my_WCr, x0).x return x_fp def check_fp(pars, x_fp, mytol=1e-6): """ Verify (drE/dt)^2 + (drI/dt)^2< mytol Args: pars : Parameter dictionary fp : value of fixed point mytol : tolerance, default as 10^{-6} Returns : Whether it is a correct fixed point: True/False """ drEdt, drIdt = EIderivs(x_fp[0], x_fp[1], **pars) return drEdt**2 + drIdt**2 < mytol ###Output _____no_output_____ ###Markdown Exercise 5: Find the fixed points of the Wilson-Cowan modelFrom the above nullclines, we notice that the system features three fixed points with the parameters we used. To find their coordinates, we need to choose proper initial value to give to the `opt.root` function inside of the function `my_fp` we just defined, since the algorithm can only find fixed points in the vicinity of the initial value. 
In this exercise, you will use the function `my_fp` to find each of the fixed points by varying the initial values. Note that you can choose the values near the intersections of the nullclines as the initial values to calculate the fixed points.
###Code
pars = default_pars()

######################################################################
# TODO: Provide initial values to calculate the fixed points
# Check if the x_fp's are correct with the function check_fp(x_fp)
# Hint: vary different initial values to find the correct fixed points
######################################################################

# my_plot_nullcline(pars)

# Find the first fixed point
# x_fp_1 = my_fp(pars, ..., ...)
# if check_fp(pars, x_fp_1):
#   plot_fp(x_fp_1)

# Find the second fixed point
# x_fp_2 = my_fp(pars, ..., ...)
# if check_fp(pars, x_fp_2):
#   plot_fp(x_fp_2)

# Find the third fixed point
# x_fp_3 = my_fp(pars, ..., ...)
# if check_fp(pars, x_fp_3):
#   plot_fp(x_fp_3)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_0dd7ba5a.py)*Example output:* Stability of a fixed point and eigenvalues of the Jacobian MatrixFirst, let's rewrite the system $(1)$ as:\begin{align}&\frac{dr_E}{dt} = G_E(r_E,r_I)\\[0.5mm]&\frac{dr_I}{dt} = G_I(r_E,r_I)\end{align}where\begin{align}&G_E(r_E,r_I) = \frac{1}{\tau_E} [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a,\theta)]\\[1mm]&G_I(r_E,r_I) = \frac{1}{\tau_I} [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a,\theta)]\end{align}By definition, $\displaystyle\frac{dr_E}{dt}=0$ and $\displaystyle\frac{dr_I}{dt}=0$ at each fixed point. Therefore, if the initial state is exactly at the fixed point, the state of the system will not change as time evolves. However, if the initial state deviates slightly from the fixed point, there are two possibilities: 1. The trajectory will be attracted back to the fixed point. 2. The trajectory will diverge from the fixed point. These two possibilities define the type of fixed point, i.e., stable or unstable. Similar to the 1D system studied in the previous tutorial, the stability of a fixed point $(r_E^*, r_I^*)$ can be determined by linearizing the dynamics of the system (can you figure out how?). The linearization will yield a matrix of first-order derivatives called the Jacobian matrix: \begin{equation} J= \left[ {\begin{array}{cc} \displaystyle{\frac{\partial}{\partial r_E}}G_E(r_E^*, r_I^*) & \displaystyle{\frac{\partial}{\partial r_I}}G_E(r_E^*, r_I^*)\\[1mm] \displaystyle\frac{\partial}{\partial r_E} G_I(r_E^*, r_I^*) & \displaystyle\frac{\partial}{\partial r_I}G_I(r_E^*, r_I^*) \\ \end{array} } \right] \quad (7)\end{equation}\\The eigenvalues of the Jacobian matrix calculated at the fixed point will determine whether it is a stable or unstable fixed point.\\We can now compute the derivatives needed to build the Jacobian matrix. Using the chain and product rules the derivatives for the excitatory population are given by:\\\begin{align}&\frac{\partial}{\partial r_E} G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-1 + w_{EE} F_E'(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E;\alpha_E, \theta_E)] \\[1mm]&\frac{\partial}{\partial r_I} G_E(r_E^*, r_I^*)= \frac{1}{\tau_E} [-w_{EI} F_E'(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E;\alpha_E, \theta_E)]\end{align}\\The same applies to the inhibitory population.
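Before assembling the Jacobian of the Wilson-Cowan system, it may help to see the eigenvalue criterion on its own. The small sketch below uses plain NumPy on two hand-picked 2x2 matrices (illustrative examples, not Jacobians of this model): if all eigenvalues have negative real parts, perturbations decay (stable); if any eigenvalue has a positive real part, perturbations grow (unstable). This is exactly the test you will apply to the Jacobian you build in the next exercise.
###Code
# Illustrative sketch of the eigenvalue stability test for 2x2 matrices.
# These matrices are hand-picked examples, not Wilson-Cowan Jacobians.
J_stable = np.array([[-1.0, 0.5],
                     [-0.5, -2.0]])   # both eigenvalues have negative real part
J_unstable = np.array([[0.3, 1.0],
                       [0.0, -1.0]])  # one eigenvalue (0.3) has positive real part

for name, J_demo in [('stable example', J_stable), ('unstable example', J_unstable)]:
    evals_demo = np.linalg.eig(J_demo)[0]
    print(f"{name}: eigenvalues = {np.round(evals_demo, 3)}, "
          f"max real part = {np.max(np.real(evals_demo)):.3f}")
###Output
_____no_output_____
###Markdown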
Exercise 6: Compute the Jacobian Matrix for the Wilson-Cowan modelHere, you can use `dF(x,a,theta)` defined in the `Helper functions` to calculate the derivative of the F-I curve. ###Code def get_eig_Jacobian(fp, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Compute eigenvalues of the Wilson-Cowan Jacobian matrix at fixed point.""" # Initialization rE, rI = fp J = np.zeros((2, 2)) ########################################################################### # TODO for students: compute J and disable the error raise NotImplementedError("Student excercise: compute the Jacobian matrix") ########################################################################### # Compute the four elements of the Jacobian matrix J[0, 0] = ... J[0, 1] = ... J[1, 0] = ... J[1, 1] = ... # Compute and return the eigenvalues evals = np.linalg.eig(J)[0] return evals # Uncomment below to test your function when you get the correct fixed point # eig_1 = get_eig_Jacobian(x_fp_1, **pars) # eig_2 = get_eig_Jacobian(x_fp_2, **pars) # eig_3 = get_eig_Jacobian(x_fp_3, **pars) # print(eig_1, 'Stable point') # print(eig_2, 'Unstable point') # print(eig_3, 'Stable point') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_e83cfc05.py) As is evident, the stable fixed points correspond to the negative eigenvalues, while unstable point corresponds to at least one positive eigenvalue. The sign of the eigenvalues is determined by the connectivity (interaction) between excitatory and inhibitory populations. Below we investigate the effect of $w_{EE}$ on the nullclines and the eigenvalues of the dynamical system. \* _Critical change is referred to as **pitchfork bifurcation**_. Effect of `wEE` on the nullclines and the eigenvalues ###Code # @title # @markdown Make sure you execute this cell to see the plot! eig_1_M = [] eig_2_M = [] eig_3_M = [] pars = default_pars() wEE_grid = np.linspace(6, 10, 40) my_thre = 7.9 for wEE in wEE_grid: x_fp_1 = [0., 0.] x_fp_2 = [.4, .1] x_fp_3 = [.8, .1] pars['wEE'] = wEE if wEE < my_thre: x_fp_1 = my_fp(pars, x_fp_1[0], x_fp_1[1]) eig_1 = get_eig_Jacobian(x_fp_1, **pars) eig_1_M.append(np.max(np.real(eig_1))) else: x_fp_1 = my_fp(pars, x_fp_1[0], x_fp_1[1]) eig_1 = get_eig_Jacobian(x_fp_1, **pars) eig_1_M.append(np.max(np.real(eig_1))) x_fp_2 = my_fp(pars, x_fp_2[0], x_fp_2[1]) eig_2 = get_eig_Jacobian(x_fp_2, **pars) eig_2_M.append(np.max(np.real(eig_2))) x_fp_3 = my_fp(pars, x_fp_3[0], x_fp_3[1]) eig_3 = get_eig_Jacobian(x_fp_3, **pars) eig_3_M.append(np.max(np.real(eig_3))) eig_1_M = np.array(eig_1_M) eig_2_M = np.array(eig_2_M) eig_3_M = np.array(eig_3_M) plt.figure(figsize=(8, 5.5)) plt.plot(wEE_grid, eig_1_M, 'ko', alpha=0.5) plt.plot(wEE_grid[wEE_grid >= my_thre], eig_2_M, 'bo', alpha=0.5) plt.plot(wEE_grid[wEE_grid >= my_thre], eig_3_M, 'ro', alpha=0.5) plt.xlabel(r'$w_{\mathrm{EE}}$') plt.ylabel('maximum real part of eigenvalue') plt.show() ###Output _____no_output_____ ###Markdown Interactive Demo: Nullclines position in the phase plane changes with parameter valuesIn this interactive widget, we will explore how the nullclines move for different values of the parameter $w_{EE}$. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! 
def plot_nullcline_diffwEE(wEE): """ plot nullclines for different values of wEE """ pars = default_pars(wEE=wEE) # plot the E, I nullclines Exc_null_rE = np.linspace(-0.01, .96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, .8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.figure(figsize=(12, 5.5)) plt.subplot(121) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.subplot(222) pars['rE_init'], pars['rI_init'] = 0.2, 0.2 rE, rI = simulate_wc(**pars) plt.plot(pars['range_t'], rE, 'b', label='E population', clip_on=False) plt.plot(pars['range_t'], rI, 'r', label='I population', clip_on=False) plt.ylabel('Activity') plt.legend(loc='best') plt.ylim(-0.05, 1.05) plt.title('E/I activity\nfor different initial conditions', fontweight='bold') plt.subplot(224) pars['rE_init'], pars['rI_init'] = 0.4, 0.1 rE, rI = simulate_wc(**pars) plt.plot(pars['range_t'], rE, 'b', label='E population', clip_on=False) plt.plot(pars['range_t'], rI, 'r', label='I population', clip_on=False) plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.ylim(-0.05, 1.05) plt.tight_layout() plt.show() _ = widgets.interact(plot_nullcline_diffwEE, wEE=(6., 10., .01)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_d4eb0391.py) We can also investigate the effect of different $w_{EI}$, $w_{IE}$, $w_{II}$, $\tau_{E}$, $\tau_{I}$, and $I_{E}^{\text{ext}}$ on the stability of fixed points. In addition, we can also consider the perturbation of the parameters of the gain curve $F(\cdot)$. Limit cycle - OscillationsFor some values of interaction terms ($w_{EE}, w_{IE}, w_{EI}, w_{II}$ the eigenvalues can become complex. When at least one pair of eigenvalues is complex, oscillations arise. The stability of oscillations is determined by the real part of the eigenvalues (+ve real part oscillations will grow, -ve real part oscillations will die out). The size of the complex part determines the frequency of oscillations. For instance, if we use a different set of parameters, $w_{EE}=6.4$, $w_{EI}=4.8$, $w_{IE}=6.$, $w_{II}=1.2$, and $I_{E}^{\text{ext}}=0.8$, then we shall observe that the E and I population activity start to oscillate! Please execute the cell below to check the oscillatory behavior. ###Code # @title # @markdown Make sure you execute this cell to see the oscillations! pars = default_pars(T=100.) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 pars['rE_init'], pars['rI_init'] = 0.25, 0.25 rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() ###Output _____no_output_____ ###Markdown Exercise 7: Plot the phase planeWe can also understand the oscillations of the population behavior using the phase plane. By plotting a set of trajectories with different initial states, we can see that these trajectories will move in a circle instead of converging to a fixed point. 
This circle is called "limit cycle" and shows the periodic oscillations of the $E$ and $I$ population behavior under some conditions.Try to plot the phase plane using the previously defined functions. ###Code pars = default_pars(T=100.) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 plt.figure(figsize=(7, 5.5)) my_plot_nullcline(pars) ############################################################################### # TODO for students: plot phase plane: nullclines, trajectories, fixed point # ############################################################################### # Find the correct fixed point # x_fp_1 = my_fp(pars, ..., ...) # if check_fp(pars, x_fp_1): # plot_fp(x_fp_1, position=(0, 0), rotation=40) my_plot_trajectories(pars, 0.2, 3, 'Sample trajectories \nwith different initial values') my_plot_vector(pars) plt.legend(loc=[1.01, 0.7]) plt.xlim(-0.05, 1.01) plt.ylim(-0.05, 0.65) plt.show() ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_03c5c8dd.py)*Example output:* Interactive Demo: Limit cycle and oscillations.From the above examples, the change of model parameters changes the shape of the nullclines and, accordingly, the behavior of the $E$ and $I$ populations from steady fixed points to oscillations. However, the shape of the nullclines is unable to fully determine the behavior of the network. The vector field also matters. To demonstrate this, here, we will investigate the effect of time constants on the population behavior. By changing the inhibitory time constant $\tau_I$, the nullclines do not change, but the network behavior changes substantially from steady state to oscillations with different frequencies. Such a dramatic change in the system behavior is referred to as a **bifurcation**. \\Please execute the code below to check this out. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def time_constant_effect(tau_i=0.5): pars = default_pars(T=100.) 
pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 pars['tau_I'] = tau_i Exc_null_rE = np.linspace(0.0, .9, 100) Inh_null_rI = np.linspace(0.0, .6, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.figure(figsize=(12.5, 5.5)) plt.subplot(121) # nullclines plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline', zorder=2) plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline', zorder=2) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') # fixed point x_fp_1 = my_fp(pars, 0.5, 0.5) plt.plot(x_fp_1[0], x_fp_1[1], 'ko', zorder=2) eig_1 = get_eig_Jacobian(x_fp_1, **pars) # trajectories for ie in range(5): for ii in range(5): pars['rE_init'], pars['rI_init'] = 0.1 * ie, 0.1 * ii rE_tj, rI_tj = simulate_wc(**pars) plt.plot(rE_tj, rI_tj, 'k', alpha=0.3, zorder=1) # vector field EI_grid_E = np.linspace(0., 1.0, 20) EI_grid_I = np.linspace(0., 0.6, 20) rE, rI = np.meshgrid(EI_grid_E, EI_grid_I) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = 2 plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=10, facecolor='c') plt.title(r'$\tau_I=$'+'%.1f ms' % tau_i) plt.subplot(122) # sample E/I trajectories pars['rE_init'], pars['rI_init'] = 0.25, 0.25 rE, rI = simulate_wc(**pars) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.title(r'$\tau_I=$'+'%.1f ms' % tau_i) plt.legend(loc='best') plt.tight_layout() plt.show() _ = widgets.interact(time_constant_effect, tau_i=(0.2, 3, .1)) ###Output _____no_output_____ ###Markdown Both $\tau_E$ and $\tau_I$ feature in the Jacobian of the two population network (eq 7). So here is seems that the by increasing $\tau_I$ the eigenvalues corresponding to the stable fixed point are becoming complex.Intuitively, when $\tau_I$ is smaller, inhibitory activity changes faster than excitatory activity. As inhibition exceeds above a certain value, high inhibition inhibits excitatory population but that in turns means that inhibitory population gets smaller input (from the exc. connection). So inhibition decreases rapidly. But this means that excitation recovers -- and so on ... 
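If you would like to check this intuition numerically rather than only with the widget, the rough sketch below tracks the fixed-point eigenvalues as $\tau_I$ grows, reusing `my_fp` and `get_eig_Jacobian` with the oscillatory parameter set from above. It assumes you have completed Exercise 6 so that `get_eig_Jacobian` returns the eigenvalues.
###Code
# Rough sketch: fixed-point eigenvalues as a function of tau_I.
# Assumes my_fp() and a completed get_eig_Jacobian() from Exercise 6.
pars_tau = default_pars(T=100.)
pars_tau['wEE'], pars_tau['wEI'] = 6.4, 4.8
pars_tau['wIE'], pars_tau['wII'] = 6.0, 1.2
pars_tau['I_ext_E'] = 0.8

for tau_i in [0.5, 1.0, 2.0, 3.0]:
    pars_tau['tau_I'] = tau_i
    fp = my_fp(pars_tau, 0.5, 0.5)   # fixed point (its location does not depend on tau_I)
    evals = get_eig_Jacobian(fp, **pars_tau)
    print(f"tau_I = {tau_i:.1f} ms | eigenvalues = {np.round(evals, 3)} | "
          f"max real part = {np.max(np.real(evals)):.3f}")
###Output
_____no_output_____
###Markdown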
--- Bonus 2: Inhibition-stabilized network (ISN)As described above, one can obtain the linear approximation around the fixed point as \begin{equation} \frac{d}{dr} \vec{R}= \left[ {\begin{array}{cc} \displaystyle{\frac{\partial G_E}{\partial r_E}} & \displaystyle{\frac{\partial G_E}{\partial r_I}}\\[1mm] \displaystyle\frac{\partial G_I}{\partial r_E} & \displaystyle\frac{\partial G_I}{\partial r_I} \\ \end{array} } \right] \vec{R},\end{equation}\\where $\vec{R} = [r_E, r_I]^{\rm T}$ is the vector of the E/I activity.Let's direct our attention to the excitatory subpopulation which follows:\\\begin{equation}\frac{dr_E}{dt} = \frac{\partial G_E}{\partial r_E}\cdot r_E + \frac{\partial G_E}{\partial r_I} \cdot r_I\end{equation}\\Recall that, around fixed point $(r_E^*, r_I^*)$:\\\begin{align}&\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-1 + w_{EE} F'_{E}(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E; \alpha_E, \theta_E)] \qquad (8)\\[1mm]&\frac{\partial}{\partial r_I}G_E(r_E^*, r_I^*) = \frac{1}{\tau_E} [-w_{EI} F'_{E}(w_{EE}r_E^* -w_{EI}r_I^* + I^{\text{ext}}_E; \alpha_E, \theta_E)] \qquad (9)\\[1mm]&\frac{\partial}{\partial r_E}G_I(r_E^*, r_I^*) = \frac{1}{\tau_I} [w_{IE} F'_{I}(w_{IE}r_E^* -w_{II}r_I^* + I^{\text{ext}}_I; \alpha_I, \theta_I)] \qquad (10)\\[1mm]&\frac{\partial}{\partial r_I}G_I(r_E^*, r_I^*) = \frac{1}{\tau_I} [-1-w_{II} F'_{I}(w_{IE}r_E^* -w_{II}r_I^* + I^{\text{ext}}_I; \alpha_I, \theta_I)] \qquad (11)\end{align} \\From Equation. (8), it is clear that $\displaystyle{\frac{\partial G_E}{\partial r_I}}$ is negative since the $\displaystyle{\frac{dF}{dx}}$ is always positive. It can be understood by that the recurrent inhibition from the inhibitory activity ($I$) can reduce the excitatory ($E$) activity. However, as described above, $\displaystyle{\frac{\partial G_E}{\partial r_E}}$ has negative terms related to the "leak" effect, and positive term related to the recurrent excitation. Therefore, it leads to two different regimes:- $\displaystyle{\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*)}<0$, **noninhibition-stabilizednetwork (non-ISN) regime**- $\displaystyle{\frac{\partial}{\partial r_E}G_E(r_E^*, r_I^*)}>0$, **inhibition-stabilizednetwork (ISN) regime** Exercise 8: Compute $\displaystyle{\frac{\partial G_E}{\partial r_E}}$Implemet the function to calculate the $\displaystyle{\frac{\partial G_E}{\partial r_E}}$ for the default parameters, and the parameters of the limit cycle case. ###Code def get_dGdE(fp, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars): """ Simulate the Wilson-Cowan equations Args: fp : fixed point (E, I), array Other arguments are parameters of the Wilson-Cowan model Returns: J : the 2x2 Jacobian matrix """ rE, rI = fp ########################################################################## # TODO for students: compute dGdrE and disable the error raise NotImplementedError("Student excercise: compute the dG/dE, Eq. (13)") ########################################################################## # Calculate the J[0,0] dGdrE = ... 
return dGdrE # Uncomment below to test your function pars = default_pars() x_fp_1 = my_fp(pars, 0.1, 0.1) x_fp_2 = my_fp(pars, 0.3, 0.3) x_fp_3 = my_fp(pars, 0.8, 0.6) # dGdrE1 = get_dGdE(x_fp_1, **pars) # dGdrE2 = get_dGdE(x_fp_2, **pars) # dGdrE3 = get_dGdE(x_fp_3, **pars) print(f'For the default case:') # print(f'dG/drE(fp1) = {dGdrE1:.3f}') # print(f'dG/drE(fp2) = {dGdrE2:.3f}') # print(f'dG/drE(fp3) = {dGdrE3:.3f}') print('\n') pars = default_pars(wEE=6.4, wEI=4.8, wIE=6.0, wII=1.2, I_ext_E=0.8) x_fp_lc = my_fp(pars, 0.8, 0.8) # dGdrE_lc = get_dGdE(x_fp_lc, **pars) print('For the limit cycle case:') # print(f'dG/drE(fp_lc) = {dGdrE_lc:.3f}') ###Output _____no_output_____ ###Markdown **SAMPLE OUTPUT**```For the default case:dG/drE(fp1) = -0.650dG/drE(fp2) = 1.519dG/drE(fp3) = -0.706For the limit cycle case:dG/drE(fp_lc) = 0.837``` [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_1ff7a08c.py) Nullcline analysis of the ISNRecall that the E nullcline follows\\\begin{align}r_E = F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E). \end{align}\\That is, the firing rate $r_E$ can be a function of $r_I$. Let's take the derivative of $r_E$ over $r_I$, and obtain\\\begin{align}&\frac{dr_E}{dr_I} = F_E' \cdot (w_{EE}\frac{dr_E}{dr_I} -w_{EI}) \iff \\&(1-F_E'w_{EE})\frac{dr_E}{dr_I} = -F_E' w_{EI} \iff \\&\frac{dr_E}{dr_I} = \frac{F_E' w_{EI}}{F_E'w_{EE}-1}.\end{align}\\That is, in the phase plane `rI-rE`-plane, we can obtain the slope along the E nullcline as\\$$\frac{dr_I}{dr_E} = \frac{F_E'w_{EE}-1}{F_E' w_{EI}} \qquad (12)$$Similarly, we can obtain the slope along the I nullcline as \\$$\frac{dr_I}{dr_E} = \frac{F_I'w_{IE}}{F_I' w_{II}+1} \qquad (13)$$\\Then, we can find that $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm I-nullcline} >0$ in Equation (13).\\However, in Equation (12), the sign of $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}$ depends on the sign of $(F_E'w_{EE}-1)$. Note that, $(F_E'w_{EE}-1)$ is the same as what we show above (Equation (8)). Therefore, we can have the following results:- $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}<0$, **noninhibition-stabilizednetwork (non-ISN) regime**- $\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}>0$, **inhibition-stabilizednetwork (ISN) regime**\\In addition, it is important to point out the following two conclusions: \\**Conclusion 1:** The stability of a fixed point can determine the relationship between the slopes Equations (12) and (13). 
As discussed above, the fixed point is stable when the Jacobian matrix ($J$ in Equation (7)) has two eigenvalues with a negative real part, which indicates a positive determinant of $J$, i.e., $\text{det}(J)>0$.From the Jacobian matrix definition and from Equations (8-11), we can obtain:$ J= \left[ {\begin{array}{cc} \displaystyle{\frac{1}{\tau_E}(w_{EE}F_E'-1)} & \displaystyle{-\frac{1}{\tau_E}w_{EI}F_E'}\\[1mm] \displaystyle {\frac{1}{\tau_I}w_{IE}F_I'}& \displaystyle {\frac{1}{\tau_I}(-w_{II}F_I'-1)} \\ \end{array} } \right] $\\Note that, if we let \\$ T= \left[ {\begin{array}{cc} \displaystyle{\tau_E} & \displaystyle{0}\\[1mm] \displaystyle 0& \displaystyle \tau_I \\ \end{array} } \right] $, $ F= \left[ {\begin{array}{cc} \displaystyle{F_E'} & \displaystyle{0}\\[1mm] \displaystyle 0& \displaystyle F_I' \\ \end{array} } \right] $, and $ W= \left[ {\begin{array}{cc} \displaystyle{w_{EE}} & \displaystyle{-w_{EI}}\\[1mm] \displaystyle w_{IE}& \displaystyle -w_{II} \\ \end{array} } \right] $\\then, using matrix notation, $J=T^{-1}(F W - I)$ where $I$ is the identity matrix, i.e., $I = \begin{bmatrix} 1 & 0 \\0 & 1 \end{bmatrix}.$ \\Therefore, $\det{(J)}=\det{(T^{-1}(F W - I))}=(\det{(T^{-1})})(\det{(F W - I)}).$Since $\det{(T^{-1})}>0$, as time constants are positive by definition, the sign of $\det{(J)}$ is the same as the sign of $\det{(F W - I)}$, and so$$\det{(FW - I)} = (F_E' w_{EI})(F_I'w_{IE}) - (F_I' w_{II} + 1)(F_E'w_{EE} - 1) > 0.$$\\Then, combining this with Equations (12) and (13), we can obtain$$\frac{\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm I-nullcline}}{\Big{(} \displaystyle{\frac{dr_I}{dr_E}} \Big{)}_{\rm E-nullcline}} > 1. $$Therefore, at the stable fixed point, I nullcline has a steeper slope than the E nullcline. **Conclusion 2:** Effect of adding input to the inhibitory population.While adding the input $\delta I^{\rm ext}_I$ into the inhibitory population, we can find that the E nullcline (Equation (5)) stays the same, while the I nullcline has a pure left shift: the original I nullcline equation,\\\begin{equation}r_I = F_I(w_{IE}r_E-w_{II}r_I + I^{\text{ext}}_I ; \alpha_I, \theta_I)\end{equation}\\remains true if we take $I^{\text{ext}}_I \rightarrow I^{\text{ext}}_I +\delta I^{\rm ext}_I$ and $r_E\rightarrow r_E'=r_E-\frac{\delta I^{\rm ext}_I}{w_{IE}}$ to obtain\\\begin{equation}r_I = F_I(w_{IE}r_E'-w_{II}r_I + I^{\text{ext}}_I +\delta I^{\rm ext}_I; \alpha_I, \theta_I)\end{equation}\\Putting these points together, we obtain the phase plane pictures shown below. After adding input to the inhibitory population, it can be seen in the trajectories above and the phase plane below that, in an **ISN**, $r_I$ will increase first but then decay to the new fixed point in which both $r_I$ and $r_E$ are decreased compared to the original fixed point. However, by adding $\delta I^{\rm ext}_I$ into a **non-ISN**, $r_I$ will increase while $r_E$ will decrease. Interactive Demo: Nullclines of Example **ISN** and **non-ISN**In this interactive widget, we inject excitatory ($I^{\text{ext}}_I>0$) or inhibitory ($I^{\text{ext}}_I<0$) drive into the inhibitory population when the system is at its equilibrium (with parameters $w_{EE}=6.4$, $w_{EI}=4.8$, $w_{IE}=6.$, $w_{II}=1.2$, $I_{E}^{\text{ext}}=0.8$, $\tau_I = 0.8$, and $I^{\text{ext}}_I=0$). How does the firing rate of the $I$ population changes with excitatory vs inhibitory drive into the inhibitory population? ###Code # @title # @markdown Make sure you execute this cell to enable the widget! 
pars = default_pars(T=50., dt=0.1) pars['wEE'], pars['wEI'] = 6.4, 4.8 pars['wIE'], pars['wII'] = 6.0, 1.2 pars['I_ext_E'] = 0.8 pars['tau_I'] = 0.8 def ISN_I_perturb(dI=0.1): Lt = len(pars['range_t']) pars['I_ext_I'] = np.zeros(Lt) pars['I_ext_I'][int(Lt / 2):] = dI pars['rE_init'], pars['rI_init'] = 0.6, 0.26 rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 1.5)) plt.plot(pars['range_t'], pars['I_ext_I'], 'k') plt.xlabel('t (ms)') plt.ylabel(r'$I_I^{\mathrm{ext}}$') plt.ylim(pars['I_ext_I'].min() - 0.01, pars['I_ext_I'].max() + 0.01) plt.show() plt.figure(figsize=(8, 4.5)) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rE[int(Lt / 2) - 1] * np.ones(Lt), 'b--') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.plot(pars['range_t'], rI[int(Lt / 2) - 1] * np.ones(Lt), 'r--') plt.ylim(0, 0.8) plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() _ = widgets.interact(ISN_I_perturb, dI=(-0.2, 0.21, .05)) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_cec4906e.py) --- Bonus 3: Fixed point and working memory The input into the neurons measured in the experiment is often very noisy ([links](http://www.scholarpedia.org/article/Stochastic_dynamical_systems)). Here, the noisy synaptic input current is modeled as an Ornstein-Uhlenbeck (OU)process, which has been discussed several times in the previous tutorials. ###Code # @markdown Make sure you execute this cell to enable the function my_OU and plot the input current! def my_OU(pars, sig, myseed=False): """ Expects: pars : parameter dictionary sig : noise amplitute myseed : random seed. int or boolean Returns: I : Ornstein-Uhlenbeck input current """ # Retrieve simulation parameters dt, range_t = pars['dt'], pars['range_t'] Lt = range_t.size tau_ou = pars['tau_ou'] # [ms] # set random seed if myseed: np.random.seed(seed=myseed) else: np.random.seed() # Initialize noise = np.random.randn(Lt) I_ou = np.zeros(Lt) I_ou[0] = noise[0] * sig # generate OU for it in range(Lt-1): I_ou[it+1] = (I_ou[it] + dt / tau_ou * (0. - I_ou[it]) + np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1]) return I_ou pars = default_pars(T=50) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 I_ou = my_OU(pars, sig=sig_ou, myseed=2020) plt.figure(figsize=(8, 5.5)) plt.plot(pars['range_t'], I_ou, 'b') plt.xlabel('Time (ms)') plt.ylabel(r'$I_{\mathrm{OU}}$') plt.show() ###Output _____no_output_____ ###Markdown With the default parameters, the system fluctuates around a resting state with the noisy input. ###Code # @markdown Execute this cell to plot activity with noisy input current pars = default_pars(T=100) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 pars['I_ext_E'] = my_OU(pars, sig=sig_ou, myseed=20201) pars['I_ext_I'] = my_OU(pars, sig=sig_ou, myseed=20202) pars['rE_init'], pars['rI_init'] = 0.1, 0.1 rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) ax = plt.subplot(111) ax.plot(pars['range_t'], rE, 'b', label='E population') ax.plot(pars['range_t'], rI, 'r', label='I population') ax.set_xlabel('t (ms)') ax.set_ylabel('Activity') ax.legend(loc='best') plt.show() ###Output _____no_output_____ ###Markdown Interactive Demo: Short pulse induced persistent activityThen, let's use a brief 10-ms positive current to the E population when the system is at its equilibrium. 
When this amplitude (SE below) is sufficiently large, a persistent activity is produced that outlasts the transient input. What is the firing rate of the persistent activity, and what is the critical input strength? Try to understand the phenomena from the above phase-plane analysis. ###Code # @title # @markdown Make sure you execute this cell to enable the widget! def my_inject(pars, t_start, t_lag=10.): """ Expects: pars : parameter dictionary t_start : pulse starts [ms] t_lag : pulse lasts [ms] Returns: I : extra pulse time """ # Retrieve simulation parameters dt, range_t = pars['dt'], pars['range_t'] Lt = range_t.size # Initialize I = np.zeros(Lt) # pulse timing N_start = int(t_start / dt) N_lag = int(t_lag / dt) I[N_start:N_start + N_lag] = 1. return I pars = default_pars(T=100) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 pars['I_ext_I'] = my_OU(pars, sig=sig_ou, myseed=2021) pars['rE_init'], pars['rI_init'] = 0.1, 0.1 # pulse I_pulse = my_inject(pars, t_start=20., t_lag=10.) L_pulse = sum(I_pulse > 0.) def WC_with_pulse(SE=0.): pars['I_ext_E'] = my_OU(pars, sig=sig_ou, myseed=2022) pars['I_ext_E'] += SE * I_pulse rE, rI = simulate_wc(**pars) plt.figure(figsize=(8, 5.5)) ax = plt.subplot(111) ax.plot(pars['range_t'], rE, 'b', label='E population') ax.plot(pars['range_t'], rI, 'r', label='I population') ax.plot(pars['range_t'][I_pulse > 0.], 1.0*np.ones(L_pulse), 'r', lw=3.) ax.text(25, 1.05, 'stimulus on', horizontalalignment='center', verticalalignment='bottom') ax.set_ylim(-0.03, 1.2) ax.set_xlabel('t (ms)') ax.set_ylabel('Activity') ax.legend(loc='best') plt.show() _ = widgets.interact(WC_with_pulse, SE=(0.0, 1.0, .05)) ###Output _____no_output_____
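###Markdown
If you prefer to scan the pulse amplitude programmatically rather than with the slider, the rough sketch below sweeps `SE` and reports the mean excitatory rate over the last 20 ms, which makes the jump at the critical input strength easy to spot. It reuses `my_OU`, `my_inject`, and `simulate_wc` exactly as defined above; the precise threshold you find will depend on the noise seed.
###Code
# Rough sketch: sweep the pulse amplitude SE and report the late-time E rate.
# Reuses my_OU, my_inject and simulate_wc from above; the exact critical
# amplitude depends on the noise realization (seed).
pars_sweep = default_pars(T=100)
pars_sweep['tau_ou'] = 1.  # [ms]
sig_ou = 0.1
pars_sweep['I_ext_I'] = my_OU(pars_sweep, sig=sig_ou, myseed=2021)
pars_sweep['rE_init'], pars_sweep['rI_init'] = 0.1, 0.1
I_pulse_sweep = my_inject(pars_sweep, t_start=20., t_lag=10.)

for SE in [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]:
    pars_sweep['I_ext_E'] = my_OU(pars_sweep, sig=sig_ou, myseed=2022) + SE * I_pulse_sweep
    rE_sweep, _ = simulate_wc(**pars_sweep)
    late_rate = rE_sweep[-int(20. / pars_sweep['dt']):].mean()  # mean rE over the last 20 ms
    print(f"SE = {SE:.1f} | mean rE over last 20 ms = {late_rate:.3f}")
###Output
_____no_output_____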
Cls4-Supervised Learning - 1/Case Study 1 - Solution.ipynb
###Markdown 1. We will use acoustic features to distinguish a male voice from female. Load the dataset from “voice.csv”, identify the target variable and do a one-hot encoding for the same. Split the dataset in train-test with 20% of the data kept aside for testing.[Hint: Refer to LabelEncoder documentation in scikit-learn] 2. Fit a logistic regression model and measure the accuracy on the test set.[Hint: Refer to Linear Models section in scikit-learn] 3. Compute the correlation matrix that describes the dependence between all predictors and identify the predictors that are highly correlated. Plot the correlation matrix using seaborn heatmap.[Hint: Explore dataframe methods to identify appropriate method] 4. Based on correlation remove those predictors that are correlated and fit a logistic regression model again and compare the accuracy with that of previous model.[Hint: Identify correlated variable pairs and remove one among them] ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns dfVoice = pd.read_csv("voice.csv") dfVoice.shape dfVoice.head() dfVoice.columns dfVoice.corr() dfVoice.label.unique() dfVoice.shape dfVoice.info() dfVoice['label']=dfVoice['label'].map({"male":1,"female":0}) dfVoice.info() dfVoice.sample(5) plt.figure(figsize=(15,10)) sns.heatmap(dfVoice.corr(),annot=True) features = ['sd','IQR','Q25','sp.ent','sfm','meanfun'] X=dfVoice[features] X.sample(5) y= dfVoice.label sns.countplot(dfVoice['label']) plt.show() from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression trainX, testX, trainy, testy = train_test_split(X,y,test_size=0.2) trainX.sample(5) trainy.sample(5) testy.sample(5) lrm = LogisticRegression(random_state=5) lrm.fit(trainX,trainy) pred_y = lrm.predict(testX) from sklearn.metrics import accuracy_score accuracy_score(testy,pred_y) newdf = pd.DataFrame({"Test":testy,"Prediction":pred_y}) X_all = dfVoice.drop('label',axis=1) train_X, test_X, train_y, test_y = train_test_split(X_all,y,test_size=0.2) lrm.fit(train_X, train_y) newpred_y = lrm.predict(test_X) accuracy_score(test_y,newpred_y) ###Output _____no_output_____
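###Markdown
As an optional extension of step 4, the correlated predictors can also be identified and dropped programmatically instead of by inspecting the heatmap. The sketch below is one common heuristic; the 0.9 threshold and the choice to drop the later column of each highly correlated pair are arbitrary assumptions, not part of the original case study.
###Code
# Optional sketch: drop one predictor from every highly correlated pair,
# then refit the logistic regression. The 0.9 threshold is an arbitrary choice.
corr = dfVoice.drop('label', axis=1).corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))  # keep upper triangle only
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
print("Dropping:", to_drop)

X_reduced = dfVoice.drop(columns=['label'] + to_drop)
trX, teX, trY, teY = train_test_split(X_reduced, y, test_size=0.2, random_state=5)
lrm2 = LogisticRegression(random_state=5)
lrm2.fit(trX, trY)
print("Accuracy without highly correlated predictors:",
      accuracy_score(teY, lrm2.predict(teX)))
###Output
_____no_output_____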
src/he_graphene_V.ipynb
###Markdown Comparing Interactions ###Code import numpy as np import matplotlib.pyplot as plt from matplotlib import cm import matplotlib.gridspec as gridspec from matplotlib.ticker import (MultipleLocator, FormatStrFormatter, AutoMinorLocator) from scipy.interpolate import interp1d from scipy import interp, arange, array, exp from scipy.interpolate import InterpolatedUnivariateSpline import sys,os import dgutils.colors as colortools %matplotlib inline %config InlineBackend.figure_format = 'retina' # plot style plot_style = {'notebook':'../include/notebook.mplstyle','aps':'../include/aps.mplstyle'} plt.style.reload_library() plt.style.use(plot_style['aps']) figsize = plt.rcParams['figure.figsize'] plt.rcParams['text.latex.preamble'] = f'\input{{{os.getcwd()}/../include/texheader}}' He_Graphene_dat = np.loadtxt("../data/he_potential.txt") QMC_r = np.trim_zeros(He_Graphene_dat[:,0]) szalewicz_V = np.trim_zeros(He_Graphene_dat[:,1]) DFT_r = He_Graphene_dat[:,2] He_He_V_DFT = He_Graphene_dat[:,3] r_MP2 = np.trim_zeros(He_Graphene_dat[:,4]) V_He_MP2 = He_Graphene_dat[:len(r_MP2),5] QMC_z = np.trim_zeros(He_Graphene_dat[:,6]) QMC_V = He_Graphene_dat[:len(QMC_z),7] DFT_Z = np.trim_zeros(He_Graphene_dat[:,8]) He_Graphene_DFT = He_Graphene_dat[:len(DFT_Z),9] z_MP2 = np.trim_zeros(He_Graphene_dat[:,10]) V_G_MP2 = He_Graphene_dat[:len(z_MP2),11] # generate the QMC He-He interaction data import heprops QMC_r = np.linspace(2,5,1000) szalewicz_V = heprops.potential.szalewicz_2012(QMC_r) z_MP2_no_repeat = np.concatenate((z_MP2[0:4] , z_MP2[6:11], z_MP2[12:18], z_MP2[20:])) V_G_MP2_no_repeat = np.concatenate((V_G_MP2[0:4] , V_G_MP2[6:11], V_G_MP2[12:18], V_G_MP2[20:])) s = InterpolatedUnivariateSpline(z_MP2_no_repeat, V_G_MP2_no_repeat, k=2) V_MP2_G_interp = s(DFT_Z) colors = ["#d43e4e", "#abdda4", "#3288bc"] ###Output _____no_output_____ ###Markdown Plotting ###Code fig, ax = plt.subplots(2, 1, figsize = [3.4039, 2*2.10373], constrained_layout=True) ax[0].plot(QMC_r, szalewicz_V, color = colors[0]) ax[0].plot(DFT_r, He_He_V_DFT, color = colors[2]) ax[0].plot(r_MP2, V_He_MP2, label = 'MP2', color = colors[1], linewidth=1) ax[0].set_xlim(2.2, 3.9) ax[0].set_ylim(-50,95) #ax[0].set_title("He-He") ax[0].set_xlabel(r'$\alabel{r}{\angstrom}$') ax[0].set_ylabel(r"$\alabel{\mathcal{V}_{\rm He-He}}{\kelvin}$") ax[0].annotate('(a)', xy=(-0.18,1),ha='left', va='top', xycoords='axes fraction') #ax[0].xaxis.set_minor_locator(MultipleLocator(0.5)) #ax[0].tick_params(which='minor', direction="out", top = False, bottom=True, left=False, right=True, labelleft = False, # labelright = True, length=2.5) #ax[0].tick_params(which='major', direction="out", top = False, bottom=True, left=True, right=False) im = plt.imread('../plots/V_He_He.png',format='png') newax = fig.add_axes([0.33, 0.85, 0.225, 0.20]) newax.imshow(im,interpolation='none') newax.axis('off') ax[0].annotate(r'$r$', xy=(2.8,77),xytext=(2.8, 77), xycoords='data', ha='right', va='top') ax[1].plot(QMC_z, QMC_V, label = 'empirical', color = colors[0]) ax[1].plot(DFT_Z, He_Graphene_DFT, label = 'DFT', color = colors[2]) #ax[1].plot(zz,V1, label = 'Graphite') ax[1].plot(DFT_Z, V_MP2_G_interp, color = colors[1], linewidth=1) #ax[1].plot(z_Graphite_Composite,V_Graphite_Composite, label = 'Graphite?', color = colors[2], linewidth=1) ax[1].set_xlim(2.01, 5.7) ax[1].set_ylim(-400,425) ax[1].set_xlabel(r'$\alabel{z}{\angstrom}$') ax[1].set_ylabel(r"$\alabel{\mathcal{V}_{\rm He-\graphene}}{\kelvin}$") ax[1].yaxis.set_label_position("left") #ax[1].set_title("He-Graphene") 
ax[1].xaxis.set_minor_locator(MultipleLocator(0.5)) ax[1].annotate('(b)', xy=(-0.18,1),ha='left', va='top', xycoords='axes fraction') #ax[1].tick_params(which='minor', direction="out", top = False, bottom=True, left=True, right=False, labelleft = True, # labelright = False, length=2.5) #ax[1].tick_params(which='major', direction="out", top = False, bottom=True, left=True, right=False, labelleft = True, # labelright = False, length=4) im = plt.imread('../plots/V_He_graphene.png',format='png') newax = fig.add_axes([0.3, 0.275, 0.35, 0.20]) newax.imshow(im,interpolation='none') newax.axis('off') ax[1].annotate(r'$z$', xy=(3.375,260),xytext=(3.375, 260), xycoords='data', ha='right', va='top') #fig.subplots_adjust(wspace=0.03, top=0.7) #fig.subplots_adjust(right = 0.88, hspace=0.3) #fig.tight_layout() handles2, labels2 = ax[0].get_legend_handles_labels() handles, labels = ax[1].get_legend_handles_labels() handles.extend(handles2) labels.extend(labels2) fig.legend(handles, labels, bbox_to_anchor=(0.95,0.98), frameon = False, handlelength = 1) #fig.legend(handles, labels, loc='upper center', frameon = True, ncol=3, handlelength = 1) #plt.legend() #plt.show() plt.savefig('../plots/He_Graphene_Potential.pdf',transparent=True) plt.savefig('../plots/He_Graphene_Potential.png',transparent=True, dpi=300) ###Output _____no_output_____ ###Markdown Alternate: Combined final results for V and V' with interaction ###Code from scipy import interpolate func_V_He_MP2_interp = interpolate.interp1d(r_MP2, V_He_MP2, kind='cubic') r_MP2_interp = np.linspace(np.min(r_MP2),np.max(r_MP2),1000) V_He_MP2_interp = func_V_He_MP2_interp(r_MP2_interp) func_He_He_V_DFT = interpolate.interp1d(DFT_r, He_He_V_DFT,kind='cubic') DFT_r_interp = np.linspace(np.min(DFT_r),np.max(DFT_r),1000) He_He_V_DFT_interp = func_He_He_V_DFT(DFT_r_interp) ###Output _____no_output_____ ###Markdown Results for V and V' ###Code results = {} results['HF'] = [69.7 ,-2.08 ] results['QMC'] = [54.3, -2.76] results['DFT'] = [21.4, -1.36] results['MP2'] = [51.5 , -1.97] rvals = [np.sqrt(3)*1.42,3*1.42] col = {} col['HF'] = None col['QMC'] = colors[0] col['DFT'] = colors[2] col['MP2'] = colors[1] props = {} props['HF'] = {'mfc':'White', 'mec':colors[0], 'ms':3, 'label':'HF', 'marker':'^', 'mew':0.6,'zorder':-2, 'lw':0} props['QMC'] = {'mfc':colortools.get_alpha_hex(colors[0],0.8), 'mec':colors[0], 'ms':3, 'label':'QMC', 'marker':'o', 'mew':0.6,'zorder':2, 'lw':0} props['DFT'] = {'mfc':colortools.get_alpha_hex(colors[2],0.8), 'mec':colors[2], 'ms':3, 'label':'DFT', 'marker':'s', 'mew':0.6,'zorder':-2, 'lw':0} props['MP2'] = {'mfc':colortools.get_alpha_hex(colors[1],0.8), 'mec':colors[1], 'ms':3, 'label':'MP2', 'marker':'D', 'mew':0.6,'zorder':-2, 'lw':0} methods = ['HF','QMC','DFT','MP2'] aₒ = 1.42 from mpl_toolkits.axes_grid1.inset_locator import inset_axes factor = 1440/1388 fig, ax = plt.subplots(1, 1) ax.plot(QMC_r, szalewicz_V, color = colors[0], lw=1) ax.plot(DFT_r_interp, He_He_V_DFT_interp, color = colors[2], lw=1) ax.plot(r_MP2_interp, V_He_MP2_interp, color = colors[1], lw=1) ax.set_xlim(2.2, 3.1*aₒ) ax.set_ylim(-50,74) ax.set_xlabel('Separation' + r'$\,\,\alabel{r}{\angstrom}$') ax.set_ylabel('Int. 
Potential' + r'$\,\,\alabel{\mathcal{V}_{\rm He-He}}{\kelvin}$') ## Plot the raw data as points for method in methods: for i in range(2): ax.plot(rvals[i],results[method][i],**props[method]) handles, labels = ax.get_legend_handles_labels() axins1 = ax.inset_axes([.69, .04, .3, .25]) axins1.plot(QMC_r, szalewicz_V, color = colors[0], lw=1) axins1.plot(DFT_r_interp, He_He_V_DFT_interp, color = colors[2], lw=1) axins1.plot(r_MP2_interp, V_He_MP2_interp, color = colors[1], lw=1) for method in methods: axins1.plot(rvals[1],results[method][1],**props[method]) rmin,rmax = 3*aₒ-0.05,3*aₒ+0.05 axins1.set_xlim(rmin, rmax) axins1.set_ylim(-4.5,-0.25) axins1.set_xticklabels('') #axins1.set_yticklabels('') rec = ax.indicate_inset_zoom(axins1) ticks = [np.sqrt(3)*aₒ, 2*aₒ, np.sqrt(7)*aₒ, 3*aₒ] tick_labels = [r'$\sqrt{3}a_0$', r'$2a_0$',r'$\sqrt{7}a_0$',r'$3a_0$'] plt.xticks(ticks,tick_labels) ax.axvline(x=np.sqrt(3)*aₒ,ls='--', color='grey', lw=0.4, zorder=-10) ax.axvline(x=3*aₒ,ls='--', color='grey', lw=0.4, zorder=-10) axins1.axvline(x=3*aₒ,ls='--', color='grey', lw=0.4, zorder=-10) axins1.set_xticks([3*aₒ]) plt.legend(handles[::2],labels[::2], loc=(0.76,0.6)) im = plt.imread('../plots/V_Vp_He_graphene.png',format='png') newax = fig.add_axes([0.2, 0.4, 0.48, 0.48/factor]) newax.imshow(im,interpolation='none') newax.axis('off') ax.annotate(r'$V^\prime$', xy=(2.96,35.5),xytext=(2.96, 35.5), color='grey', xycoords='data', ha='right', va='top',fontsize=6) ax.annotate(r'$V$',xy=(3.05, 20.5), color='grey', zorder=20, xycoords='data', ha='right', va='top',fontsize=6) ax.annotate(r'$V$',xy=(3.05, 50.5), color='grey', zorder=20, xycoords='data', ha='right', va='top',fontsize=6) plt.savefig('../plots/He_Graphene_Potential_data.pdf',transparent=True) #plt.savefig('../plots/He_Graphene_Potential.png',transparent=True, dpi=300) ###Output _____no_output_____
Udemy/.ipynb_checkpoints/most_frequent-checkpoint.ipynb
###Markdown Common elements in two sorted arraysA function that returns the common elements (as a list) between two sorted lists of integers, using a two-pointer scan in O(max(n, m)) time.
###Code
# Implement your function below.
def common_elements(list1, list2):
    result = []
    pointer1 = 0
    pointer2 = 0
    while pointer1 < len(list1) and pointer2 < len(list2):
        if list1[pointer1] == list2[pointer2]:
            result.append(list1[pointer1])
            pointer1 += 1
            pointer2 += 1
        elif list1[pointer1] > list2[pointer2]:
            pointer2 += 1
        else:
            pointer1 += 1
    return result

# NOTE: The following input values will be used for testing your solution.
list_a1 = [1, 3, 4, 6, 7, 9]
list_a2 = [1, 2, 4, 5, 9, 10]
# common_elements(list_a1, list_a2) should return [1, 4, 9] (a list).

list_b1 = [1, 2, 9, 10, 11, 12]
list_b2 = [0, 1, 2, 3, 4, 5, 8, 9, 10, 12, 14, 15]
# common_elements(list_b1, list_b2) should return [1, 2, 9, 10, 12] (a list).

list_c1 = [0, 1, 2, 3, 4, 5]
list_c2 = [6, 7, 8, 9, 10, 11]
# common_elements(list_c1, list_c2) should return [] (an empty list).

print(common_elements(list_a1, list_a2))
print(common_elements(list_b1, list_b2))
print(common_elements(list_c1, list_c2))
###Output
[1, 4, 9]
[1, 2, 9, 10, 12]
[]
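###Markdown
For comparison, here is a shorter set-based version. It runs in expected O(n + m) time but does not exploit the sorted order, uses extra space for the two sets, and collapses repeated values to a single entry, so the two-pointer scan above remains the better fit for the stated constraints.
###Code
# Alternative sketch: set intersection instead of a two-pointer scan.
# Note the behavioral difference: duplicate values are collapsed to one entry.
def common_elements_set(list1, list2):
    return sorted(set(list1) & set(list2))

print(common_elements_set(list_a1, list_a2))  # [1, 4, 9]
print(common_elements_set(list_b1, list_b2))  # [1, 2, 9, 10, 12]
print(common_elements_set(list_c1, list_c2))  # []
###Output
_____no_output_____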
rel_ext_02_experiments.ipynb
###Markdown Relation extraction using distant supervision: Experiments ###Code __author__ = "Bill MacCartney" __version__ = "CS224U, Stanford, Spring 2019" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. 
###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train, and a model factory, which is a function which initializes an `sklearn` classifier. It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by prediction). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. 
###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.877 0.386 0.699 407 7057 author 0.810 0.519 0.728 657 7307 capital 0.652 0.238 0.484 126 6776 contains 0.778 0.605 0.736 4487 11137 film_performance 0.782 0.597 0.736 984 7634 founders 0.822 0.414 0.686 469 7119 genre 0.517 0.151 0.348 205 6855 has_sibling 0.858 0.251 0.578 625 7275 has_spouse 0.892 0.338 0.672 754 7404 is_a 0.705 0.217 0.486 618 7268 nationality 0.578 0.192 0.412 386 7036 parents 0.827 0.538 0.747 390 7040 place_of_birth 0.558 0.206 0.415 282 6932 place_of_death 0.415 0.105 0.261 209 6859 profession 0.659 0.188 0.439 308 6958 worked_at 0.705 0.261 0.526 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.715 0.325 0.560 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. ###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.877 0.386 0.699 407 7057 author 0.810 0.519 0.728 657 7307 capital 0.652 0.238 0.484 126 6776 contains 0.778 0.605 0.736 4487 11137 film_performance 0.782 0.597 0.736 984 7634 founders 0.822 0.414 0.686 469 7119 genre 0.517 0.151 0.348 205 6855 has_sibling 0.858 0.251 0.578 625 7275 has_spouse 0.892 0.338 0.672 754 7404 is_a 0.705 0.217 0.486 618 7268 nationality 0.578 0.192 0.412 386 7036 parents 0.827 0.538 0.747 390 7040 place_of_birth 0.558 0.206 0.415 282 6932 place_of_death 0.415 0.105 0.261 209 6859 profession 0.659 0.188 0.439 308 6958 worked_at 0.705 0.261 0.526 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.715 0.325 0.560 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.556 Taluks 2.483 Córdoba 2.481 Valais ..... ..... -1.316 Cook -1.438 he -1.459 who Highest and lowest feature weights for relation author: 2.699 book 2.566 musical 2.507 books ..... ..... -2.791 1945 -2.885 17th -2.998 1818 Highest and lowest feature weights for relation capital: 3.700 capital 1.718 km 1.459 posted ..... ..... 
-1.165 southwestern -1.612 Dehradun -1.870 state Highest and lowest feature weights for relation contains: 2.288 bordered 2.119 Ontario 2.021 third-largest ..... ..... -2.347 Midlands -2.496 who -2.718 Mile Highest and lowest feature weights for relation film_performance: 4.404 alongside 4.049 starring 3.604 movie ..... ..... -1.578 poem -1.718 tragedy -1.756 or Highest and lowest feature weights for relation founders: 3.993 founded 3.865 founder 3.435 co-founder ..... ..... -1.587 band -1.673 novel -1.764 Bauhaus Highest and lowest feature weights for relation genre: 2.792 series 2.776 movie 2.635 album ..... ..... -1.326 's -1.410 and -1.664 at Highest and lowest feature weights for relation has_sibling: 5.362 brother 4.208 sister 2.790 Marlon ..... ..... -1.350 alongside -1.414 Her -1.999 formed Highest and lowest feature weights for relation has_spouse: 5.038 wife 4.283 widow 4.221 married ..... ..... -1.227 which -1.265 reported -1.298 Sir Highest and lowest feature weights for relation is_a: 2.789 2.692 order 2.467 philosopher ..... ..... -1.741 birds -3.094 cat -4.383 characin Highest and lowest feature weights for relation nationality: 2.932 born 1.859 leaving 1.839 Set ..... ..... -1.406 or -1.608 1961 -1.710 American Highest and lowest feature weights for relation parents: 4.626 daughter 4.525 father 4.495 son ..... ..... -1.487 defeated -1.524 Sonam -1.584 filmmaker Highest and lowest feature weights for relation place_of_birth: 3.997 born 3.004 birthplace 2.905 mayor ..... ..... -1.319 American -1.412 or -1.507 and Highest and lowest feature weights for relation place_of_death: 2.330 died 1.821 where 1.660 living ..... ..... -1.225 as -1.232 and -1.283 created Highest and lowest feature weights for relation profession: 3.338 2.538 philosopher 2.377 American ..... ..... -1.298 Texas -1.302 in -1.972 on Highest and lowest feature weights for relation worked_at: 3.077 CEO 2.922 professor 2.818 employee ..... ..... -1.406 bassist -1.684 family -1.730 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. 
Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are _true_ but _absent from the KB_, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. ###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Pakistan', obj='Lahore') 1.000 KBTriple(rel='adjoins', sbj='Lahore', obj='Pakistan') 1.000 KBTriple(rel='adjoins', sbj='Sicily', obj='Italy') 1.000 KBTriple(rel='adjoins', sbj='Italy', obj='Sicily') 1.000 KBTriple(rel='adjoins', sbj='Great_Britain', obj='Europe') 1.000 KBTriple(rel='adjoins', sbj='Europe', obj='Great_Britain') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='Brave_New_World') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj='The_Doors_of_Perception', obj='Aldous_Huxley') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj='Brave_New_World', obj='Aldous_Huxley') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Charles_Dickens', obj='Oliver_Twist') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Pakistan', obj='Lahore') 1.000 KBTriple(rel='capital', sbj='Lahore', obj='Pakistan') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='contains', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='contains', sbj='Tenerife', obj='Canary_Islands') 1.000 KBTriple(rel='contains', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='contains', sbj='Melbourne', obj='Australia') 1.000 KBTriple(rel='contains', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='Edmonton', obj='Canada') 1.000 KBTriple(rel='contains', sbj='Pakistan', obj='Lahore') 1.000 KBTriple(rel='contains', sbj='Australia', 
obj='Melbourne') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Hrithik_Roshan', obj='Kaho_Naa..._Pyaar_Hai') 1.000 KBTriple(rel='film_performance', sbj='Kaho_Naa..._Pyaar_Hai', obj='Hrithik_Roshan') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') Highest probability examples for relation genre: 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 0.997 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.989 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.989 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.986 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.986 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.953 KBTriple(rel='genre', sbj='Ronald_Reagan', obj='Jurassic_Park_III') 0.953 KBTriple(rel='genre', sbj='Jurassic_Park_III', obj='Ronald_Reagan') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Aretha_Franklin', obj='Dionne_Warwick') 1.000 KBTriple(rel='has_sibling', sbj='Dionne_Warwick', obj='Aretha_Franklin') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 
KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') 1.000 KBTriple(rel='is_a', sbj='Hibiscus', obj='Malvaceae') 1.000 KBTriple(rel='is_a', sbj='Malvaceae', obj='Hibiscus') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Suryavarman_II') 1.000 KBTriple(rel='nationality', sbj='Suryavarman_II', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='nationality', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') Highest probability examples for relation parents: ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Spring 2020" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. 
utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train, and a model factory, which is a function which initializes an `sklearn` classifier. It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. 
It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by prediction). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. ###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. ###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. 
What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.511 Córdoba 2.467 Taluks 2.434 Valais ..... ..... -1.143 for -1.186 Egypt -1.277 America Highest and lowest feature weights for relation author: 3.055 author 3.032 books 2.342 by ..... ..... -2.002 directed -2.019 or -2.211 poetry Highest and lowest feature weights for relation capital: 3.922 capital 2.163 especially 2.155 city ..... ..... -1.238 and -1.263 being -1.959 borough Highest and lowest feature weights for relation contains: 2.768 bordered 2.716 third-largest 2.219 tiny ..... ..... -3.502 Midlands -3.954 Siege -3.969 destroyed Highest and lowest feature weights for relation film_performance: 4.004 starring 3.731 alongside 3.199 opposite ..... ..... -1.702 then -1.840 She -1.889 Genghis Highest and lowest feature weights for relation founders: 3.677 founded 3.276 founder 2.779 label ..... ..... -1.795 William -1.850 Griffith -1.854 Wilson Highest and lowest feature weights for relation genre: 3.092 series 2.800 game 2.622 album ..... ..... -1.296 animated -1.434 and -1.949 at Highest and lowest feature weights for relation has_sibling: 5.196 brother 3.933 sister 2.747 nephew ..... ..... -1.293 ' -1.312 from -1.437 including Highest and lowest feature weights for relation has_spouse: 5.319 wife 4.652 married 4.617 husband ..... ..... -1.528 between -1.559 MTV -1.599 Terri Highest and lowest feature weights for relation is_a: 3.182 family 2.898 philosopher 2.623 ..... ..... -1.411 now -1.441 beans -1.618 at Highest and lowest feature weights for relation nationality: 2.887 born 1.933 president 1.843 caliph ..... ..... -1.467 or -1.540 ; -1.729 American Highest and lowest feature weights for relation parents: 5.108 son 4.437 father 4.400 daughter ..... ..... -1.053 a -1.070 England -1.210 in Highest and lowest feature weights for relation place_of_birth: 3.980 born 2.843 birthplace 2.702 mayor ..... ..... -1.276 Mughal -1.392 or -1.426 and Highest and lowest feature weights for relation place_of_death: 2.161 assassinated 2.027 died 1.837 Germany ..... ..... -1.246 ; -1.256 as -1.474 Siege Highest and lowest feature weights for relation profession: 3.148 2.727 American 2.635 philosopher ..... ..... -1.212 at -1.348 in -1.986 on Highest and lowest feature weights for relation worked_at: 3.107 president 2.913 head 2.743 professor ..... ..... -1.134 province -1.150 author -1.714 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. 
Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are _true_ but _absent from the KB_, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. ###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Australia', obj='Sydney') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='Australia') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='adjoins', sbj='United_Arab_Emirates', obj='Dubai') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='adjoins', sbj='New_South_Wales', obj='Sydney') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj="Euclid's_Elements", obj='Euclid') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe') 1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Bandung', obj='West_Java') 1.000 
KBTriple(rel='capital', sbj='West_Java', obj='Bandung') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Delhi', obj='India') 1.000 KBTriple(rel='contains', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand') 1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='contains', sbj='India', obj='Delhi') 1.000 KBTriple(rel='contains', sbj='Uttarakhand', obj='India') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') 1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines') 1.000 KBTriple(rel='contains', sbj='Canary_Islands', obj='Tenerife') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='film_performance', sbj='Homer', obj='Iliad') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') Highest probability examples for relation genre: 1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.997 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.991 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.991 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.981 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize') 0.981 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 
KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='has_sibling', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='nationality', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district') 1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu') Highest probability examples for relation parents: ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Spring 2020" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. 
We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. 
We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train (by default, 'train'), and a model factory, which is a function which initializes an `sklearn` classifier (by default, a logistic regression classifier). It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by relation). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. ###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. 
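The default configuration, run in the next cell, pairs `simple_bag_of_words_featurizer` with a logistic-regression model. Once that baseline is in place, the easiest knob to turn is the `featurizers` list. As an illustration, the sketch below is a directional variant of the bag-of-words featurizer: it marks each middle word with the direction of the example it came from, forward (subject first) or reverse (object first), which is exactly the distinction the simple featurizer ignores. It uses only the corpus methods already shown above, but treat it as an untested sketch rather than part of `rel_ext`.

###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
    # Same idea as simple_bag_of_words_featurizer, but each middle word is
    # tagged with the direction of the example it was found in.
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter['FWD:' + word] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter['REV:' + word] += 1
    return feature_counter

# Hypothetical follow-up experiment, once the baseline below has been run:
# _ = rel_ext.experiment(
#     splits,
#     featurizers=[directional_bag_of_words_featurizer])
###Output _____no_output_____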
###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.511 Córdoba 2.467 Taluks 2.434 Valais ..... ..... -1.143 for -1.186 Egypt -1.277 America Highest and lowest feature weights for relation author: 3.055 author 3.032 books 2.342 by ..... ..... -2.002 directed -2.019 or -2.211 poetry Highest and lowest feature weights for relation capital: 3.922 capital 2.163 especially 2.155 city ..... ..... -1.238 and -1.263 being -1.959 borough Highest and lowest feature weights for relation contains: 2.768 bordered 2.716 third-largest 2.219 tiny ..... ..... -3.502 Midlands -3.954 Siege -3.969 destroyed Highest and lowest feature weights for relation film_performance: 4.004 starring 3.731 alongside 3.199 opposite ..... ..... -1.702 then -1.840 She -1.889 Genghis Highest and lowest feature weights for relation founders: 3.677 founded 3.276 founder 2.779 label ..... ..... -1.795 William -1.850 Griffith -1.854 Wilson Highest and lowest feature weights for relation genre: 3.092 series 2.800 game 2.622 album ..... ..... -1.296 animated -1.434 and -1.949 at Highest and lowest feature weights for relation has_sibling: 5.196 brother 3.933 sister 2.747 nephew ..... ..... -1.293 ' -1.312 from -1.437 including Highest and lowest feature weights for relation has_spouse: 5.319 wife 4.652 married 4.617 husband ..... ..... -1.528 between -1.559 MTV -1.599 Terri Highest and lowest feature weights for relation is_a: 3.182 family 2.898 philosopher 2.623 ..... ..... -1.411 now -1.441 beans -1.618 at Highest and lowest feature weights for relation nationality: 2.887 born 1.933 president 1.843 caliph ..... ..... -1.467 or -1.540 ; -1.729 American Highest and lowest feature weights for relation parents: 5.108 son 4.437 father 4.400 daughter ..... 
..... -1.053 a -1.070 England -1.210 in Highest and lowest feature weights for relation place_of_birth: 3.980 born 2.843 birthplace 2.702 mayor ..... ..... -1.276 Mughal -1.392 or -1.426 and Highest and lowest feature weights for relation place_of_death: 2.161 assassinated 2.027 died 1.837 Germany ..... ..... -1.246 ; -1.256 as -1.474 Siege Highest and lowest feature weights for relation profession: 3.148 2.727 American 2.635 philosopher ..... ..... -1.212 at -1.348 in -1.986 on Highest and lowest feature weights for relation worked_at: 3.107 president 2.913 head 2.743 professor ..... ..... -1.134 province -1.150 author -1.714 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are *true* but *absent from the KB*, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. 
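To make the "sort the results by probability" step concrete, the sketch below scores a single candidate `KBTriple` by hand. It assumes the vectorizer returned by `train_models()` behaves like a fitted `sklearn` `DictVectorizer` and that each per-relation model supports `predict_proba()` (the default logistic-regression models do); the helper name and the `vectorizer`/`model` variables are hypothetical, since `rel_ext.find_new_relation_instances()`, called in the next cell, wraps all of this up for us.

###Code
# Hypothetical sketch of the per-candidate scoring step; `vectorizer` and
# `model` are assumed to come from a train_models() result and are not
# actual rel_ext names.
def score_candidate(kbt, corpus, vectorizer, model,
                    featurizers=(simple_bag_of_words_featurizer,)):
    counts = Counter()
    for featurizer in featurizers:
        featurizer(kbt, corpus, counts)
    X = vectorizer.transform([counts])
    return model.predict_proba(X)[0, 1]  # probability of the positive class

# Candidate triples would then be ranked by this score, highest first:
# ranked = sorted(
#     candidates, reverse=True,
#     key=lambda kbt: score_candidate(kbt, corpus, vectorizer, model))
###Output _____no_output_____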
###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Australia', obj='Sydney') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='Australia') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='adjoins', sbj='United_Arab_Emirates', obj='Dubai') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='adjoins', sbj='New_South_Wales', obj='Sydney') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj="Euclid's_Elements", obj='Euclid') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe') 1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Bandung', obj='West_Java') 1.000 KBTriple(rel='capital', sbj='West_Java', obj='Bandung') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Delhi', obj='India') 1.000 KBTriple(rel='contains', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand') 1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='contains', sbj='India', obj='Delhi') 1.000 KBTriple(rel='contains', sbj='Uttarakhand', obj='India') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') 1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines') 1.000 KBTriple(rel='contains', sbj='Canary_Islands', obj='Tenerife') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', 
obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='film_performance', sbj='Homer', obj='Iliad') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') Highest probability examples for relation genre: 1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.997 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.991 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.991 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.981 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize') 0.981 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='has_sibling', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 
KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='nationality', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district') 1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu') Highest probability examples for relation parents: ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Spring 2020" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. 
The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) print(featurized[0]['is_a'].shape) print(featurized[0]['capital'].shape) ###Output (27282, 34564) (25262, 34564) ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train, and a model factory, which is a function which initializes an `sklearn` classifier. It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) train_result.keys() ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by prediction). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. 
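For reference, the F0.5-score reported here is the F-beta measure with beta = 0.5, i.e. F0.5 = 1.25 * P * R / (0.25 * P + R), which emphasizes precision over recall. As a minimal sketch (not the `rel_ext` implementation), and assuming `predictions` and `true_labels` each map a relation name to parallel lists of boolean labels, the macro-average could be reproduced with `sklearn` as follows: ###Code
import numpy as np
from sklearn.metrics import fbeta_score

def macro_f05_sketch(predictions, true_labels):
    # Per-relation F0.5 (beta=0.5 favors precision), then an unweighted
    # (macro) average across relations. Assumes boolean label lists per relation.
    scores = [fbeta_score(true_labels[rel], predictions[rel], beta=0.5)
              for rel in predictions]
    return np.mean(scores)
###Output _____no_output_____ ###Markdown With that in mind, here is the evaluation on the dev split: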
###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.877 0.386 0.699 407 7057 author 0.810 0.519 0.728 657 7307 capital 0.652 0.238 0.484 126 6776 contains 0.778 0.605 0.736 4487 11137 film_performance 0.782 0.597 0.736 984 7634 founders 0.822 0.414 0.686 469 7119 genre 0.517 0.151 0.348 205 6855 has_sibling 0.858 0.251 0.578 625 7275 has_spouse 0.892 0.338 0.672 754 7404 is_a 0.705 0.217 0.486 618 7268 nationality 0.578 0.192 0.412 386 7036 parents 0.827 0.538 0.747 390 7040 place_of_birth 0.558 0.206 0.415 282 6932 place_of_death 0.415 0.105 0.261 209 6859 profession 0.659 0.188 0.439 308 6958 worked_at 0.705 0.261 0.526 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.715 0.325 0.560 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. ###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.856 0.396 0.695 407 7057 author 0.817 0.522 0.734 657 7307 capital 0.722 0.206 0.481 126 6776 contains 0.791 0.606 0.746 4487 11137 film_performance 0.798 0.598 0.748 984 7634 founders 0.802 0.422 0.679 469 7119 genre 0.600 0.161 0.388 205 6855 has_sibling 0.881 0.250 0.585 625 7275 has_spouse 0.882 0.326 0.658 754 7404 is_a 0.676 0.230 0.487 618 7268 nationality 0.624 0.163 0.399 386 7036 parents 0.852 0.518 0.755 390 7040 place_of_birth 0.640 0.202 0.447 282 6932 place_of_death 0.667 0.096 0.304 209 6859 profession 0.600 0.205 0.433 308 6958 worked_at 0.733 0.254 0.533 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.746 0.322 0.567 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.556 Córdoba 2.332 Taluks 2.253 Valais ..... ..... -1.341 capital -1.593 Cook -1.614 America Highest and lowest feature weights for relation author: 3.011 author 2.753 books 2.465 short ..... ..... -2.149 or -2.198 no -3.970 1945 Highest and lowest feature weights for relation capital: 2.679 capital 1.822 especially 1.692 posted ..... ..... 
-1.789 Madras -1.901 Province -1.906 Isfahan Highest and lowest feature weights for relation contains: 2.618 third-largest 2.506 bordered 2.444 attended ..... ..... -2.281 film -2.306 who -2.335 any Highest and lowest feature weights for relation film_performance: 3.935 starring 3.587 alongside 3.516 opposite ..... ..... -1.750 [ -1.785 members -1.792 comedian Highest and lowest feature weights for relation founders: 4.209 founder 3.982 founded 3.731 co-founder ..... ..... -1.419 novel -1.447 state -1.897 writing Highest and lowest feature weights for relation genre: 3.103 2.633 movie 2.632 series ..... ..... -1.330 ; -1.418 and -1.758 at Highest and lowest feature weights for relation has_sibling: 4.933 brother 4.297 sister 2.904 Marlon ..... ..... -1.313 starring -1.589 fifteen-year-old -1.650 Her Highest and lowest feature weights for relation has_spouse: 5.258 wife 4.354 married 4.320 widow ..... ..... -1.306 team -1.508 In -1.974 unfaithful Highest and lowest feature weights for relation is_a: 3.613 2.889 family 2.252 Family ..... ..... -1.957 birds -3.124 Bombus -3.228 widespread Highest and lowest feature weights for relation nationality: 2.864 born 1.918 Prince 1.857 leaving ..... ..... -1.489 writing -1.596 2010 -1.859 state Highest and lowest feature weights for relation parents: 4.687 son 4.663 daughter 4.023 father ..... ..... -2.126 dead -2.205 Kelly -2.483 passes Highest and lowest feature weights for relation place_of_birth: 3.783 born 2.929 birthplace 2.821 mayor ..... ..... -1.502 and -1.713 state -2.103 Oldham Highest and lowest feature weights for relation place_of_death: 2.370 died 1.882 where 1.867 Emperor ..... ..... -1.250 and -1.381 state -1.498 ” Highest and lowest feature weights for relation profession: 4.148 2.459 American 2.406 English ..... ..... -1.400 Texas -1.457 elder -2.086 on Highest and lowest feature weights for relation worked_at: 3.124 professor 2.795 president 2.719 head ..... ..... -1.857 ” -1.897 supercomputing -2.467 father ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. 
Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are _true_ but _absent from the KB_, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. ###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Blue_Ridge_Mountains', obj='Appalachian_Mountains') 1.000 KBTriple(rel='adjoins', sbj='Appalachian_Mountains', obj='Blue_Ridge_Mountains') 1.000 KBTriple(rel='adjoins', sbj='Italy', obj='Palermo') 1.000 KBTriple(rel='adjoins', sbj='Palermo', obj='Italy') 1.000 KBTriple(rel='adjoins', sbj='United_Kingdom', obj='England') 1.000 KBTriple(rel='adjoins', sbj='England', obj='United_Kingdom') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Dante_Alighieri', obj='Divine_Comedy') 1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe') 1.000 KBTriple(rel='author', sbj='Allen_Ginsberg', obj='Howl') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451') 1.000 KBTriple(rel='author', sbj='Howl', obj='Allen_Ginsberg') 1.000 KBTriple(rel='author', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj='Harriet_Beecher_Stowe', obj="Uncle_Tom's_Cabin") Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Bandung', obj='West_Java') 1.000 KBTriple(rel='capital', sbj='West_Java', obj='Bandung') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines') 1.000 KBTriple(rel='contains', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='Canada', obj='Edmonton') 1.000 KBTriple(rel='contains', sbj='Philippines', obj='Palawan') 1.000 KBTriple(rel='contains', sbj='Pakistan', obj='Lahore') 1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='contains', sbj='Tenerife', 
obj='Canary_Islands') 1.000 KBTriple(rel='contains', sbj='Dhaka', obj='Bangladesh') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='Marvel_Comics', obj='Sam_Raimi') 1.000 KBTriple(rel='film_performance', sbj='Sam_Raimi', obj='Marvel_Comics') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') Highest probability examples for relation genre: 1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 1.000 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.998 KBTriple(rel='genre', sbj='Musician', obj='Multi-instrumentalist') 0.998 KBTriple(rel='genre', sbj='Multi-instrumentalist', obj='Musician') 0.989 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize') 0.989 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi') 0.986 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.986 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Sharmila_Tagore', obj='Saif_Ali_Khan') 1.000 KBTriple(rel='has_sibling', sbj='Saif_Ali_Khan', obj='Sharmila_Tagore') 1.000 KBTriple(rel='has_sibling', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_sibling', sbj='Louis_Chevrolet', obj='William_C._Durant') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 
1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Malvaceae', obj='Hibiscus') 1.000 KBTriple(rel='is_a', sbj='Hibiscus', obj='Malvaceae') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='nationality', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='nationality', sbj='April_Margera', obj='Jess_Margera') 0.999 KBTriple(rel='nationality', sbj='Pol_Pot', obj='Cambodia') 0.999 KBTriple(rel='nationality', sbj='Cambodia', obj='Pol_Pot') Highest probability examples for relation parents: ###Markdown Relation extraction using distant supervision: Experiments ###Code __author__ = "Bill MacCartney" __version__ = "CS224U, Stanford, Spring 2019" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. 
###Code from collections import Counter import os import rel_ext rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a data split on which to train, a list of featurizers, and model factory, which is a function which initializes an `sklearn` classifier. It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a test split, a list of featurizers, the vectorizer that was used during training, and a dictionary holding the models, one per relation. 
It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by prediction). ###Code predictions, assess_y = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. ###Code rel_ext.evaluate_predictions(predictions, assess_y) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.877 0.386 0.699 407 7057 author 0.810 0.519 0.728 657 7307 capital 0.652 0.238 0.484 126 6776 contains 0.778 0.605 0.736 4487 11137 film_performance 0.782 0.597 0.736 984 7634 founders 0.822 0.414 0.686 469 7119 genre 0.517 0.151 0.348 205 6855 has_sibling 0.858 0.251 0.578 625 7275 has_spouse 0.892 0.338 0.672 754 7404 is_a 0.705 0.217 0.486 618 7268 nationality 0.578 0.192 0.412 386 7036 parents 0.827 0.538 0.747 390 7040 place_of_birth 0.558 0.206 0.415 282 6932 place_of_death 0.415 0.105 0.261 209 6859 profession 0.659 0.188 0.439 308 6958 worked_at 0.705 0.261 0.526 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.715 0.325 0.560 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. ###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.877 0.386 0.699 407 7057 author 0.810 0.519 0.728 657 7307 capital 0.652 0.238 0.484 126 6776 contains 0.778 0.605 0.736 4487 11137 film_performance 0.782 0.597 0.736 984 7634 founders 0.822 0.414 0.686 469 7119 genre 0.517 0.151 0.348 205 6855 has_sibling 0.858 0.251 0.578 625 7275 has_spouse 0.892 0.338 0.672 754 7404 is_a 0.705 0.217 0.486 618 7268 nationality 0.578 0.192 0.412 386 7036 parents 0.827 0.538 0.747 390 7040 place_of_birth 0.558 0.206 0.415 282 6932 place_of_death 0.415 0.105 0.261 209 6859 profession 0.659 0.188 0.439 308 6958 worked_at 0.705 0.261 0.526 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.715 0.325 0.560 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. 
What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.556 Taluks 2.483 Córdoba 2.481 Valais ..... ..... -1.316 Cook -1.438 he -1.459 who Highest and lowest feature weights for relation author: 2.699 book 2.566 musical 2.507 books ..... ..... -2.791 1945 -2.885 17th -2.998 1818 Highest and lowest feature weights for relation capital: 3.700 capital 1.718 km 1.459 posted ..... ..... -1.165 southwestern -1.612 Dehradun -1.870 state Highest and lowest feature weights for relation contains: 2.288 bordered 2.119 Ontario 2.021 third-largest ..... ..... -2.347 Midlands -2.496 who -2.718 Mile Highest and lowest feature weights for relation film_performance: 4.404 alongside 4.049 starring 3.604 movie ..... ..... -1.578 poem -1.718 tragedy -1.756 or Highest and lowest feature weights for relation founders: 3.993 founded 3.865 founder 3.435 co-founder ..... ..... -1.587 band -1.673 novel -1.764 Bauhaus Highest and lowest feature weights for relation genre: 2.792 series 2.776 movie 2.635 album ..... ..... -1.326 's -1.410 and -1.664 at Highest and lowest feature weights for relation has_sibling: 5.362 brother 4.208 sister 2.790 Marlon ..... ..... -1.350 alongside -1.414 Her -1.999 formed Highest and lowest feature weights for relation has_spouse: 5.038 wife 4.283 widow 4.221 married ..... ..... -1.227 which -1.265 reported -1.298 Sir Highest and lowest feature weights for relation is_a: 2.789 2.692 order 2.467 philosopher ..... ..... -1.741 birds -3.094 cat -4.383 characin Highest and lowest feature weights for relation nationality: 2.932 born 1.859 leaving 1.839 Set ..... ..... -1.406 or -1.608 1961 -1.710 American Highest and lowest feature weights for relation parents: 4.626 daughter 4.525 father 4.495 son ..... ..... -1.487 defeated -1.524 Sonam -1.584 filmmaker Highest and lowest feature weights for relation place_of_birth: 3.997 born 3.004 birthplace 2.905 mayor ..... ..... -1.319 American -1.412 or -1.507 and Highest and lowest feature weights for relation place_of_death: 2.330 died 1.821 where 1.660 living ..... ..... -1.225 as -1.232 and -1.283 created Highest and lowest feature weights for relation profession: 3.338 2.538 philosopher 2.377 American ..... ..... -1.298 Texas -1.302 in -1.972 on Highest and lowest feature weights for relation worked_at: 3.077 CEO 2.922 professor 2.818 employee ..... ..... -1.406 bassist -1.684 family -1.730 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. 
Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are _true_ but _absent from the KB_, we can't evalute this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. ###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Pakistan', obj='Lahore') 1.000 KBTriple(rel='adjoins', sbj='Lahore', obj='Pakistan') 1.000 KBTriple(rel='adjoins', sbj='Sicily', obj='Italy') 1.000 KBTriple(rel='adjoins', sbj='Italy', obj='Sicily') 1.000 KBTriple(rel='adjoins', sbj='Great_Britain', obj='Europe') 1.000 KBTriple(rel='adjoins', sbj='Europe', obj='Great_Britain') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='Brave_New_World') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj='The_Doors_of_Perception', obj='Aldous_Huxley') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj='Brave_New_World', obj='Aldous_Huxley') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Charles_Dickens', obj='Oliver_Twist') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Pakistan', obj='Lahore') 1.000 KBTriple(rel='capital', 
sbj='Lahore', obj='Pakistan') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='contains', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='contains', sbj='Tenerife', obj='Canary_Islands') 1.000 KBTriple(rel='contains', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='contains', sbj='Melbourne', obj='Australia') 1.000 KBTriple(rel='contains', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='Edmonton', obj='Canada') 1.000 KBTriple(rel='contains', sbj='Pakistan', obj='Lahore') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Hrithik_Roshan', obj='Kaho_Naa..._Pyaar_Hai') 1.000 KBTriple(rel='film_performance', sbj='Kaho_Naa..._Pyaar_Hai', obj='Hrithik_Roshan') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') Highest probability examples for relation genre: 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 0.997 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.989 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.989 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.986 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.986 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.953 KBTriple(rel='genre', sbj='Ronald_Reagan', obj='Jurassic_Park_III') 0.953 KBTriple(rel='genre', sbj='Jurassic_Park_III', obj='Ronald_Reagan') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', 
obj='Kate_McGarrigle') 1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Aretha_Franklin', obj='Dionne_Warwick') 1.000 KBTriple(rel='has_sibling', sbj='Dionne_Warwick', obj='Aretha_Franklin') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') 1.000 KBTriple(rel='is_a', sbj='Hibiscus', obj='Malvaceae') 1.000 KBTriple(rel='is_a', sbj='Malvaceae', obj='Hibiscus') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Suryavarman_II') 1.000 KBTriple(rel='nationality', sbj='Suryavarman_II', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='nationality', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') Highest probability examples for relation parents: ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Spring 2022" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. 
We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. 
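Before turning to those functions, here is a minimal sketch of the kind of additional featurizer the Featurizers section above invites you to write. It keeps the same `(kbt, corpus, feature_counter)` interface as `simple_bag_of_words_featurizer` but keeps "forward" and "reverse" contexts distinct; the `fwd_`/`rev_` feature-name prefixes are purely illustrative: ###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
    # Same interface as simple_bag_of_words_featurizer, but words from
    # "forward" and "reverse" corpus examples get distinct feature names.
    # The 'fwd_'/'rev_' prefixes are illustrative, not part of rel_ext.
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter['fwd_' + word] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter['rev_' + word] += 1
    return feature_counter
###Output _____no_output_____ ###Markdown Any featurizer with this interface can simply be added to the `featurizers` list passed to the functions introduced next.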
We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train (by default, 'train'), and a model factory, which is a function which initializes an `sklearn` classifier (by default, a logistic regression classifier). It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by relation). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. ###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. 
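Before running it, here is a rough sketch of that chaining, using only calls already shown in this notebook; the real `rel_ext.experiment()` presumably accepts more options than this default flow covers: ###Code
import rel_ext  # already imported above; repeated so the sketch is self-contained

def experiment_sketch(splits, featurizers):
    # 1. Train one classifier per relation (on the 'train' split by default).
    train_result = rel_ext.train_models(splits, featurizers=featurizers)
    # 2. Predict on the 'dev' split.
    predictions, true_labels = rel_ext.predict(
        splits, train_result, split_name='dev')
    # 3. Print per-relation precision/recall/F0.5 and the macro-average.
    rel_ext.evaluate_predictions(predictions, true_labels)
    # Hand back the training artifacts, as rel_ext.experiment() does.
    return train_result
###Output _____no_output_____ ###Markdown The cell below runs the real function in its default configuration: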
###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.511 Córdoba 2.467 Taluks 2.434 Valais ..... ..... -1.143 for -1.186 Egypt -1.277 America Highest and lowest feature weights for relation author: 3.055 author 3.032 books 2.342 by ..... ..... -2.002 directed -2.019 or -2.211 poetry Highest and lowest feature weights for relation capital: 3.922 capital 2.163 especially 2.155 city ..... ..... -1.238 and -1.263 being -1.959 borough Highest and lowest feature weights for relation contains: 2.768 bordered 2.716 third-largest 2.219 tiny ..... ..... -3.502 Midlands -3.954 Siege -3.969 destroyed Highest and lowest feature weights for relation film_performance: 4.004 starring 3.731 alongside 3.199 opposite ..... ..... -1.702 then -1.840 She -1.889 Genghis Highest and lowest feature weights for relation founders: 3.677 founded 3.276 founder 2.779 label ..... ..... -1.795 William -1.850 Griffith -1.854 Wilson Highest and lowest feature weights for relation genre: 3.092 series 2.800 game 2.622 album ..... ..... -1.296 animated -1.434 and -1.949 at Highest and lowest feature weights for relation has_sibling: 5.196 brother 3.933 sister 2.747 nephew ..... ..... -1.293 ' -1.312 from -1.437 including Highest and lowest feature weights for relation has_spouse: 5.319 wife 4.652 married 4.617 husband ..... ..... -1.528 between -1.559 MTV -1.599 Terri Highest and lowest feature weights for relation is_a: 3.182 family 2.898 philosopher 2.623 ..... ..... -1.411 now -1.441 beans -1.618 at Highest and lowest feature weights for relation nationality: 2.887 born 1.933 president 1.843 caliph ..... ..... -1.467 or -1.540 ; -1.729 American Highest and lowest feature weights for relation parents: 5.108 son 4.437 father 4.400 daughter ..... 
..... -1.053 a -1.070 England -1.210 in Highest and lowest feature weights for relation place_of_birth: 3.980 born 2.843 birthplace 2.702 mayor ..... ..... -1.276 Mughal -1.392 or -1.426 and Highest and lowest feature weights for relation place_of_death: 2.161 assassinated 2.027 died 1.837 Germany ..... ..... -1.246 ; -1.256 as -1.474 Siege Highest and lowest feature weights for relation profession: 3.148 2.727 American 2.635 philosopher ..... ..... -1.212 at -1.348 in -1.986 on Highest and lowest feature weights for relation worked_at: 3.107 president 2.913 head 2.743 professor ..... ..... -1.134 province -1.150 author -1.714 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are *true* but *absent from the KB*, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. 
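Conceptually, the ranking works roughly as in the sketch below. This is not the `rel_ext` implementation: it assumes a pre-built list of candidate entity pairs, a trained per-relation classifier and fitted vectorizer (as returned by `train_models()`), and that a `KBTriple` can be constructed with the `rel`/`sbj`/`obj` fields seen in the outputs above. ###Code
from collections import Counter

def rank_candidates_sketch(pairs, rel, classifier, vectorizer, corpus,
                           featurizer=simple_bag_of_words_featurizer, k=10):
    # `pairs`: (sbj, obj) entity pairs that co-occur in the corpus but appear
    # in no KB relation. `classifier` and `vectorizer` stand for the trained
    # model for `rel` and the fitted vectorizer returned by train_models();
    # this plumbing is hypothetical -- the real helper manages it internally.
    scored = []
    for sbj, obj in pairs:
        kbt = rel_ext.KBTriple(rel=rel, sbj=sbj, obj=obj)
        feats = featurizer(kbt, corpus, Counter())
        X = vectorizer.transform([feats])
        # Probability of the positive class, assumed to sit in column 1.
        scored.append((classifier.predict_proba(X)[0, 1], kbt))
    scored.sort(reverse=True)
    return scored[:k]
###Output _____no_output_____ ###Markdown The built-in helper below does this for all relations at once and prints the top candidates: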
###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Australia', obj='Sydney') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='Australia') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='adjoins', sbj='United_Arab_Emirates', obj='Dubai') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='adjoins', sbj='New_South_Wales', obj='Sydney') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj="Euclid's_Elements", obj='Euclid') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe') 1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Bandung', obj='West_Java') 1.000 KBTriple(rel='capital', sbj='West_Java', obj='Bandung') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Delhi', obj='India') 1.000 KBTriple(rel='contains', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand') 1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='contains', sbj='India', obj='Delhi') 1.000 KBTriple(rel='contains', sbj='Uttarakhand', obj='India') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') 1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines') 1.000 KBTriple(rel='contains', sbj='Canary_Islands', obj='Tenerife') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', 
obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='film_performance', sbj='Homer', obj='Iliad') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') Highest probability examples for relation genre: 1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.997 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.991 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.991 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.981 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize') 0.981 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='has_sibling', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 
KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='nationality', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district') 1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu') Highest probability examples for relation parents: 1.000 KBTriple(rel='parents', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='parents', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='parents', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='parents', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='parents', sbj='Anne_Boleyn', obj='Thomas_Boleyn,_1st_Earl_of_Wiltshire') 1.000 KBTriple(rel='parents', sbj='Thomas_Boleyn,_1st_Earl_of_Wiltshire', obj='Anne_Boleyn') 1.000 KBTriple(rel='parents', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='parents', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='parents', sbj='Saddam_Hussein', obj='Uday_Hussein') 1.000 KBTriple(rel='parents', sbj='Uday_Hussein', obj='Saddam_Hussein') Highest probability examples for relation place_of_birth: 1.000 KBTriple(rel='place_of_birth', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='place_of_birth', sbj='Uttar_Pradesh', obj='Lucknow') 0.999 KBTriple(rel='place_of_birth', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 0.999 KBTriple(rel='place_of_birth', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 0.999 KBTriple(rel='place_of_birth', sbj='Nepal', obj='Bagmati_Zone') 0.999 KBTriple(rel='place_of_birth', sbj='Bagmati_Zone', obj='Nepal') 0.998 KBTriple(rel='place_of_birth', sbj='Chengdu', obj='Sichuan') 0.998 KBTriple(rel='place_of_birth', sbj='Sichuan', obj='Chengdu') 0.998 KBTriple(rel='place_of_birth', sbj='San_Antonio', obj='Actor') 0.998 KBTriple(rel='place_of_birth', sbj='Actor', obj='San_Antonio') Highest probability examples for relation place_of_death: 1.000 KBTriple(rel='place_of_death', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='place_of_death', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='place_of_death', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='place_of_death', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='place_of_death', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 
KBTriple(rel='place_of_death', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='place_of_death', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Trajan') 1.000 KBTriple(rel='place_of_death', sbj='Trajan', obj='Roman_Empire') Highest probability examples for relation profession: 1.000 KBTriple(rel='profession', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='profession', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='profession', sbj='Little_Women', obj='Louisa_May_Alcott') 1.000 KBTriple(rel='profession', sbj='Louisa_May_Alcott', obj='Little_Women') 0.999 KBTriple(rel='profession', sbj='Aldous_Huxley', obj='Eyeless_in_Gaza') 0.999 KBTriple(rel='profession', sbj='Eyeless_in_Gaza', obj='Aldous_Huxley') 0.999 KBTriple(rel='profession', sbj='Jess_Margera', obj='April_Margera') 0.999 KBTriple(rel='profession', sbj='April_Margera', obj='Jess_Margera') 0.999 KBTriple(rel='profession', sbj='Actor', obj='Screenwriter') 0.999 KBTriple(rel='profession', sbj='Screenwriter', obj='Actor') Highest probability examples for relation worked_at: 1.000 KBTriple(rel='worked_at', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='worked_at', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='worked_at', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='worked_at', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='worked_at', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='worked_at', sbj='Stan_Lee', obj='Marvel_Comics') 1.000 KBTriple(rel='worked_at', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='worked_at', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='worked_at', sbj='Comic_book', obj='Marvel_Comics') 1.000 KBTriple(rel='worked_at', sbj='Marvel_Comics', obj='Comic_book') ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Fall 2020" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). 
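If the data download from the set-up step is missing, the next cell will fail with a file-not-found error. A quick pre-flight check like the following sketch (it assumes only the two file names used in the next cell) makes that failure mode obvious before any loading starts. ###Code
# Sketch: confirm the corpus and KB files are where the next cell expects them.
for fname in ('corpus.tsv.gz', 'kb.tsv.gz'):
    path = os.path.join(rel_ext_data_home, fname)
    print(path, '-> found' if os.path.exists(path) else '-> MISSING')
###Output _____no_output_____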
###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train (by default, 'train'), and a model factory, which is a function which initializes an `sklearn` classifier (by default, a logistic regression classifier). It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by relation). 
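Before generating predictions, note that the model factory mentioned in the description of `train_models()` makes it easy to swap in a different `sklearn` classifier. The sketch below assumes the keyword argument is spelled `model_factory`, which the prose suggests but the calls in this notebook do not show, so the call is left commented out; the logistic-regression models trained above remain our reference point. ###Code
# Sketch of a custom model factory for train_models(); the keyword name
# model_factory is an assumption based on the description above.
from sklearn.svm import LinearSVC

def svc_model_factory():
    # Any function that returns a freshly initialized sklearn classifier.
    # Note: LinearSVC has no predict_proba, so probability-based analyses
    # later in the notebook would need a different choice.
    return LinearSVC(C=1.0)

# train_result_svc = rel_ext.train_models(
#     splits,
#     featurizers=[simple_bag_of_words_featurizer],
#     model_factory=svc_model_factory)
###Output _____no_output_____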
###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. ###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. ###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? 
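Before using the library helper in the next cell, here is a sketch of how one could poke at the weights directly. The keys `'models'` and `'vectorizer'` are hypothetical stand-ins for wherever `train_result` actually stores the per-relation classifiers and the shared vectorizer, and `get_feature_names_out` assumes a recent `sklearn`, so the call is left commented out. ###Code
# Sketch: inspect the largest and smallest weights for one relation by hand.
import numpy as np

def top_features_sketch(train_result, rel, k=3):
    clf = train_result['models'][rel]               # hypothetical key
    vec = train_result['vectorizer']                # hypothetical key
    names = np.array(vec.get_feature_names_out())   # recent sklearn API
    order = np.argsort(clf.coef_[0])
    print('lowest :', [(names[i], round(float(clf.coef_[0][i]), 3)) for i in order[:k]])
    print('highest:', [(names[i], round(float(clf.coef_[0][i]), 3)) for i in order[-k:]])

# top_features_sketch(train_result, 'author')
###Output _____no_output_____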
###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.511 Córdoba 2.467 Taluks 2.434 Valais ..... ..... -1.143 for -1.186 Egypt -1.277 America Highest and lowest feature weights for relation author: 3.055 author 3.032 books 2.342 by ..... ..... -2.002 directed -2.019 or -2.211 poetry Highest and lowest feature weights for relation capital: 3.922 capital 2.163 especially 2.155 city ..... ..... -1.238 and -1.263 being -1.959 borough Highest and lowest feature weights for relation contains: 2.768 bordered 2.716 third-largest 2.219 tiny ..... ..... -3.502 Midlands -3.954 Siege -3.969 destroyed Highest and lowest feature weights for relation film_performance: 4.004 starring 3.731 alongside 3.199 opposite ..... ..... -1.702 then -1.840 She -1.889 Genghis Highest and lowest feature weights for relation founders: 3.677 founded 3.276 founder 2.779 label ..... ..... -1.795 William -1.850 Griffith -1.854 Wilson Highest and lowest feature weights for relation genre: 3.092 series 2.800 game 2.622 album ..... ..... -1.296 animated -1.434 and -1.949 at Highest and lowest feature weights for relation has_sibling: 5.196 brother 3.933 sister 2.747 nephew ..... ..... -1.293 ' -1.312 from -1.437 including Highest and lowest feature weights for relation has_spouse: 5.319 wife 4.652 married 4.617 husband ..... ..... -1.528 between -1.559 MTV -1.599 Terri Highest and lowest feature weights for relation is_a: 3.182 family 2.898 philosopher 2.623 ..... ..... -1.411 now -1.441 beans -1.618 at Highest and lowest feature weights for relation nationality: 2.887 born 1.933 president 1.843 caliph ..... ..... -1.467 or -1.540 ; -1.729 American Highest and lowest feature weights for relation parents: 5.108 son 4.437 father 4.400 daughter ..... ..... -1.053 a -1.070 England -1.210 in Highest and lowest feature weights for relation place_of_birth: 3.980 born 2.843 birthplace 2.702 mayor ..... ..... -1.276 Mughal -1.392 or -1.426 and Highest and lowest feature weights for relation place_of_death: 2.161 assassinated 2.027 died 1.837 Germany ..... ..... -1.246 ; -1.256 as -1.474 Siege Highest and lowest feature weights for relation profession: 3.148 2.727 American 2.635 philosopher ..... ..... -1.212 at -1.348 in -1.986 on Highest and lowest feature weights for relation worked_at: 3.107 president 2.913 head 2.743 professor ..... ..... -1.134 province -1.150 author -1.714 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. 
Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are *true* but *absent from the KB*, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. ###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Australia', obj='Sydney') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='Australia') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='adjoins', sbj='United_Arab_Emirates', obj='Dubai') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='adjoins', sbj='New_South_Wales', obj='Sydney') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj="Euclid's_Elements", obj='Euclid') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe') 1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Bandung', obj='West_Java') 1.000 
KBTriple(rel='capital', sbj='West_Java', obj='Bandung') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Delhi', obj='India') 1.000 KBTriple(rel='contains', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand') 1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='contains', sbj='India', obj='Delhi') 1.000 KBTriple(rel='contains', sbj='Uttarakhand', obj='India') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') 1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines') 1.000 KBTriple(rel='contains', sbj='Canary_Islands', obj='Tenerife') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='film_performance', sbj='Homer', obj='Iliad') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') Highest probability examples for relation genre: 1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.997 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.991 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.991 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.981 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize') 0.981 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 
KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='has_sibling', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='nationality', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district') 1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu') Highest probability examples for relation parents: ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Spring 2021" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. 
We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle corpus.get_examples_for_entities(kbt.sbj, kbt.obj) simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. 
The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train (by default, 'train'), and a model factory, which is a function which initializes an `sklearn` classifier (by default, a logistic regression classifier). It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by relation). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. ###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.877 0.386 0.699 407 7057 author 0.814 0.534 0.737 657 7307 capital 0.679 0.286 0.533 126 6776 contains 0.781 0.605 0.738 4487 11137 film_performance 0.805 0.596 0.752 984 7634 founders 0.789 0.407 0.665 469 7119 genre 0.558 0.141 0.351 205 6855 has_sibling 0.880 0.258 0.593 625 7275 has_spouse 0.890 0.345 0.676 754 7404 is_a 0.691 0.217 0.481 618 7268 nationality 0.589 0.163 0.387 386 7036 parents 0.865 0.541 0.772 390 7040 place_of_birth 0.641 0.209 0.454 282 6932 place_of_death 0.457 0.100 0.267 209 6859 profession 0.619 0.169 0.404 308 6958 worked_at 0.730 0.267 0.542 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.729 0.327 0.566 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. 
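Because `experiment()` takes a list of featurizers with the signature used by `simple_bag_of_words_featurizer`, trying variants is cheap. As one sketch, the featurizer below keeps the direction of each corpus example by prefixing words with `FWD:` or `REV:`; whether that helps is an empirical question, and the run is left commented out so the cell below stays the baseline. ###Code
# Sketch: a direction-aware variant of the bag-of-words featurizer.
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter['FWD:' + word] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter['REV:' + word] += 1
    return feature_counter

# _ = rel_ext.experiment(
#     splits, featurizers=[directional_bag_of_words_featurizer])
###Output _____no_output_____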
###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.877 0.386 0.699 407 7057 author 0.814 0.534 0.737 657 7307 capital 0.679 0.286 0.533 126 6776 contains 0.781 0.605 0.738 4487 11137 film_performance 0.805 0.596 0.752 984 7634 founders 0.789 0.407 0.665 469 7119 genre 0.558 0.141 0.351 205 6855 has_sibling 0.880 0.258 0.593 625 7275 has_spouse 0.890 0.345 0.676 754 7404 is_a 0.691 0.217 0.481 618 7268 nationality 0.589 0.163 0.387 386 7036 parents 0.865 0.541 0.772 390 7040 place_of_birth 0.641 0.209 0.454 282 6932 place_of_death 0.457 0.100 0.267 209 6859 profession 0.619 0.169 0.404 308 6958 worked_at 0.730 0.267 0.542 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.729 0.327 0.566 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.511 Córdoba 2.467 Taluks 2.434 Valais ..... ..... -1.143 for -1.186 Egypt -1.277 America Highest and lowest feature weights for relation author: 3.055 author 3.032 books 2.342 by ..... ..... -2.002 directed -2.019 or -2.211 poetry Highest and lowest feature weights for relation capital: 3.922 capital 2.163 especially 2.155 city ..... ..... -1.238 and -1.263 being -1.959 borough Highest and lowest feature weights for relation contains: 2.768 bordered 2.716 third-largest 2.219 tiny ..... ..... -3.502 Midlands -3.954 Siege -3.969 destroyed Highest and lowest feature weights for relation film_performance: 4.004 starring 3.731 alongside 3.199 opposite ..... ..... -1.702 then -1.840 She -1.889 Genghis Highest and lowest feature weights for relation founders: 3.677 founded 3.276 founder 2.779 label ..... ..... -1.795 William -1.850 Griffith -1.854 Wilson Highest and lowest feature weights for relation genre: 3.092 series 2.800 game 2.622 album ..... ..... -1.296 animated -1.434 and -1.949 at Highest and lowest feature weights for relation has_sibling: 5.196 brother 3.933 sister 2.747 nephew ..... ..... -1.293 ' -1.312 from -1.437 including Highest and lowest feature weights for relation has_spouse: 5.319 wife 4.652 married 4.617 husband ..... ..... -1.528 between -1.559 MTV -1.599 Terri Highest and lowest feature weights for relation is_a: 3.182 family 2.898 philosopher 2.623 ..... ..... -1.411 now -1.441 beans -1.618 at Highest and lowest feature weights for relation nationality: 2.887 born 1.933 president 1.843 caliph ..... ..... -1.467 or -1.540 ; -1.729 American Highest and lowest feature weights for relation parents: 5.108 son 4.437 father 4.400 daughter ..... 
..... -1.053 a -1.070 England -1.210 in Highest and lowest feature weights for relation place_of_birth: 3.980 born 2.843 birthplace 2.702 mayor ..... ..... -1.276 Mughal -1.392 or -1.426 and Highest and lowest feature weights for relation place_of_death: 2.161 assassinated 2.027 died 1.837 Germany ..... ..... -1.246 ; -1.256 as -1.474 Siege Highest and lowest feature weights for relation profession: 3.148 2.727 American 2.635 philosopher ..... ..... -1.212 at -1.348 in -1.986 on Highest and lowest feature weights for relation worked_at: 3.107 president 2.913 head 2.743 professor ..... ..... -1.134 province -1.150 author -1.714 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are *true* but *absent from the KB*, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. 
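One thing to watch for in the ranked output below: most entity pairs appear twice, once in each direction (for example Canada/Vancouver and Vancouver/Canada), because the simple featurizer ignores direction. If you capture the ranked (probability, `KBTriple`) pairs for manual review, a small helper like this sketch can collapse the mirror images; the `ranked` argument is an assumed data structure, not something the next cell returns for us. ###Code
# Sketch: collapse (sbj, obj) and (obj, sbj) duplicates in a ranked candidate list.
def dedupe_mirrored(ranked):
    seen, kept = set(), []
    for prob, kbt in ranked:
        key = (kbt.rel, frozenset((kbt.sbj, kbt.obj)))
        if key not in seen:
            seen.add(key)
            kept.append((prob, kbt))
    return kept
###Output _____no_output_____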
###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Australia', obj='Sydney') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='Australia') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='adjoins', sbj='United_Arab_Emirates', obj='Dubai') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='adjoins', sbj='New_South_Wales', obj='Sydney') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj="Euclid's_Elements", obj='Euclid') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe') 1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Bandung', obj='West_Java') 1.000 KBTriple(rel='capital', sbj='West_Java', obj='Bandung') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Delhi', obj='India') 1.000 KBTriple(rel='contains', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand') 1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='contains', sbj='India', obj='Delhi') 1.000 KBTriple(rel='contains', sbj='Uttarakhand', obj='India') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') 1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines') 1.000 KBTriple(rel='contains', sbj='Canary_Islands', obj='Tenerife') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', 
obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='film_performance', sbj='Homer', obj='Iliad') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') Highest probability examples for relation genre: 1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.997 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.991 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.991 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.981 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize') 0.981 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='has_sibling', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 
KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='nationality', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district') 1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu') Highest probability examples for relation parents: 1.000 KBTriple(rel='parents', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='parents', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='parents', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='parents', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='parents', sbj='Anne_Boleyn', obj='Thomas_Boleyn,_1st_Earl_of_Wiltshire') 1.000 KBTriple(rel='parents', sbj='Thomas_Boleyn,_1st_Earl_of_Wiltshire', obj='Anne_Boleyn') 1.000 KBTriple(rel='parents', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='parents', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='parents', sbj='Saddam_Hussein', obj='Uday_Hussein') 1.000 KBTriple(rel='parents', sbj='Uday_Hussein', obj='Saddam_Hussein') Highest probability examples for relation place_of_birth: 1.000 KBTriple(rel='place_of_birth', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='place_of_birth', sbj='Uttar_Pradesh', obj='Lucknow') 0.999 KBTriple(rel='place_of_birth', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 0.999 KBTriple(rel='place_of_birth', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 0.999 KBTriple(rel='place_of_birth', sbj='Nepal', obj='Bagmati_Zone') 0.999 KBTriple(rel='place_of_birth', sbj='Bagmati_Zone', obj='Nepal') 0.998 KBTriple(rel='place_of_birth', sbj='Chengdu', obj='Sichuan') 0.998 KBTriple(rel='place_of_birth', sbj='Sichuan', obj='Chengdu') 0.998 KBTriple(rel='place_of_birth', sbj='San_Antonio', obj='Actor') 0.998 KBTriple(rel='place_of_birth', sbj='Actor', obj='San_Antonio') Highest probability examples for relation place_of_death: 1.000 KBTriple(rel='place_of_death', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='place_of_death', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='place_of_death', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='place_of_death', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='place_of_death', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 
KBTriple(rel='place_of_death', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='place_of_death', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Trajan') 1.000 KBTriple(rel='place_of_death', sbj='Trajan', obj='Roman_Empire') Highest probability examples for relation profession: 1.000 KBTriple(rel='profession', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='profession', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='profession', sbj='Little_Women', obj='Louisa_May_Alcott') 1.000 KBTriple(rel='profession', sbj='Louisa_May_Alcott', obj='Little_Women') 0.999 KBTriple(rel='profession', sbj='Aldous_Huxley', obj='Eyeless_in_Gaza') 0.999 KBTriple(rel='profession', sbj='Eyeless_in_Gaza', obj='Aldous_Huxley') 0.999 KBTriple(rel='profession', sbj='Jess_Margera', obj='April_Margera') 0.999 KBTriple(rel='profession', sbj='April_Margera', obj='Jess_Margera') 0.999 KBTriple(rel='profession', sbj='Actor', obj='Screenwriter') 0.999 KBTriple(rel='profession', sbj='Screenwriter', obj='Actor') Highest probability examples for relation worked_at: 1.000 KBTriple(rel='worked_at', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='worked_at', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='worked_at', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='worked_at', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='worked_at', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='worked_at', sbj='Stan_Lee', obj='Marvel_Comics') 1.000 KBTriple(rel='worked_at', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='worked_at', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='worked_at', sbj='Comic_book', obj='Marvel_Comics') 1.000 KBTriple(rel='worked_at', sbj='Marvel_Comics', obj='Comic_book') ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Spring 2020" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). 
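Before loading the real files, it helps to picture the shape of the two resources we are uniting. A corpus example records the text around a pair of entity mentions, including the span between them (the `middle` attribute our featurizer will tokenize below), and a KB triple is simply a (relation, subject, object) record like the `KBTriple(rel=..., sbj=..., obj=...)` values printed throughout this notebook. The next cell is only an illustrative sketch built from namedtuple stand-ins, not the actual `rel_ext.Corpus` and `rel_ext.KB` classes, whose field names may differ.
###Code
from collections import namedtuple

# Toy stand-ins for the real rel_ext data structures (illustration only).
ToyExample = namedtuple('ToyExample', ['entity_1', 'middle', 'entity_2'])
ToyKBTriple = namedtuple('ToyKBTriple', ['rel', 'sbj', 'obj'])

toy_ex = ToyExample(entity_1='SpaceX', middle='was founded by', entity_2='Elon_Musk')
toy_kbt = ToyKBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk')

toy_ex.middle                            # the span between the two mentions
(toy_kbt.rel, toy_kbt.sbj, toy_kbt.obj)  # the pieces of a KB triple
###Output _____no_output_____ ###Markdown With that picture in mind, the next cell loads the real corpus and KB and combines them into a `Dataset`: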
###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train, and a model factory, which is a function which initializes an `sklearn` classifier. It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by relation). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`.
This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. ###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.877 0.386 0.699 407 7057 author 0.810 0.519 0.728 657 7307 capital 0.652 0.238 0.484 126 6776 contains 0.778 0.605 0.736 4487 11137 film_performance 0.782 0.597 0.736 984 7634 founders 0.822 0.414 0.686 469 7119 genre 0.517 0.151 0.348 205 6855 has_sibling 0.858 0.251 0.578 625 7275 has_spouse 0.892 0.338 0.672 754 7404 is_a 0.705 0.217 0.486 618 7268 nationality 0.578 0.192 0.412 386 7036 parents 0.827 0.538 0.747 390 7040 place_of_birth 0.558 0.206 0.415 282 6932 place_of_death 0.415 0.105 0.261 209 6859 profession 0.659 0.188 0.439 308 6958 worked_at 0.705 0.261 0.526 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.715 0.325 0.560 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. ###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.877 0.386 0.699 407 7057 author 0.810 0.519 0.728 657 7307 capital 0.652 0.238 0.484 126 6776 contains 0.778 0.605 0.736 4487 11137 film_performance 0.782 0.597 0.736 984 7634 founders 0.822 0.414 0.686 469 7119 genre 0.517 0.151 0.348 205 6855 has_sibling 0.858 0.251 0.578 625 7275 has_spouse 0.892 0.338 0.672 754 7404 is_a 0.705 0.217 0.486 618 7268 nationality 0.578 0.192 0.412 386 7036 parents 0.827 0.538 0.747 390 7040 place_of_birth 0.558 0.206 0.415 282 6932 place_of_death 0.415 0.105 0.261 209 6859 profession 0.659 0.188 0.439 308 6958 worked_at 0.705 0.261 0.526 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.715 0.325 0.560 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.556 Taluks 2.483 Córdoba 2.481 Valais ..... ..... 
-1.316 Cook -1.438 he -1.459 who Highest and lowest feature weights for relation author: 2.699 book 2.566 musical 2.507 books ..... ..... -2.791 1945 -2.885 17th -2.998 1818 Highest and lowest feature weights for relation capital: 3.700 capital 1.718 km 1.459 posted ..... ..... -1.165 southwestern -1.612 Dehradun -1.870 state Highest and lowest feature weights for relation contains: 2.288 bordered 2.119 Ontario 2.021 third-largest ..... ..... -2.347 Midlands -2.496 who -2.718 Mile Highest and lowest feature weights for relation film_performance: 4.404 alongside 4.049 starring 3.604 movie ..... ..... -1.578 poem -1.718 tragedy -1.756 or Highest and lowest feature weights for relation founders: 3.993 founded 3.865 founder 3.435 co-founder ..... ..... -1.587 band -1.673 novel -1.764 Bauhaus Highest and lowest feature weights for relation genre: 2.792 series 2.776 movie 2.635 album ..... ..... -1.326 's -1.410 and -1.664 at Highest and lowest feature weights for relation has_sibling: 5.362 brother 4.208 sister 2.790 Marlon ..... ..... -1.350 alongside -1.414 Her -1.999 formed Highest and lowest feature weights for relation has_spouse: 5.038 wife 4.283 widow 4.221 married ..... ..... -1.227 which -1.265 reported -1.298 Sir Highest and lowest feature weights for relation is_a: 2.789 2.692 order 2.467 philosopher ..... ..... -1.741 birds -3.094 cat -4.383 characin Highest and lowest feature weights for relation nationality: 2.932 born 1.859 leaving 1.839 Set ..... ..... -1.406 or -1.608 1961 -1.710 American Highest and lowest feature weights for relation parents: 4.626 daughter 4.525 father 4.495 son ..... ..... -1.487 defeated -1.524 Sonam -1.584 filmmaker Highest and lowest feature weights for relation place_of_birth: 3.997 born 3.004 birthplace 2.905 mayor ..... ..... -1.319 American -1.412 or -1.507 and Highest and lowest feature weights for relation place_of_death: 2.330 died 1.821 where 1.660 living ..... ..... -1.225 as -1.232 and -1.283 created Highest and lowest feature weights for relation profession: 3.338 2.538 philosopher 2.377 American ..... ..... -1.298 Texas -1.302 in -1.972 on Highest and lowest feature weights for relation worked_at: 3.077 CEO 2.922 professor 2.818 employee ..... ..... -1.406 bassist -1.684 family -1.730 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. 
In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are _true_ but _absent from the KB_, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. ###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Pakistan', obj='Lahore') 1.000 KBTriple(rel='adjoins', sbj='Lahore', obj='Pakistan') 1.000 KBTriple(rel='adjoins', sbj='Sicily', obj='Italy') 1.000 KBTriple(rel='adjoins', sbj='Italy', obj='Sicily') 1.000 KBTriple(rel='adjoins', sbj='Great_Britain', obj='Europe') 1.000 KBTriple(rel='adjoins', sbj='Europe', obj='Great_Britain') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='Brave_New_World') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj='The_Doors_of_Perception', obj='Aldous_Huxley') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj='Brave_New_World', obj='Aldous_Huxley') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Charles_Dickens', obj='Oliver_Twist') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Pakistan', obj='Lahore') 1.000 KBTriple(rel='capital', sbj='Lahore', obj='Pakistan') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='contains', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='contains', sbj='Tenerife', obj='Canary_Islands') 1.000 KBTriple(rel='contains', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='contains', sbj='Melbourne', obj='Australia') 1.000 KBTriple(rel='contains', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='contains', sbj='Campania', 
obj='Naples') 1.000 KBTriple(rel='contains', sbj='Edmonton', obj='Canada') 1.000 KBTriple(rel='contains', sbj='Pakistan', obj='Lahore') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Hrithik_Roshan', obj='Kaho_Naa..._Pyaar_Hai') 1.000 KBTriple(rel='film_performance', sbj='Kaho_Naa..._Pyaar_Hai', obj='Hrithik_Roshan') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') Highest probability examples for relation genre: 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 0.997 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.989 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.989 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.986 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.986 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.953 KBTriple(rel='genre', sbj='Ronald_Reagan', obj='Jurassic_Park_III') 0.953 KBTriple(rel='genre', sbj='Jurassic_Park_III', obj='Ronald_Reagan') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Aretha_Franklin', obj='Dionne_Warwick') 1.000 KBTriple(rel='has_sibling', sbj='Dionne_Warwick', obj='Aretha_Franklin') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', 
obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') 1.000 KBTriple(rel='is_a', sbj='Hibiscus', obj='Malvaceae') 1.000 KBTriple(rel='is_a', sbj='Malvaceae', obj='Hibiscus') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Suryavarman_II') 1.000 KBTriple(rel='nationality', sbj='Suryavarman_II', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='nationality', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') Highest probability examples for relation parents: ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Spring 2021" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. 
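One practical detail before the imports: the first code cell below calls `utils.fix_random_seeds()`, which pins the random seeds so that repeated runs of this notebook produce the same numbers. If you are adapting the code outside the course repository, a minimal stand-in for that helper might look like the sketch below; this is an assumption about what such a function does (the real `utils.fix_random_seeds` may also seed other libraries), not the course implementation.
###Code
import random

import numpy as np

def fix_random_seeds_sketch(seed=42):
    """Hypothetical stand-in for utils.fix_random_seeds: pin the Python
    and NumPy random number generators to a fixed seed."""
    random.seed(seed)
    np.random.seed(seed)
###Output _____no_output_____ ###Markdown With reproducibility taken care of, here are the notebook's own imports and set-up: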
###Code from collections import Counter import os import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train (by default, 'train'), and a model factory, which is a function which initializes an `sklearn` classifier (by default, a logistic regression classifier). It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. 
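The next cell trains with those defaults. Because the classifier comes from a factory function, trying a different model is a small change. The sketch below shows the L1-regularized configuration mentioned in the spoiler comments later in this notebook; note that `model_factory` is an assumption about the keyword name, so check the `rel_ext.train_models` signature in your copy of the repository before running it.
###Code
from sklearn.linear_model import LogisticRegression

# Optional, non-default configuration (sketch only). `model_factory` is an
# assumed keyword name; penalty='l1' needs a solver that supports it,
# such as 'liblinear'.
sparse_train_result = rel_ext.train_models(
    splits,
    featurizers=[simple_bag_of_words_featurizer],
    model_factory=lambda: LogisticRegression(
        penalty='l1', C=0.1, solver='liblinear'))
###Output _____no_output_____ ###Markdown For the baseline results below, we stick with the defaults: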
###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by relation). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. ###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. ###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. 
To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.511 Córdoba 2.467 Taluks 2.434 Valais ..... ..... -1.143 for -1.186 Egypt -1.277 America Highest and lowest feature weights for relation author: 3.055 author 3.032 books 2.342 by ..... ..... -2.002 directed -2.019 or -2.211 poetry Highest and lowest feature weights for relation capital: 3.922 capital 2.163 especially 2.155 city ..... ..... -1.238 and -1.263 being -1.959 borough Highest and lowest feature weights for relation contains: 2.768 bordered 2.716 third-largest 2.219 tiny ..... ..... -3.502 Midlands -3.954 Siege -3.969 destroyed Highest and lowest feature weights for relation film_performance: 4.004 starring 3.731 alongside 3.199 opposite ..... ..... -1.702 then -1.840 She -1.889 Genghis Highest and lowest feature weights for relation founders: 3.677 founded 3.276 founder 2.779 label ..... ..... -1.795 William -1.850 Griffith -1.854 Wilson Highest and lowest feature weights for relation genre: 3.092 series 2.800 game 2.622 album ..... ..... -1.296 animated -1.434 and -1.949 at Highest and lowest feature weights for relation has_sibling: 5.196 brother 3.933 sister 2.747 nephew ..... ..... -1.293 ' -1.312 from -1.437 including Highest and lowest feature weights for relation has_spouse: 5.319 wife 4.652 married 4.617 husband ..... ..... -1.528 between -1.559 MTV -1.599 Terri Highest and lowest feature weights for relation is_a: 3.182 family 2.898 philosopher 2.623 ..... ..... -1.411 now -1.441 beans -1.618 at Highest and lowest feature weights for relation nationality: 2.887 born 1.933 president 1.843 caliph ..... ..... -1.467 or -1.540 ; -1.729 American Highest and lowest feature weights for relation parents: 5.108 son 4.437 father 4.400 daughter ..... ..... -1.053 a -1.070 England -1.210 in Highest and lowest feature weights for relation place_of_birth: 3.980 born 2.843 birthplace 2.702 mayor ..... ..... -1.276 Mughal -1.392 or -1.426 and Highest and lowest feature weights for relation place_of_death: 2.161 assassinated 2.027 died 1.837 Germany ..... ..... -1.246 ; -1.256 as -1.474 Siege Highest and lowest feature weights for relation profession: 3.148 2.727 American 2.635 philosopher ..... ..... -1.212 at -1.348 in -1.986 on Highest and lowest feature weights for relation worked_at: 3.107 president 2.913 head 2.743 professor ..... ..... -1.134 province -1.150 author -1.714 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. 
Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are *true* but *absent from the KB*, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. ###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Australia', obj='Sydney') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='Australia') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='adjoins', sbj='United_Arab_Emirates', obj='Dubai') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='adjoins', sbj='New_South_Wales', obj='Sydney') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj="Euclid's_Elements", obj='Euclid') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe') 1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 
KBTriple(rel='capital', sbj='Bandung', obj='West_Java') 1.000 KBTriple(rel='capital', sbj='West_Java', obj='Bandung') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Delhi', obj='India') 1.000 KBTriple(rel='contains', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand') 1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='contains', sbj='India', obj='Delhi') 1.000 KBTriple(rel='contains', sbj='Uttarakhand', obj='India') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') 1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines') 1.000 KBTriple(rel='contains', sbj='Canary_Islands', obj='Tenerife') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='film_performance', sbj='Homer', obj='Iliad') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') Highest probability examples for relation genre: 1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.997 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.991 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.991 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.981 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize') 0.981 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', 
sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='has_sibling', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='nationality', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district') 1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu') Highest probability examples for relation parents: 1.000 KBTriple(rel='parents', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='parents', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='parents', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='parents', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='parents', sbj='Anne_Boleyn', obj='Thomas_Boleyn,_1st_Earl_of_Wiltshire') 1.000 KBTriple(rel='parents', sbj='Thomas_Boleyn,_1st_Earl_of_Wiltshire', obj='Anne_Boleyn') 1.000 KBTriple(rel='parents', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='parents', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='parents', sbj='Saddam_Hussein', obj='Uday_Hussein') 
1.000 KBTriple(rel='parents', sbj='Uday_Hussein', obj='Saddam_Hussein') Highest probability examples for relation place_of_birth: 1.000 KBTriple(rel='place_of_birth', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='place_of_birth', sbj='Uttar_Pradesh', obj='Lucknow') 0.999 KBTriple(rel='place_of_birth', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 0.999 KBTriple(rel='place_of_birth', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 0.999 KBTriple(rel='place_of_birth', sbj='Nepal', obj='Bagmati_Zone') 0.999 KBTriple(rel='place_of_birth', sbj='Bagmati_Zone', obj='Nepal') 0.998 KBTriple(rel='place_of_birth', sbj='Chengdu', obj='Sichuan') 0.998 KBTriple(rel='place_of_birth', sbj='Sichuan', obj='Chengdu') 0.998 KBTriple(rel='place_of_birth', sbj='San_Antonio', obj='Actor') 0.998 KBTriple(rel='place_of_birth', sbj='Actor', obj='San_Antonio') Highest probability examples for relation place_of_death: 1.000 KBTriple(rel='place_of_death', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='place_of_death', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='place_of_death', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='place_of_death', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='place_of_death', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='place_of_death', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='place_of_death', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Trajan') 1.000 KBTriple(rel='place_of_death', sbj='Trajan', obj='Roman_Empire') Highest probability examples for relation profession: 1.000 KBTriple(rel='profession', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='profession', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='profession', sbj='Little_Women', obj='Louisa_May_Alcott') 1.000 KBTriple(rel='profession', sbj='Louisa_May_Alcott', obj='Little_Women') 0.999 KBTriple(rel='profession', sbj='Aldous_Huxley', obj='Eyeless_in_Gaza') 0.999 KBTriple(rel='profession', sbj='Eyeless_in_Gaza', obj='Aldous_Huxley') 0.999 KBTriple(rel='profession', sbj='Jess_Margera', obj='April_Margera') 0.999 KBTriple(rel='profession', sbj='April_Margera', obj='Jess_Margera') 0.999 KBTriple(rel='profession', sbj='Actor', obj='Screenwriter') 0.999 KBTriple(rel='profession', sbj='Screenwriter', obj='Actor') Highest probability examples for relation worked_at: 1.000 KBTriple(rel='worked_at', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='worked_at', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='worked_at', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='worked_at', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='worked_at', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='worked_at', sbj='Stan_Lee', obj='Marvel_Comics') 1.000 KBTriple(rel='worked_at', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='worked_at', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='worked_at', sbj='Comic_book', obj='Marvel_Comics') 1.000 KBTriple(rel='worked_at', sbj='Marvel_Comics', obj='Comic_book') ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Spring 2021" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. 
[Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. 
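The direction-blind implementation we will use appears in the next cell. As a point of contrast, here is a sketch of a directional variant; it is not part of `rel_ext`, but it relies only on the same `corpus.get_examples_for_entities` and `ex.middle` interface used below, and it keeps forward and reverse matches apart by prefixing each word.
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
    # Same contract as the simple featurizer below, but forward and
    # reverse matches contribute distinct features.
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter['fwd_' + word] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter['rev_' + word] += 1
    return feature_counter
###Output _____no_output_____ ###Markdown And here is the simple featurizer itself: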
###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train (by default, 'train'), and a model factory, which is a function which initializes an `sklearn` classifier (by default, a logistic regression classifier). It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by relation). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. 
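Before running it, it is worth pausing on the metric. The F0.5-score is the F_beta measure with beta = 0.5, that is, F_beta = (1 + beta^2) * P * R / (beta^2 * P + R) with P = precision and R = recall, which weights precision twice as heavily as recall. That emphasis makes sense here, since adding wrong facts to a KB is usually costlier than missing some true ones. The quick self-contained check below uses toy labels and standard `sklearn` metrics; it is not part of `rel_ext`.
###Code
from sklearn.metrics import fbeta_score, precision_score, recall_score

def f_beta(precision, recall, beta=0.5):
    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R); beta < 1 favors precision.
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Toy labels constructed so that precision is 0.8 and recall is 0.4:
y_true = [1] * 10 + [0]
y_pred = [1] * 4 + [0] * 6 + [1]

p = precision_score(y_true, y_pred)        # 0.8
r = recall_score(y_true, y_pred)           # 0.4
f_beta(p, r)                               # ~0.667, pulled toward precision
fbeta_score(y_true, y_pred, beta=0.5)      # same value from sklearn
###Output _____no_output_____ ###Markdown The macro-average reported below is the unweighted mean of the per-relation F0.5-scores, so every relation counts equally regardless of its support.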
###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. ###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.511 Córdoba 2.467 Taluks 2.434 Valais ..... ..... -1.143 for -1.186 Egypt -1.277 America Highest and lowest feature weights for relation author: 3.055 author 3.032 books 2.342 by ..... ..... -2.002 directed -2.019 or -2.211 poetry Highest and lowest feature weights for relation capital: 3.922 capital 2.163 especially 2.155 city ..... ..... 
-1.238 and -1.263 being -1.959 borough Highest and lowest feature weights for relation contains: 2.768 bordered 2.716 third-largest 2.219 tiny ..... ..... -3.502 Midlands -3.954 Siege -3.969 destroyed Highest and lowest feature weights for relation film_performance: 4.004 starring 3.731 alongside 3.199 opposite ..... ..... -1.702 then -1.840 She -1.889 Genghis Highest and lowest feature weights for relation founders: 3.677 founded 3.276 founder 2.779 label ..... ..... -1.795 William -1.850 Griffith -1.854 Wilson Highest and lowest feature weights for relation genre: 3.092 series 2.800 game 2.622 album ..... ..... -1.296 animated -1.434 and -1.949 at Highest and lowest feature weights for relation has_sibling: 5.196 brother 3.933 sister 2.747 nephew ..... ..... -1.293 ' -1.312 from -1.437 including Highest and lowest feature weights for relation has_spouse: 5.319 wife 4.652 married 4.617 husband ..... ..... -1.528 between -1.559 MTV -1.599 Terri Highest and lowest feature weights for relation is_a: 3.182 family 2.898 philosopher 2.623 ..... ..... -1.411 now -1.441 beans -1.618 at Highest and lowest feature weights for relation nationality: 2.887 born 1.933 president 1.843 caliph ..... ..... -1.467 or -1.540 ; -1.729 American Highest and lowest feature weights for relation parents: 5.108 son 4.437 father 4.400 daughter ..... ..... -1.053 a -1.070 England -1.210 in Highest and lowest feature weights for relation place_of_birth: 3.980 born 2.843 birthplace 2.702 mayor ..... ..... -1.276 Mughal -1.392 or -1.426 and Highest and lowest feature weights for relation place_of_death: 2.161 assassinated 2.027 died 1.837 Germany ..... ..... -1.246 ; -1.256 as -1.474 Siege Highest and lowest feature weights for relation profession: 3.148 2.727 American 2.635 philosopher ..... ..... -1.212 at -1.348 in -1.986 on Highest and lowest feature weights for relation worked_at: 3.107 president 2.913 head 2.743 professor ..... ..... -1.134 province -1.150 author -1.714 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. 
Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are *true* but *absent from the KB*, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. ###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Australia', obj='Sydney') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='Australia') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='adjoins', sbj='United_Arab_Emirates', obj='Dubai') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='adjoins', sbj='New_South_Wales', obj='Sydney') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj="Euclid's_Elements", obj='Euclid') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe') 1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Bandung', obj='West_Java') 1.000 KBTriple(rel='capital', sbj='West_Java', obj='Bandung') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Delhi', obj='India') 1.000 KBTriple(rel='contains', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand') 1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='contains', sbj='India', obj='Delhi') 1.000 KBTriple(rel='contains', sbj='Uttarakhand', obj='India') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') 1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines') 1.000 KBTriple(rel='contains', 
sbj='Canary_Islands', obj='Tenerife') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='film_performance', sbj='Homer', obj='Iliad') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') Highest probability examples for relation genre: 1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.997 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.991 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.991 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.981 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize') 0.981 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='has_sibling', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', 
sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='nationality', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district') 1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu') Highest probability examples for relation parents: 1.000 KBTriple(rel='parents', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='parents', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='parents', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='parents', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='parents', sbj='Anne_Boleyn', obj='Thomas_Boleyn,_1st_Earl_of_Wiltshire') 1.000 KBTriple(rel='parents', sbj='Thomas_Boleyn,_1st_Earl_of_Wiltshire', obj='Anne_Boleyn') 1.000 KBTriple(rel='parents', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='parents', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='parents', sbj='Saddam_Hussein', obj='Uday_Hussein') 1.000 KBTriple(rel='parents', sbj='Uday_Hussein', obj='Saddam_Hussein') Highest probability examples for relation place_of_birth: 1.000 KBTriple(rel='place_of_birth', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='place_of_birth', sbj='Uttar_Pradesh', obj='Lucknow') 0.999 KBTriple(rel='place_of_birth', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 0.999 KBTriple(rel='place_of_birth', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 0.999 KBTriple(rel='place_of_birth', sbj='Nepal', obj='Bagmati_Zone') 0.999 KBTriple(rel='place_of_birth', sbj='Bagmati_Zone', obj='Nepal') 0.998 KBTriple(rel='place_of_birth', sbj='Chengdu', obj='Sichuan') 0.998 KBTriple(rel='place_of_birth', sbj='Sichuan', obj='Chengdu') 0.998 KBTriple(rel='place_of_birth', 
sbj='San_Antonio', obj='Actor') 0.998 KBTriple(rel='place_of_birth', sbj='Actor', obj='San_Antonio') Highest probability examples for relation place_of_death: 1.000 KBTriple(rel='place_of_death', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='place_of_death', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='place_of_death', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='place_of_death', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='place_of_death', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='place_of_death', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='place_of_death', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Trajan') 1.000 KBTriple(rel='place_of_death', sbj='Trajan', obj='Roman_Empire') Highest probability examples for relation profession: 1.000 KBTriple(rel='profession', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='profession', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='profession', sbj='Little_Women', obj='Louisa_May_Alcott') 1.000 KBTriple(rel='profession', sbj='Louisa_May_Alcott', obj='Little_Women') 0.999 KBTriple(rel='profession', sbj='Aldous_Huxley', obj='Eyeless_in_Gaza') 0.999 KBTriple(rel='profession', sbj='Eyeless_in_Gaza', obj='Aldous_Huxley') 0.999 KBTriple(rel='profession', sbj='Jess_Margera', obj='April_Margera') 0.999 KBTriple(rel='profession', sbj='April_Margera', obj='Jess_Margera') 0.999 KBTriple(rel='profession', sbj='Actor', obj='Screenwriter') 0.999 KBTriple(rel='profession', sbj='Screenwriter', obj='Actor') Highest probability examples for relation worked_at: 1.000 KBTriple(rel='worked_at', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='worked_at', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='worked_at', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='worked_at', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='worked_at', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='worked_at', sbj='Stan_Lee', obj='Marvel_Comics') 1.000 KBTriple(rel='worked_at', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='worked_at', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='worked_at', sbj='Comic_book', obj='Marvel_Comics') 1.000 KBTriple(rel='worked_at', sbj='Marvel_Comics', obj='Comic_book') ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Fall 2020" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. 
We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) print(kbts_by_rel['adjoins'][:10]) print(f"True: {sum(labels_by_rel['adjoins'])} ... 
{len(labels_by_rel['adjoins'])}") print(featurized[0].keys()) #print(featurized[0]['adjoins']) #print(featurized[1]) ###Output [KBTriple(rel='adjoins', sbj='France', obj='Spain'), KBTriple(rel='adjoins', sbj='Thailand', obj='Laos'), KBTriple(rel='adjoins', sbj='Alberta', obj='Northwest_Territories'), KBTriple(rel='adjoins', sbj='County_Kilkenny', obj='County_Laois'), KBTriple(rel='adjoins', sbj='Tianjin', obj='Hebei'), KBTriple(rel='adjoins', sbj='Bavaria', obj='Thuringia'), KBTriple(rel='adjoins', sbj='Hispaniola', obj='Cuba'), KBTriple(rel='adjoins', sbj='Libya', obj='Egypt'), KBTriple(rel='adjoins', sbj='Solano_County', obj='Contra_Costa_County'), KBTriple(rel='adjoins', sbj='Jordan', obj='Saudi_Arabia')] True: 1702 ... 26442 dict_keys(['adjoins', 'author', 'capital', 'contains', 'film_performance', 'founders', 'genre', 'has_sibling', 'has_spouse', 'is_a', 'nationality', 'parents', 'place_of_birth', 'place_of_death', 'profession', 'worked_at']) ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train (by default, 'train'), and a model factory, which is a function which initializes an `sklearn` classifier (by default, a logistic regression classifier). It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) print(train_result.keys()) print(train_result['vectorizer']) ###Output dict_keys(['featurizers', 'vectorizer', 'models', 'all_relations', 'vectorize']) DictVectorizer() ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by relation). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') print(predictions) ###Output {'adjoins': array([ True, True, False, ..., False, False, False]), 'author': array([False, False, False, ..., False, False, False]), 'capital': array([False, True, False, ..., False, False, False]), 'contains': array([False, False, True, ..., False, False, False]), 'film_performance': array([False, False, True, ..., False, False, False]), 'founders': array([False, False, True, ..., False, False, False]), 'genre': array([False, False, True, ..., False, False, False]), 'has_sibling': array([False, False, False, ..., False, False, False]), 'has_spouse': array([ True, False, True, ..., False, False, False]), 'is_a': array([False, False, False, ..., False, False, False]), 'nationality': array([False, False, False, ..., False, False, False]), 'parents': array([False, False, False, ..., False, False, False]), 'place_of_birth': array([False, False, False, ..., False, False, False]), 'place_of_death': array([False, False, False, ..., False, False, False]), 'profession': array([False, True, True, ..., False, False, False]), 'worked_at': array([ True, False, False, ..., False, False, False])} ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. 
It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. ###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.820 0.403 0.679 407 7057 author 0.827 0.501 0.731 657 7307 capital 0.565 0.206 0.419 126 6776 contains 0.792 0.600 0.744 4487 11137 film_performance 0.812 0.588 0.755 984 7634 founders 0.816 0.416 0.684 469 7119 genre 0.577 0.146 0.363 205 6855 has_sibling 0.872 0.251 0.584 625 7275 has_spouse 0.901 0.340 0.677 754 7404 is_a 0.725 0.214 0.490 618 7268 nationality 0.616 0.179 0.414 386 7036 parents 0.870 0.531 0.771 390 7040 place_of_birth 0.641 0.209 0.454 282 6932 place_of_death 0.500 0.096 0.271 209 6859 profession 0.659 0.195 0.446 308 6958 worked_at 0.683 0.271 0.524 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.730 0.322 0.563 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. ###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.820 0.403 0.679 407 7057 author 0.827 0.501 0.731 657 7307 capital 0.565 0.206 0.419 126 6776 contains 0.792 0.600 0.744 4487 11137 film_performance 0.812 0.588 0.755 984 7634 founders 0.816 0.416 0.684 469 7119 genre 0.577 0.146 0.363 205 6855 has_sibling 0.872 0.251 0.584 625 7275 has_spouse 0.901 0.340 0.677 754 7404 is_a 0.725 0.214 0.490 618 7268 nationality 0.616 0.179 0.414 386 7036 parents 0.870 0.531 0.771 390 7040 place_of_birth 0.641 0.209 0.454 282 6932 place_of_death 0.500 0.096 0.271 209 6859 profession 0.659 0.195 0.446 308 6958 worked_at 0.683 0.271 0.524 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.730 0.322 0.563 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.511 Córdoba 2.467 Taluks 2.434 Valais ..... ..... -1.143 for -1.186 Egypt -1.277 America Highest and lowest feature weights for relation author: 3.055 author 3.032 books 2.342 by ..... ..... 
-2.002 directed -2.019 or -2.211 poetry Highest and lowest feature weights for relation capital: 3.922 capital 2.163 especially 2.155 city ..... ..... -1.238 and -1.263 being -1.959 borough Highest and lowest feature weights for relation contains: 2.768 bordered 2.716 third-largest 2.219 tiny ..... ..... -3.502 Midlands -3.954 Siege -3.969 destroyed Highest and lowest feature weights for relation film_performance: 4.004 starring 3.731 alongside 3.199 opposite ..... ..... -1.702 then -1.840 She -1.889 Genghis Highest and lowest feature weights for relation founders: 3.677 founded 3.276 founder 2.779 label ..... ..... -1.795 William -1.850 Griffith -1.854 Wilson Highest and lowest feature weights for relation genre: 3.092 series 2.800 game 2.622 album ..... ..... -1.296 animated -1.434 and -1.949 at Highest and lowest feature weights for relation has_sibling: 5.196 brother 3.933 sister 2.747 nephew ..... ..... -1.293 ' -1.312 from -1.437 including Highest and lowest feature weights for relation has_spouse: 5.319 wife 4.652 married 4.617 husband ..... ..... -1.528 between -1.559 MTV -1.599 Terri Highest and lowest feature weights for relation is_a: 3.182 family 2.898 philosopher 2.623 ..... ..... -1.411 now -1.441 beans -1.618 at Highest and lowest feature weights for relation nationality: 2.887 born 1.933 president 1.843 caliph ..... ..... -1.467 or -1.540 ; -1.729 American Highest and lowest feature weights for relation parents: 5.108 son 4.437 father 4.400 daughter ..... ..... -1.053 a -1.070 England -1.210 in Highest and lowest feature weights for relation place_of_birth: 3.980 born 2.843 birthplace 2.702 mayor ..... ..... -1.276 Mughal -1.392 or -1.426 and Highest and lowest feature weights for relation place_of_death: 2.161 assassinated 2.027 died 1.837 Germany ..... ..... -1.246 ; -1.256 as -1.474 Siege Highest and lowest feature weights for relation profession: 3.148 2.727 American 2.635 philosopher ..... ..... -1.212 at -1.348 in -1.986 on Highest and lowest feature weights for relation worked_at: 3.107 president 2.913 head 2.743 professor ..... ..... -1.134 province -1.150 author -1.714 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. 
Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are *true* but *absent from the KB*, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. ###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Australia', obj='Sydney') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='Australia') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='adjoins', sbj='United_Arab_Emirates', obj='Dubai') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='adjoins', sbj='New_South_Wales', obj='Sydney') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj="Euclid's_Elements", obj='Euclid') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe') 1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Bandung', obj='West_Java') 1.000 KBTriple(rel='capital', sbj='West_Java', obj='Bandung') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Delhi', obj='India') 1.000 KBTriple(rel='contains', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand') 1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='contains', sbj='India', obj='Delhi') 1.000 KBTriple(rel='contains', sbj='Uttarakhand', obj='India') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') 1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines') 1.000 KBTriple(rel='contains', 
sbj='Canary_Islands', obj='Tenerife') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='film_performance', sbj='Homer', obj='Iliad') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') Highest probability examples for relation genre: 1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.997 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.991 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.991 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.981 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize') 0.981 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='has_sibling', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', 
sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='nationality', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district') 1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu') Highest probability examples for relation parents: 1.000 KBTriple(rel='parents', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='parents', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='parents', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='parents', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='parents', sbj='Anne_Boleyn', obj='Thomas_Boleyn,_1st_Earl_of_Wiltshire') 1.000 KBTriple(rel='parents', sbj='Thomas_Boleyn,_1st_Earl_of_Wiltshire', obj='Anne_Boleyn') 1.000 KBTriple(rel='parents', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='parents', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='parents', sbj='Saddam_Hussein', obj='Uday_Hussein') 1.000 KBTriple(rel='parents', sbj='Uday_Hussein', obj='Saddam_Hussein') Highest probability examples for relation place_of_birth: 1.000 KBTriple(rel='place_of_birth', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='place_of_birth', sbj='Uttar_Pradesh', obj='Lucknow') 0.999 KBTriple(rel='place_of_birth', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 0.999 KBTriple(rel='place_of_birth', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 0.999 KBTriple(rel='place_of_birth', sbj='Nepal', obj='Bagmati_Zone') 0.999 KBTriple(rel='place_of_birth', sbj='Bagmati_Zone', obj='Nepal') 0.998 KBTriple(rel='place_of_birth', sbj='Chengdu', obj='Sichuan') 0.998 KBTriple(rel='place_of_birth', sbj='Sichuan', obj='Chengdu') 0.998 KBTriple(rel='place_of_birth', 
sbj='San_Antonio', obj='Actor') 0.998 KBTriple(rel='place_of_birth', sbj='Actor', obj='San_Antonio') Highest probability examples for relation place_of_death: 1.000 KBTriple(rel='place_of_death', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='place_of_death', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='place_of_death', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='place_of_death', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='place_of_death', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='place_of_death', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='place_of_death', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Trajan') 1.000 KBTriple(rel='place_of_death', sbj='Trajan', obj='Roman_Empire') Highest probability examples for relation profession: 1.000 KBTriple(rel='profession', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='profession', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='profession', sbj='Little_Women', obj='Louisa_May_Alcott') 1.000 KBTriple(rel='profession', sbj='Louisa_May_Alcott', obj='Little_Women') 0.999 KBTriple(rel='profession', sbj='Aldous_Huxley', obj='Eyeless_in_Gaza') 0.999 KBTriple(rel='profession', sbj='Eyeless_in_Gaza', obj='Aldous_Huxley') 0.999 KBTriple(rel='profession', sbj='Jess_Margera', obj='April_Margera') 0.999 KBTriple(rel='profession', sbj='April_Margera', obj='Jess_Margera') 0.999 KBTriple(rel='profession', sbj='Actor', obj='Screenwriter') 0.999 KBTriple(rel='profession', sbj='Screenwriter', obj='Actor') Highest probability examples for relation worked_at: 1.000 KBTriple(rel='worked_at', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='worked_at', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='worked_at', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='worked_at', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='worked_at', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='worked_at', sbj='Stan_Lee', obj='Marvel_Comics') 1.000 KBTriple(rel='worked_at', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='worked_at', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='worked_at', sbj='Comic_book', obj='Marvel_Comics') 1.000 KBTriple(rel='worked_at', sbj='Marvel_Comics', obj='Comic_book') ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Fall 2020" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. 
We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt ex = corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0] ex ex.middle " ".join((ex.left, ex.entity_1, ex.middle, ex.entity_2, ex.right)) simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. 
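Concretely, each candidate `KBTriple` yields a `Counter` of features (built by the featurizers above), and a scikit-learn `DictVectorizer` turns the collection of counters into a sparse matrix with one row per triple and one column per feature. The toy cell below sketches just that conversion step on made-up counters; it is not the `rel_ext` pipeline itself, only an illustration of what `DictVectorizer` does. ###Code
from collections import Counter
from sklearn.feature_extraction import DictVectorizer

# Hypothetical feature counters for two candidate triples (made-up words and counts).
feature_counters = [
    Counter({'born': 2, 'in': 1}),
    Counter({'founded': 1, 'in': 3}),
]

vec = DictVectorizer(sparse=True)
X = vec.fit_transform(feature_counters)   # 2 x 3 sparse matrix

print(vec.feature_names_)   # column labels, sorted: ['born', 'founded', 'in']
print(X.toarray())          # dense view of the two rows
###Output _____no_output_____ ###Markdown In the real pipeline, `dataset.featurize` performs this conversion for every relation at once.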
The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) print(type(kbts_by_rel)) print(kbts_by_rel.keys()) kbts_by_rel['profession'][0] labels_by_rel.keys() labels_by_rel['profession'][0] ###Output _____no_output_____ ###Markdown We see that `kbts_by_rel` and `labels_by_rel` are defaultdicts keyed by the relations considered. Each value of `kbts_by_rel` is a list of `KBTriple`s, and the corresponding value of `labels_by_rel` is a list of their labels (True or False). ###Code print(type(featurized)) print('Featurized is a tuple of size {}'.format(len(featurized))) print(type(featurized[0])) print(featurized[0]['profession'].shape) featurized[0]['profession'] featurized[1] ###Output _____no_output_____ ###Markdown `featurized` is a 2-tuple whose first element is a defaultdict: its keys are the relations considered and its values are feature matrices. These matrices are produced by scikit-learn's `DictVectorizer`: the columns correspond to the words found in the corpus, the rows to the `KBTriple`s, and each value is the count of that word for that triple, as determined by the list of featurizers. ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train (by default, 'train'), and a model factory, which is a function which initializes an `sklearn` classifier (by default, a logistic regression classifier). It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by relation). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score.
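One detail worth keeping in mind when reading the table below: the macro-average gives every relation equal weight regardless of its support, so a small relation like `capital` influences the summary number as much as the huge `contains` relation. The toy cell below contrasts a macro-average with a support-weighted average on hypothetical per-relation scores; the numbers are made up for illustration and are not taken from the runs in this notebook. ###Code
# Hypothetical per-relation F0.5-scores and supports (illustration only).
f_scores = {'contains': 0.74, 'capital': 0.46, 'genre': 0.38}
supports = {'contains': 4487, 'capital': 126, 'genre': 205}

macro = sum(f_scores.values()) / len(f_scores)
weighted = (sum(f_scores[rel] * supports[rel] for rel in f_scores)
            / sum(supports.values()))

print(f"macro-averaged F0.5:   {macro:.3f}")
print(f"support-weighted F0.5: {weighted:.3f}")
###Output _____no_output_____ ###Markdown Here is the dev-set evaluation for this run.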
###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.855 0.391 0.691 407 7057 author 0.780 0.528 0.712 657 7307 capital 0.617 0.230 0.462 126 6776 contains 0.783 0.595 0.737 4487 11137 film_performance 0.782 0.595 0.736 984 7634 founders 0.807 0.409 0.676 469 7119 genre 0.582 0.156 0.376 205 6855 has_sibling 0.825 0.250 0.565 625 7275 has_spouse 0.852 0.337 0.653 754 7404 is_a 0.668 0.238 0.491 618 7268 nationality 0.570 0.148 0.363 386 7036 parents 0.858 0.544 0.769 390 7040 place_of_birth 0.699 0.206 0.472 282 6932 place_of_death 0.568 0.100 0.294 209 6859 profession 0.593 0.208 0.432 308 6958 worked_at 0.712 0.261 0.529 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.722 0.325 0.560 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. ###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.855 0.391 0.691 407 7057 author 0.780 0.528 0.712 657 7307 capital 0.617 0.230 0.462 126 6776 contains 0.783 0.595 0.737 4487 11137 film_performance 0.782 0.595 0.736 984 7634 founders 0.807 0.409 0.676 469 7119 genre 0.582 0.156 0.376 205 6855 has_sibling 0.825 0.250 0.565 625 7275 has_spouse 0.852 0.337 0.653 754 7404 is_a 0.668 0.238 0.491 618 7268 nationality 0.570 0.148 0.363 386 7036 parents 0.858 0.544 0.769 390 7040 place_of_birth 0.699 0.206 0.472 282 6932 place_of_death 0.568 0.100 0.294 209 6859 profession 0.593 0.208 0.432 308 6958 worked_at 0.712 0.261 0.529 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.722 0.325 0.560 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.511 Córdoba 2.467 Taluks 2.434 Valais ..... ..... -1.143 for -1.186 Egypt -1.277 America Highest and lowest feature weights for relation author: 3.055 author 3.032 books 2.342 by ..... ..... -2.002 directed -2.019 or -2.211 poetry Highest and lowest feature weights for relation capital: 3.922 capital 2.163 especially 2.155 city ..... ..... 
-1.238 and -1.263 being -1.959 borough Highest and lowest feature weights for relation contains: 2.768 bordered 2.716 third-largest 2.219 tiny ..... ..... -3.502 Midlands -3.954 Siege -3.969 destroyed Highest and lowest feature weights for relation film_performance: 4.004 starring 3.731 alongside 3.199 opposite ..... ..... -1.702 then -1.840 She -1.889 Genghis Highest and lowest feature weights for relation founders: 3.677 founded 3.276 founder 2.779 label ..... ..... -1.795 William -1.850 Griffith -1.854 Wilson Highest and lowest feature weights for relation genre: 3.092 series 2.800 game 2.622 album ..... ..... -1.296 animated -1.434 and -1.949 at Highest and lowest feature weights for relation has_sibling: 5.196 brother 3.933 sister 2.747 nephew ..... ..... -1.293 ' -1.312 from -1.437 including Highest and lowest feature weights for relation has_spouse: 5.319 wife 4.652 married 4.617 husband ..... ..... -1.528 between -1.559 MTV -1.599 Terri Highest and lowest feature weights for relation is_a: 3.182 family 2.898 philosopher 2.623 ..... ..... -1.411 now -1.441 beans -1.618 at Highest and lowest feature weights for relation nationality: 2.887 born 1.933 president 1.843 caliph ..... ..... -1.467 or -1.540 ; -1.729 American Highest and lowest feature weights for relation parents: 5.108 son 4.437 father 4.400 daughter ..... ..... -1.053 a -1.070 England -1.210 in Highest and lowest feature weights for relation place_of_birth: 3.980 born 2.843 birthplace 2.702 mayor ..... ..... -1.276 Mughal -1.392 or -1.426 and Highest and lowest feature weights for relation place_of_death: 2.161 assassinated 2.027 died 1.837 Germany ..... ..... -1.246 ; -1.256 as -1.474 Siege Highest and lowest feature weights for relation profession: 3.148 2.727 American 2.635 philosopher ..... ..... -1.212 at -1.348 in -1.986 on Highest and lowest feature weights for relation worked_at: 3.107 president 2.913 head 2.743 professor ..... ..... -1.134 province -1.150 author -1.714 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. 
Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are *true* but *absent from the KB*, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. ###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Australia', obj='Sydney') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='Australia') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='adjoins', sbj='United_Arab_Emirates', obj='Dubai') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='adjoins', sbj='New_South_Wales', obj='Sydney') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj="Euclid's_Elements", obj='Euclid') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe') 1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Bandung', obj='West_Java') 1.000 KBTriple(rel='capital', sbj='West_Java', obj='Bandung') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Delhi', obj='India') 1.000 KBTriple(rel='contains', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand') 1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='contains', sbj='India', obj='Delhi') 1.000 KBTriple(rel='contains', sbj='Uttarakhand', obj='India') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') 1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines') 1.000 KBTriple(rel='contains', 
sbj='Canary_Islands', obj='Tenerife') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='film_performance', sbj='Homer', obj='Iliad') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') Highest probability examples for relation genre: 1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.997 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.991 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.991 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.981 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize') 0.981 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='has_sibling', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', 
sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='nationality', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district') 1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu') Highest probability examples for relation parents: ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Spring 2020" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. 
utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) len(kbts_by_rel['adjoins']) featurized[0] featurized[1] ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train (by default, 'train'), and a model factory, which is a function which initializes an `sklearn` classifier (by default, a logistic regression classifier). It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. 
This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by relation). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. ###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.821 0.383 0.668 407 7057 author 0.809 0.530 0.732 657 7307 capital 0.659 0.214 0.466 126 6776 contains 0.787 0.601 0.741 4487 11137 film_performance 0.815 0.586 0.756 984 7634 founders 0.793 0.401 0.663 469 7119 genre 0.547 0.171 0.380 205 6855 has_sibling 0.839 0.259 0.580 625 7275 has_spouse 0.849 0.342 0.655 754 7404 is_a 0.634 0.233 0.472 618 7268 nationality 0.577 0.145 0.362 386 7036 parents 0.829 0.533 0.746 390 7040 place_of_birth 0.640 0.202 0.447 282 6932 place_of_death 0.488 0.100 0.276 209 6859 profession 0.582 0.208 0.428 308 6958 worked_at 0.692 0.267 0.525 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.710 0.324 0.556 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. ###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.821 0.383 0.668 407 7057 author 0.809 0.530 0.732 657 7307 capital 0.659 0.214 0.466 126 6776 contains 0.787 0.601 0.741 4487 11137 film_performance 0.815 0.586 0.756 984 7634 founders 0.793 0.401 0.663 469 7119 genre 0.547 0.171 0.380 205 6855 has_sibling 0.839 0.259 0.580 625 7275 has_spouse 0.849 0.342 0.655 754 7404 is_a 0.634 0.233 0.472 618 7268 nationality 0.577 0.145 0.362 386 7036 parents 0.829 0.533 0.746 390 7040 place_of_birth 0.640 0.202 0.447 282 6932 place_of_death 0.488 0.100 0.276 209 6859 profession 0.582 0.208 0.428 308 6958 worked_at 0.692 0.267 0.525 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.710 0.324 0.556 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. 
Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.511 Córdoba 2.467 Taluks 2.434 Valais ..... ..... -1.143 for -1.186 Egypt -1.277 America Highest and lowest feature weights for relation author: 3.055 author 3.032 books 2.342 by ..... ..... -2.002 directed -2.019 or -2.211 poetry Highest and lowest feature weights for relation capital: 3.922 capital 2.163 especially 2.155 city ..... ..... -1.238 and -1.263 being -1.959 borough Highest and lowest feature weights for relation contains: 2.768 bordered 2.716 third-largest 2.219 tiny ..... ..... -3.502 Midlands -3.954 Siege -3.969 destroyed Highest and lowest feature weights for relation film_performance: 4.004 starring 3.731 alongside 3.199 opposite ..... ..... -1.702 then -1.840 She -1.889 Genghis Highest and lowest feature weights for relation founders: 3.677 founded 3.276 founder 2.779 label ..... ..... -1.795 William -1.850 Griffith -1.854 Wilson Highest and lowest feature weights for relation genre: 3.092 series 2.800 game 2.622 album ..... ..... -1.296 animated -1.434 and -1.949 at Highest and lowest feature weights for relation has_sibling: 5.196 brother 3.933 sister 2.747 nephew ..... ..... -1.293 ' -1.312 from -1.437 including Highest and lowest feature weights for relation has_spouse: 5.319 wife 4.652 married 4.617 husband ..... ..... -1.528 between -1.559 MTV -1.599 Terri Highest and lowest feature weights for relation is_a: 3.182 family 2.898 philosopher 2.623 ..... ..... -1.411 now -1.441 beans -1.618 at Highest and lowest feature weights for relation nationality: 2.887 born 1.933 president 1.843 caliph ..... ..... -1.467 or -1.540 ; -1.729 American Highest and lowest feature weights for relation parents: 5.108 son 4.437 father 4.400 daughter ..... ..... -1.053 a -1.070 England -1.210 in Highest and lowest feature weights for relation place_of_birth: 3.980 born 2.843 birthplace 2.702 mayor ..... ..... -1.276 Mughal -1.392 or -1.426 and Highest and lowest feature weights for relation place_of_death: 2.161 assassinated 2.027 died 1.837 Germany ..... ..... -1.246 ; -1.256 as -1.474 Siege Highest and lowest feature weights for relation profession: 3.148 2.727 American 2.635 philosopher ..... ..... -1.212 at -1.348 in -1.986 on Highest and lowest feature weights for relation worked_at: 3.107 president 2.913 head 2.743 professor ..... ..... -1.134 province -1.150 author -1.714 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. 
Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are *true* but *absent from the KB*, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. ###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Australia', obj='Sydney') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='Australia') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='adjoins', sbj='United_Arab_Emirates', obj='Dubai') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='adjoins', sbj='New_South_Wales', obj='Sydney') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj="Euclid's_Elements", obj='Euclid') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe') 1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Bandung', obj='West_Java') 1.000 
KBTriple(rel='capital', sbj='West_Java', obj='Bandung') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Delhi', obj='India') 1.000 KBTriple(rel='contains', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand') 1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='contains', sbj='India', obj='Delhi') 1.000 KBTriple(rel='contains', sbj='Uttarakhand', obj='India') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') 1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines') 1.000 KBTriple(rel='contains', sbj='Canary_Islands', obj='Tenerife') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='film_performance', sbj='Homer', obj='Iliad') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') Highest probability examples for relation genre: 1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.997 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.991 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.991 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.981 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize') 0.981 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 
KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='has_sibling', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='nationality', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district') 1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu') Highest probability examples for relation parents: ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Fall 2020" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. 
We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. 
We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train (by default, 'train'), and a model factory, which is a function which initializes an `sklearn` classifier (by default, a logistic regression classifier). It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by relation). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. ###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. 
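Under the hood, that chaining amounts to something like the following sketch, written purely in terms of the three functions introduced above (an illustration, not the actual `rel_ext` source): ###Code
# Rough sketch of what rel_ext.experiment() automates (illustration only).
def baseline_experiment(splits, featurizers):
    train_result = rel_ext.train_models(splits, featurizers=featurizers)
    predictions, true_labels = rel_ext.predict(
        splits, train_result, split_name='dev')
    rel_ext.evaluate_predictions(predictions, true_labels)
    # Like rel_ext.experiment(), return the trained models for later analysis:
    return train_result
###Output _____no_output_____ ###Markdown Here is the convenience function itself: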
###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.511 Córdoba 2.467 Taluks 2.434 Valais ..... ..... -1.143 for -1.186 Egypt -1.277 America Highest and lowest feature weights for relation author: 3.055 author 3.032 books 2.342 by ..... ..... -2.002 directed -2.019 or -2.211 poetry Highest and lowest feature weights for relation capital: 3.922 capital 2.163 especially 2.155 city ..... ..... -1.238 and -1.263 being -1.959 borough Highest and lowest feature weights for relation contains: 2.768 bordered 2.716 third-largest 2.219 tiny ..... ..... -3.502 Midlands -3.954 Siege -3.969 destroyed Highest and lowest feature weights for relation film_performance: 4.004 starring 3.731 alongside 3.199 opposite ..... ..... -1.702 then -1.840 She -1.889 Genghis Highest and lowest feature weights for relation founders: 3.677 founded 3.276 founder 2.779 label ..... ..... -1.795 William -1.850 Griffith -1.854 Wilson Highest and lowest feature weights for relation genre: 3.092 series 2.800 game 2.622 album ..... ..... -1.296 animated -1.434 and -1.949 at Highest and lowest feature weights for relation has_sibling: 5.196 brother 3.933 sister 2.747 nephew ..... ..... -1.293 ' -1.312 from -1.437 including Highest and lowest feature weights for relation has_spouse: 5.319 wife 4.652 married 4.617 husband ..... ..... -1.528 between -1.559 MTV -1.599 Terri Highest and lowest feature weights for relation is_a: 3.182 family 2.898 philosopher 2.623 ..... ..... -1.411 now -1.441 beans -1.618 at Highest and lowest feature weights for relation nationality: 2.887 born 1.933 president 1.843 caliph ..... ..... -1.467 or -1.540 ; -1.729 American Highest and lowest feature weights for relation parents: 5.108 son 4.437 father 4.400 daughter ..... 
..... -1.053 a -1.070 England -1.210 in Highest and lowest feature weights for relation place_of_birth: 3.980 born 2.843 birthplace 2.702 mayor ..... ..... -1.276 Mughal -1.392 or -1.426 and Highest and lowest feature weights for relation place_of_death: 2.161 assassinated 2.027 died 1.837 Germany ..... ..... -1.246 ; -1.256 as -1.474 Siege Highest and lowest feature weights for relation profession: 3.148 2.727 American 2.635 philosopher ..... ..... -1.212 at -1.348 in -1.986 on Highest and lowest feature weights for relation worked_at: 3.107 president 2.913 head 2.743 professor ..... ..... -1.134 province -1.150 author -1.714 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are *true* but *absent from the KB*, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. 
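Since the final judgment is manual, it helps to be able to pull up the corpus evidence for any candidate pair. A small helper along these lines (hypothetical, not part of `rel_ext`) needs only the corpus methods we have already used: ###Code
def show_candidate_evidence(corpus, sbj, obj, limit=5):
    # Hypothetical helper for manual vetting: print the phrases that appear
    # between the two entity mentions, in both directions.
    for s, o in [(sbj, obj), (obj, sbj)]:
        for ex in corpus.get_examples_for_entities(s, o)[:limit]:
            print('{} ... {} ... {}'.format(s, ex.middle, o))

# For example, to eyeball a proposed 'adjoins' triple:
# show_candidate_evidence(corpus, 'Canada', 'Vancouver')
###Output _____no_output_____ ###Markdown With that in hand, let's generate the proposals: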
###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Australia', obj='Sydney') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='Australia') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='adjoins', sbj='United_Arab_Emirates', obj='Dubai') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='adjoins', sbj='New_South_Wales', obj='Sydney') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj="Euclid's_Elements", obj='Euclid') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe') 1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Bandung', obj='West_Java') 1.000 KBTriple(rel='capital', sbj='West_Java', obj='Bandung') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Delhi', obj='India') 1.000 KBTriple(rel='contains', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand') 1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='contains', sbj='India', obj='Delhi') 1.000 KBTriple(rel='contains', sbj='Uttarakhand', obj='India') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') 1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines') 1.000 KBTriple(rel='contains', sbj='Canary_Islands', obj='Tenerife') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', 
obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='film_performance', sbj='Homer', obj='Iliad') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') Highest probability examples for relation genre: 1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.997 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.991 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.991 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.981 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize') 0.981 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='has_sibling', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 
KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='nationality', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district') 1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu') Highest probability examples for relation parents: ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Spring 2020" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). 
###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train (by default, 'train'), and a model factory, which is a function which initializes an `sklearn` classifier (by default, a logistic regression classifier). It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by relation).
###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. ###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. ###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.832 0.378 0.671 407 7057 author 0.779 0.525 0.710 657 7307 capital 0.638 0.294 0.517 126 6776 contains 0.783 0.608 0.740 4487 11137 film_performance 0.796 0.591 0.745 984 7634 founders 0.783 0.384 0.648 469 7119 genre 0.654 0.166 0.412 205 6855 has_sibling 0.865 0.246 0.576 625 7275 has_spouse 0.878 0.342 0.668 754 7404 is_a 0.731 0.238 0.517 618 7268 nationality 0.555 0.171 0.383 386 7036 parents 0.862 0.544 0.771 390 7040 place_of_birth 0.637 0.206 0.449 282 6932 place_of_death 0.512 0.100 0.282 209 6859 profession 0.716 0.205 0.477 308 6958 worked_at 0.688 0.254 0.513 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.732 0.328 0.567 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? 
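###Markdown Before reaching for the library helper below, here is a rough sketch of how this inspection could be done by hand: pair the vectorizer's feature names with each relation classifier's coefficients and sort. The keys `'vectorizer'` and `'models'` in `train_result`, and the presence of a `coef_` attribute on each model, are assumptions here rather than guarantees of `rel_ext`.
###Code
# Sketch only: print the k strongest positive and negative features per relation.
# Assumes train_result['vectorizer'] and train_result['models'] exist.
import numpy as np

def peek_weights(train_result, k=3):
    vectorizer = train_result['vectorizer']             # assumed key
    try:
        names = np.array(vectorizer.get_feature_names_out())
    except AttributeError:                               # older scikit-learn
        names = np.array(vectorizer.get_feature_names())
    for rel, model in train_result['models'].items():   # assumed key
        order = np.argsort(model.coef_[0])               # ascending by weight
        print(rel)
        print('  highest:', ', '.join(names[order[-k:]][::-1]))
        print('  lowest: ', ', '.join(names[order[:k]]))

# peek_weights(train_result)
###Output _____no_output_____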
###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.511 Córdoba 2.467 Taluks 2.434 Valais ..... ..... -1.143 for -1.186 Egypt -1.277 America Highest and lowest feature weights for relation author: 3.055 author 3.032 books 2.342 by ..... ..... -2.002 directed -2.019 or -2.211 poetry Highest and lowest feature weights for relation capital: 3.922 capital 2.163 especially 2.155 city ..... ..... -1.238 and -1.263 being -1.959 borough Highest and lowest feature weights for relation contains: 2.768 bordered 2.716 third-largest 2.219 tiny ..... ..... -3.502 Midlands -3.954 Siege -3.969 destroyed Highest and lowest feature weights for relation film_performance: 4.004 starring 3.731 alongside 3.199 opposite ..... ..... -1.702 then -1.840 She -1.889 Genghis Highest and lowest feature weights for relation founders: 3.677 founded 3.276 founder 2.779 label ..... ..... -1.795 William -1.850 Griffith -1.854 Wilson Highest and lowest feature weights for relation genre: 3.092 series 2.800 game 2.622 album ..... ..... -1.296 animated -1.434 and -1.949 at Highest and lowest feature weights for relation has_sibling: 5.196 brother 3.933 sister 2.747 nephew ..... ..... -1.293 ' -1.312 from -1.437 including Highest and lowest feature weights for relation has_spouse: 5.319 wife 4.652 married 4.617 husband ..... ..... -1.528 between -1.559 MTV -1.599 Terri Highest and lowest feature weights for relation is_a: 3.182 family 2.898 philosopher 2.623 ..... ..... -1.411 now -1.441 beans -1.618 at Highest and lowest feature weights for relation nationality: 2.887 born 1.933 president 1.843 caliph ..... ..... -1.467 or -1.540 ; -1.729 American Highest and lowest feature weights for relation parents: 5.108 son 4.437 father 4.400 daughter ..... ..... -1.053 a -1.070 England -1.210 in Highest and lowest feature weights for relation place_of_birth: 3.980 born 2.843 birthplace 2.702 mayor ..... ..... -1.276 Mughal -1.392 or -1.426 and Highest and lowest feature weights for relation place_of_death: 2.161 assassinated 2.027 died 1.837 Germany ..... ..... -1.246 ; -1.256 as -1.474 Siege Highest and lowest feature weights for relation profession: 3.148 2.727 American 2.635 philosopher ..... ..... -1.212 at -1.348 in -1.986 on Highest and lowest feature weights for relation worked_at: 3.107 president 2.913 head 2.743 professor ..... ..... -1.134 province -1.150 author -1.714 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. 
Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are *true* but *absent from the KB*, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. ###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Australia', obj='Sydney') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='Australia') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='adjoins', sbj='United_Arab_Emirates', obj='Dubai') 1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='adjoins', sbj='New_South_Wales', obj='Sydney') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice') 1.000 KBTriple(rel='author', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri') 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen') 1.000 KBTriple(rel='author', sbj="Euclid's_Elements", obj='Euclid') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe') 1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Bandung', obj='West_Java') 1.000 
KBTriple(rel='capital', sbj='West_Java', obj='Bandung') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Delhi', obj='India') 1.000 KBTriple(rel='contains', sbj='Dubai', obj='United_Arab_Emirates') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand') 1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='contains', sbj='India', obj='Delhi') 1.000 KBTriple(rel='contains', sbj='Uttarakhand', obj='India') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') 1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines') 1.000 KBTriple(rel='contains', sbj='Canary_Islands', obj='Tenerife') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='film_performance', sbj='Homer', obj='Iliad') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee') 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics') Highest probability examples for relation genre: 1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens') 1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist') 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.997 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.997 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.991 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.991 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.981 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize') 0.981 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 
KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='has_sibling', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great') 1.000 KBTriple(rel='nationality', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district') 1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu') Highest probability examples for relation parents: ###Markdown Relation extraction using distant supervision: experiments ###Code __author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Spring 2020" ###Output _____no_output_____ ###Markdown Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Building a classifier](Building-a-classifier) 1. [Featurizers](Featurizers) 1. [Experiments](Experiments)1. [Analysis](Analysis) 1. [Examining the trained models](Examining-the-trained-models) 1. [Discovering new relation instances](Discovering-new-relation-instances) OverviewOK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. 
We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions. ###Code from collections import Counter import os import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') ###Output _____no_output_____ ###Markdown With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb). ###Code corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) dataset = rel_ext.Dataset(corpus, kb) ###Output _____no_output_____ ###Markdown The following code splits up our data in a way that supports experimentation: ###Code splits = dataset.build_splits() splits ###Output _____no_output_____ ###Markdown Building a classifier FeaturizersFeaturizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples. ###Code def simple_bag_of_words_featurizer(kbt, corpus, feature_counter): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): for word in ex.middle.split(' '): feature_counter[word] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): for word in ex.middle.split(' '): feature_counter[word] += 1 return feature_counter ###Output _____no_output_____ ###Markdown Here's how this featurizer works on a single example: ###Code kbt = kb.kb_triples[0] kbt corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter()) ###Output _____no_output_____ ###Markdown You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: ###Code kbts_by_rel, labels_by_rel = dataset.build_dataset() featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer]) kbts_by_rel['capital'][-10:] ###Output _____no_output_____ ###Markdown ExperimentsNow we need some functions to train models, make predictions, and evaluate the results. 
We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train, and a model factory, which is a function which initializes an `sklearn` classifier. It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation. ###Code train_result = rel_ext.train_models( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output _____no_output_____ ###Markdown Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by prediction). ###Code predictions, true_labels = rel_ext.predict( splits, train_result, split_name='dev') ###Output _____no_output_____ ###Markdown Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score. ###Code rel_ext.evaluate_predictions(predictions, true_labels) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.860 0.364 0.676 407 7057 author 0.800 0.505 0.716 657 7307 capital 0.653 0.254 0.497 126 6776 contains 0.780 0.605 0.738 4487 11137 film_performance 0.794 0.568 0.736 984 7634 founders 0.803 0.407 0.672 469 7119 genre 0.623 0.161 0.396 205 6855 has_sibling 0.842 0.246 0.567 625 7275 has_spouse 0.864 0.345 0.664 754 7404 is_a 0.649 0.233 0.478 618 7268 nationality 0.622 0.192 0.429 386 7036 parents 0.860 0.536 0.767 390 7040 place_of_birth 0.663 0.209 0.462 282 6932 place_of_death 0.611 0.105 0.312 209 6859 profession 0.588 0.162 0.386 308 6958 worked_at 0.741 0.264 0.544 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.735 0.322 0.565 11210 117610 ###Markdown Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result. Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models. 
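###Markdown As a point of reference, the call below is conceptually just the chain we walked through by hand above — train, predict on the dev split, evaluate — with the training artifacts returned at the end. (This sketch reuses the exact calls shown earlier; `rel_ext.experiment()` itself may take additional options that are not spelled out here.)
###Code
# Rough manual equivalent of rel_ext.experiment(), built from the pieces above.
train_result = rel_ext.train_models(
    splits,
    featurizers=[simple_bag_of_words_featurizer])
predictions, true_labels = rel_ext.predict(
    splits, train_result, split_name='dev')
rel_ext.evaluate_predictions(predictions, true_labels)
# As with rel_ext.experiment(), train_result is the value we keep around.
###Output _____no_output_____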
###Code _ = rel_ext.experiment( splits, featurizers=[simple_bag_of_words_featurizer]) ###Output relation precision recall f-score support size ------------------ --------- --------- --------- --------- --------- adjoins 0.860 0.364 0.676 407 7057 author 0.800 0.505 0.716 657 7307 capital 0.653 0.254 0.497 126 6776 contains 0.780 0.605 0.738 4487 11137 film_performance 0.794 0.568 0.736 984 7634 founders 0.803 0.407 0.672 469 7119 genre 0.623 0.161 0.396 205 6855 has_sibling 0.842 0.246 0.567 625 7275 has_spouse 0.864 0.345 0.664 754 7404 is_a 0.649 0.233 0.478 618 7268 nationality 0.622 0.192 0.429 386 7036 parents 0.860 0.536 0.767 390 7040 place_of_birth 0.663 0.209 0.462 282 6932 place_of_death 0.611 0.105 0.312 209 6859 profession 0.588 0.162 0.386 308 6958 worked_at 0.741 0.264 0.544 303 6953 ------------------ --------- --------- --------- --------- --------- macro-average 0.735 0.322 0.565 11210 117610 ###Markdown Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynbA-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down. Analysis Examining the trained modelsOne important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators? ###Code rel_ext.examine_model_weights(train_result) ###Output Highest and lowest feature weights for relation adjoins: 2.520 Córdoba 2.498 Taluks 2.436 Valais ..... ..... -1.175 century -1.385 he -1.407 who Highest and lowest feature weights for relation author: 2.700 books 2.540 writer 2.507 wrote ..... ..... -2.091 famed -3.137 1818 -5.516 dystopian Highest and lowest feature weights for relation capital: 3.431 capital 1.988 especially 1.911 city ..... ..... -1.096 and -1.220 century -1.355 Westminster Highest and lowest feature weights for relation contains: 2.889 bordered 2.083 districts 2.081 third-largest ..... ..... -2.430 film -2.453 Brooklyn -3.098 6th Highest and lowest feature weights for relation film_performance: 3.976 starring 3.859 opposite 3.402 movie ..... ..... -2.054 Tamil -2.212 Iruvar -3.489 Mohabbatein Highest and lowest feature weights for relation founders: 4.076 founder 3.540 co-founder 3.510 founded ..... ..... -1.699 philosopher -1.706 band -1.813 novel Highest and lowest feature weights for relation genre: 3.291 series 2.745 game 2.384 movie ..... ..... -1.462 and -1.709 animated -1.773 at Highest and lowest feature weights for relation has_sibling: 5.261 brother 3.839 sister 2.919 nephew ..... ..... -1.394 Her -1.462 including -1.476 alongside Highest and lowest feature weights for relation has_spouse: 5.296 wife 4.461 married 4.457 husband ..... ..... -1.487 including -1.528 Dennis -2.024 alongside Highest and lowest feature weights for relation is_a: 2.757 philosopher 2.657 family 2.575 genus ..... ..... -1.506 at -1.532 closely -2.008 Texas Highest and lowest feature weights for relation nationality: 2.963 born 2.013 Set 1.987 ruler ..... ..... 
-1.436 which -1.547 region -1.581 American Highest and lowest feature weights for relation parents: 4.619 son 4.551 daughter 4.135 father ..... ..... -1.536 Mehta -1.588 filmmaker -1.868 Germany Highest and lowest feature weights for relation place_of_birth: 3.999 born 3.034 birthplace 2.803 mayor ..... ..... -1.365 American -1.521 and -2.109 Oldham Highest and lowest feature weights for relation place_of_death: 2.267 died 1.976 assassinated 1.643 prominent ..... ..... -1.203 ” -1.282 that -1.679 Westminster Highest and lowest feature weights for relation profession: 2.780 2.652 philosopher 2.437 American ..... ..... -1.445 from -2.083 Texas -2.112 on Highest and lowest feature weights for relation worked_at: 3.060 CEO 2.913 professor 2.857 president ..... ..... -1.280 region -1.513 critique -1.601 or ###Markdown By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the `author` relation.)__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?<!--- SPOILER: Using `penalty='l1'` results in somewhat less intuitive feature weights, and about the same performance.- SPOILER: Using `penalty='l1', C=0.1` results in much more intuitive feature weights, but much worse performance.--> Discovering new relation instancesAnother way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?Because the goal is to discover new relation instances which are _true_ but _absent from the KB_, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation. 
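###Markdown The library call below packages this whole procedure up. As a rough sketch of the underlying idea only — not the actual implementation inside `rel_ext` — ranking a batch of candidate `KBTriple`s for a single relation might look like the following. The `vectorizer` keyword on `dataset.featurize` and the `'vectorizer'`/`'models'` keys of `train_result` are assumptions introduced for illustration.
###Code
# Sketch: featurize candidate triples for entity pairs that are unrelated in
# the KB, score them with the trained classifier for one relation, and sort by
# the predicted probability that the relation holds.
def rank_candidates(candidates, rel, dataset, train_result, top_k=10):
    feats = dataset.featurize(
        {rel: candidates},
        featurizers=[simple_bag_of_words_featurizer],
        vectorizer=train_result['vectorizer'])        # assumed signature / keys
    model = train_result['models'][rel]               # assumed key
    probs = model.predict_proba(feats[rel])[:, 1]     # P(relation is true)
    return sorted(zip(probs, candidates), key=lambda pair: -pair[0])[:top_k]
###Output _____no_output_____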
###Code rel_ext.find_new_relation_instances( dataset, featurizers=[simple_bag_of_words_featurizer]) ###Output Highest probability examples for relation adjoins: 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean') 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico') 1.000 KBTriple(rel='adjoins', sbj='Pakistan', obj='Lahore') 1.000 KBTriple(rel='adjoins', sbj='Lahore', obj='Pakistan') 1.000 KBTriple(rel='adjoins', sbj='Sicily', obj='Italy') 1.000 KBTriple(rel='adjoins', sbj='Italy', obj='Sicily') 1.000 KBTriple(rel='adjoins', sbj='Italy', obj='Palermo') 1.000 KBTriple(rel='adjoins', sbj='Palermo', obj='Italy') Highest probability examples for relation author: 1.000 KBTriple(rel='author', sbj='The_Doors_of_Perception', obj='Aldous_Huxley') 1.000 KBTriple(rel='author', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception') 1.000 KBTriple(rel='author', sbj='Allen_Ginsberg', obj='Howl') 1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri') 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='author', sbj='Howl', obj='Allen_Ginsberg') 1.000 KBTriple(rel='author', sbj='Dante_Alighieri', obj='Divine_Comedy') 1.000 KBTriple(rel='author', sbj='Comic_book', obj='Marvel_Comics') 1.000 KBTriple(rel='author', sbj='Marvel_Comics', obj='Comic_book') Highest probability examples for relation capital: 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh') 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh') 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka') 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu') 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan') 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi') 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India') 1.000 KBTriple(rel='capital', sbj='Sydney', obj='New_South_Wales') 1.000 KBTriple(rel='capital', sbj='New_South_Wales', obj='Sydney') Highest probability examples for relation contains: 1.000 KBTriple(rel='contains', sbj='Melbourne', obj='Australia') 1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand') 1.000 KBTriple(rel='contains', sbj='Edmonton', obj='Canada') 1.000 KBTriple(rel='contains', sbj='Pakistan', obj='Lahore') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Sydney') 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples') 1.000 KBTriple(rel='contains', sbj='Canary_Islands', obj='Tenerife') 1.000 KBTriple(rel='contains', sbj='Uttar_Pradesh', obj='Lucknow') 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne') 1.000 KBTriple(rel='contains', sbj='Lucknow', obj='Uttar_Pradesh') Highest probability examples for relation film_performance: 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein') 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan') 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol') 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens') 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha') 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar') 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely') 1.000 
KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline') 1.000 KBTriple(rel='film_performance', sbj='Kaho_Naa..._Pyaar_Hai', obj='Hrithik_Roshan') 1.000 KBTriple(rel='film_performance', sbj='Hrithik_Roshan', obj='Kaho_Naa..._Pyaar_Hai') Highest probability examples for relation founders: 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad') 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer') 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX') 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk') 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan') 1.000 KBTriple(rel='founders', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='founders', sbj='Roman_Empire', obj='Titus') Highest probability examples for relation genre: 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook') 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight') 0.996 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield') 0.996 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi') 0.974 KBTriple(rel='genre', sbj='Kevin_Kline', obj='De-Lovely') 0.974 KBTriple(rel='genre', sbj='De-Lovely', obj='Kevin_Kline') 0.963 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon') 0.963 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd') 0.962 KBTriple(rel='genre', sbj='JYP_Entertainment', obj='South_Korea') 0.962 KBTriple(rel='genre', sbj='South_Korea', obj='JYP_Entertainment') Highest probability examples for relation has_sibling: 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera') 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera') 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle') 1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright') 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum') 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_sibling', sbj='Dionne_Warwick', obj='Aretha_Franklin') 1.000 KBTriple(rel='has_sibling', sbj='Aretha_Franklin', obj='Dionne_Warwick') Highest probability examples for relation has_spouse: 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten') 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun') 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman') 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson') 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet') 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant') 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks') 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists') 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England') 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England') Highest probability examples for relation is_a: 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera') 1.000 KBTriple(rel='is_a', 
sbj='Canada', obj='Vancouver') 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada') 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae') 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae') 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile') 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea') 1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird') 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae') Highest probability examples for relation nationality: 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire') 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Suryavarman_II') 1.000 KBTriple(rel='nationality', sbj='Suryavarman_II', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni') 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia') 1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district') 1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu') 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire') 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan') Highest probability examples for relation parents:
frenquency_distribution.ipynb
###Markdown **Import packages** ###Code import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import math data = np.array([160,165,167,164,160,160,166,160,161,150,152,173,160,155, 164,168,162,161,168,163,156,155,169,151,170,164,155,152, 160,163,160,155,157,156,158,158,161,154,161,156,172,153]) ###Output _____no_output_____ ###Markdown Sorted raw data (primitive table) ###Code data = np.sort(data) data maxStature = data.max() minStature = data.min() np.unique(data, return_counts=True) plt.bar(data, data) ###Output _____no_output_____ ###Markdown Sturges' formula: i = 1 + 3.3 * np.log10(n) ###Code n = len(data) i = 1 + 3.3 * np.log10(n) i = round(i) ###Output _____no_output_____ ###Markdown **Interval Amplitude:** 1. h = AA/i 2. AA = maxStature - minStature ###Code AA = maxStature - minStature AA h = AA/i h = math.ceil(h) h ###Output _____no_output_____ ###Markdown **Building the frequency distribution** (an explicit frequency table is sketched at the end of this notebook) ###Code interval = np.arange(minStature, maxStature + 2, step=h) interval # frequency, classes = np.histogram(data, bins=interval) plt.hist(data, bins='rice'); plt.hist(data, bins='sturges'); ###Output _____no_output_____ ###Markdown Frequency distribution in pandas with seaborn ###Code dataset = pd.DataFrame({'data': data}) dataset.plot.hist(); sns.displot(dataset); dataset = pd.read_csv('census.csv') dataset.head() dataset.age.max(), dataset.age.min() dataset.age.plot.hist(); dataset['age'] = pd.cut(dataset['age'], bins=[0, 20, 40, 60, 80], labels=['level1', 'level2', 'level3', 'level4']) dataset.loc[dataset['age']=='level1'] ###Output _____no_output_____
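###Markdown Returning to the height data from the start of this notebook: the Sturges class limits computed above can be turned into an explicit frequency table. This is a small sketch that reuses the `data` and `interval` variables defined earlier; the column names are just illustrative.
###Code
# Frequency table for the height data, using the class limits in `interval`.
freq, edges = np.histogram(data, bins=interval)
freq_table = pd.DataFrame({
    'lower': edges[:-1],                     # lower class limit
    'upper': edges[1:],                      # upper class limit (open, except the last bin)
    'frequency': freq,
    'relative frequency': freq / freq.sum(),
})
freq_table
###Output _____no_output_____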
notebooks/08_bandits_mdp.ipynb
###Markdown Naive bandits $\epsilon$-greedy and UCB1 ###Code def test_eps_greedy(n_steps, bandits, eps=.01): rew_rng = np.random.default_rng(seed) exp_rng = np.random.default_rng(seed) n_bandits = bandits.shape[0] R = np.zeros(n_bandits) N = np.zeros(n_bandits) + 1e-5 optimal = bandits[np.argmax(bandits)] every_k = max(n_steps / 1000, 1) cum_regret, regret = 0, [] for i in range(n_steps): avg_ret = R/N if exp_rng.random() <= eps: # rand exploration a = exp_rng.choice(n_bandits) else: a = np.argmax(avg_ret) x = rew_rng.random() r = 1. if x <= bandits[a] else 0. r_opt = 1. if x <= optimal else 0. cum_regret += r_opt - r R[a] += r N[a] += 1 if i % every_k == 0: regret.append(cum_regret) return regret, R, N seed = 1337 np.random.seed(seed) n_bandits = 4 n_steps = 10000 bandits = np.random.random_sample(n_bandits) print(bandits) regret, R, N = test_eps_greedy(n_steps, bandits, .01) print(R/N, R, N) _ = plt.plot(regret, label='.01') regret, R, N = test_eps_greedy(n_steps, bandits, .05) print(R/N, R, N) _ = plt.plot(regret, label='.05') regret, R, N = test_eps_greedy(n_steps, bandits, .2) print(R/N, R, N) _ = plt.plot(regret, label='.2') plt.legend() def test_ucb1(n_steps, bandits, eps=.01): rew_rng = np.random.default_rng(seed) n_bandits = bandits.shape[0] R = np.zeros(n_bandits) N = np.zeros(n_bandits) + 1e-5 optimal = bandits[np.argmax(bandits)] every_k = max(n_steps / 1000, 1) cum_regret, regret = 0, [] for i in range(1, n_steps+1): avg_ret = R/N U = (2 * np.log(i) / N)**.5 a = np.argmax(avg_ret + U) x = rew_rng.random() r = 1. if x <= bandits[a] else 0. r_opt = 1. if x <= optimal else 0. cum_regret += r_opt - r R[a] += r N[a] += 1 if i % every_k == 0: regret.append(cum_regret) return regret, R, N seed = 1337 n_bandits = 4 n_steps = 10000 np.random.seed(seed) bandits = np.random.random_sample(n_bandits) print(bandits) regret, R, N = test_ucb1(n_steps, bandits, .01) print(R/N, R, N) _ = plt.plot(regret, label='.01') seed = 1337 np.random.seed(seed) n_bandits = 4 n_steps = 1000 bandits = np.random.random_sample(n_bandits) print(bandits) regret, R, N = test_eps_greedy(n_steps, bandits, .01) print(R/N, R, N) _ = plt.plot(regret, label='.01') regret, R, N = test_eps_greedy(n_steps, bandits, .05) print(R/N, R, N) _ = plt.plot(regret, label='.05') regret, R, N = test_eps_greedy(n_steps, bandits, .2) print(R/N, R, N) _ = plt.plot(regret, label='.2') regret, R, N = test_ucb1(n_steps, bandits) print(R/N, R, N) _ = plt.plot(regret, label='usb') plt.legend() seed = 1337 np.random.seed(seed) n_bandits = 5 n_steps = 3000 bandits = np.array([.1, .3, .4, .7, .8]) print(bandits) regret, R, N = test_eps_greedy(n_steps, bandits, .05) print(R/N, R, N) _ = plt.plot(regret, label='.05') regret, R, N = test_ucb1(n_steps, bandits) print(R/N, R, N) _ = plt.plot(regret, label='usb') plt.legend() ###Output _____no_output_____ ###Markdown Cell-based approximations ###Code def test_cells_ucb1_vb(n_steps, bandits, n_cells, k_cells): rew_rng = np.random.default_rng(seed) upd_rng = np.random.default_rng(seed) n_bandits = bandits.shape[0] R = np.zeros((n_bandits, n_cells)) N = np.zeros((n_bandits, n_cells)) + 1e-5 optimal = bandits[np.argmax(bandits)] every_k = max(n_steps / 1000, 1) cum_regret, regret = 0, [] for i in range(1, n_steps+1): avg_ret = R.sum(axis=1) / N.sum(axis=1) U = ((k_cells / n_cells) * 2 * np.log(i) / N.mean(axis=1))**.5 a = np.argmax(avg_ret + U) x = rew_rng.random() r = 1. if x <= bandits[a] else 0. r_opt = 1. if x <= optimal else 0. 
cum_regret += r_opt - r upd_indices = upd_rng.choice(n_cells, size=k_cells, replace=False) R[a][upd_indices] += r N[a][upd_indices] += 1 if i % every_k == 0: regret.append(cum_regret) return regret, R.sum(axis=1) / N.sum(axis=1) seed = 1337 n_bandits = 4 n_steps = 3000 np.random.seed(seed) bandits = np.array([.1, .3, .4, .7, .8]) print(bandits) regret, probs = test_cells_ucb1_vb(n_steps, bandits, 8, 2) print(probs) _ = plt.plot(regret, label='2') regret, probs = test_cells_ucb1_vb(n_steps, bandits, 8, 4) print(probs) _ = plt.plot(regret, label='4') regret, probs = test_cells_ucb1_vb(n_steps, bandits, 8, 6) print(probs) _ = plt.plot(regret, label='6') regret, R, N = test_ucb1(n_steps, bandits) print(R/N, R, N) _ = plt.plot(regret, label='usb') plt.legend() def get_avg_ret2(R, N): return (R / N).avg(axis=1) def test_cells_ucb1_vb_crude(n_steps, bandits, n_cells, k_cells): rew_rng = np.random.default_rng(seed) upd_rng = np.random.default_rng(seed) n_bandits = bandits.shape[0] R = np.zeros((n_bandits, n_cells)) N = np.zeros((n_bandits, n_cells)) + 1e-5 optimal = bandits[np.argmax(bandits)] every_k = max(n_steps / 1000, 1) cum_regret, regret = 0, [] for i in range(1, n_steps+1): avg_ret = R.sum(axis=1) / N.sum(axis=1) U = (2 * np.log(i * k_cells) / N.sum(axis=1))**.5 a = np.argmax(avg_ret + U) x = rew_rng.random() r = 1. if x <= bandits[a] else 0. r_opt = 1. if x <= optimal else 0. cum_regret += r_opt - r upd_indices = upd_rng.choice(n_cells, size=k_cells, replace=False) R[a][upd_indices] += r N[a][upd_indices] += 1 if i % every_k == 0: regret.append(cum_regret) return regret, R.sum(axis=1) / N.sum(axis=1) seed = 1337 n_steps = 3000 np.random.seed(seed) bandits = np.array([.1, .3, .4, .7, .8]) print(bandits) regret, probs = test_cells_ucb1_vb_crude(n_steps, bandits, 8, 2) print(probs) _ = plt.plot(regret, label='2') regret, probs = test_cells_ucb1_vb_crude(n_steps, bandits, 8, 4) print(probs) _ = plt.plot(regret, label='4') regret, probs = test_cells_ucb1_vb_crude(n_steps, bandits, 8, 6) print(probs) _ = plt.plot(regret, label='6') regret, R, N = test_ucb1(n_steps, bandits) print(R/N, R, N) _ = plt.plot(regret, label='usb') plt.legend() def test_cells_ucb1_cb(n_steps, bandits, n_cells, k_cells): rew_rng = np.random.default_rng(seed) upd_rng = np.random.default_rng(seed) n_bandits = bandits.shape[0] R = np.zeros((n_bandits, n_cells)) N = np.zeros((n_bandits, n_cells)) + 1e-5 optimal = bandits[np.argmax(bandits)] every_k = max(n_steps / 1000, 1) cum_regret, regret = 0, [] for i in range(1, n_steps+1): avg_ret = R / N U = ((k_cells / n_cells) * 2 * np.log(i) / N)**.5 a = np.argmax((avg_ret + U).mean(axis=1)) x = rew_rng.random() r = 1. if x <= bandits[a] else 0. r_opt = 1. if x <= optimal else 0. 
cum_regret += r_opt - r upd_indices = upd_rng.choice(n_cells, size=k_cells, replace=False) R[a][upd_indices] += r N[a][upd_indices] += 1 if i % every_k == 0: regret.append(cum_regret) return regret, (R / N).mean(axis=1) seed = 1337 n_steps = 3000 np.random.seed(seed) bandits = np.array([.1, .3, .4, .7, .8]) print(bandits) regret, probs = test_cells_ucb1_cb(n_steps, bandits, 8, 2) print(probs) _ = plt.plot(regret, label='2') regret, probs = test_cells_ucb1_cb(n_steps, bandits, 8, 4) print(probs) _ = plt.plot(regret, label='4') regret, probs = test_cells_ucb1_cb(n_steps, bandits, 8, 6) print(probs) _ = plt.plot(regret, label='6') regret, R, N = test_ucb1(n_steps, bandits) print(R/N, R, N) _ = plt.plot(regret, label='usb') plt.legend() seed = 1337 np.random.seed(seed) n_bandits = 5 n_steps = 30000 bandits = np.array([.1, .3, .4, .7, .8]) print(bandits) regret, R, N = test_eps_greedy(n_steps, bandits, .05) print(R/N, R, N) _ = plt.plot(regret, label='.05') for k_cells in [2, 5]: # regret, probs = test_cells_ucb1_vb(n_steps, bandits, 8, k_cells) # print(probs) # _ = plt.plot(regret, label=f'vb-{k_cells}') regret, probs = test_cells_ucb1_vb_crude(n_steps, bandits, 8, k_cells) print(probs) _ = plt.plot(regret, label=f'vbc-{k_cells}') regret, probs = test_cells_ucb1_cb(n_steps, bandits, 8, k_cells) print(probs) _ = plt.plot(regret, label=f'cb-{k_cells}') regret, R, N = test_ucb1(n_steps, bandits) print(R/N, R, N) _ = plt.plot(regret, label='usb') plt.legend() ###Output _____no_output_____ ###Markdown Non-orthogonal action encoding ###Code a = np.arange(40).reshape((5, 8)).ravel() action_cell_indices = [] for i in range(5): base_indices = list(range(i*8, (i+1)*8)) shared_indices = [ x if x < i*8 else x + 8 for x in np.random.choice((5-1)*8, 4, replace=False) ] action_cell_indices.append(sorted(base_indices + shared_indices)) x = np.array(action_cell_indices) x a[x] def init_cells(n_actions, n_cells, n_shared_cells): rng = np.random.default_rng(seed) R = np.zeros((n_actions, n_cells)).ravel() N = np.zeros((n_actions, n_cells)).ravel() + 1e-5 action_cell_indices = [] for i in range(n_actions): base_indices = list(range(i*n_cells, (i+1)*n_cells)) shared_indices = [ x if x < i*n_cells else x + n_cells for x in rng.choice((n_actions - 1) * 8, n_shared_cells, replace=False) ] action_cell_indices.append(sorted(base_indices + shared_indices)) return R, N, np.array(action_cell_indices) def test_cells_ucb1_vb(n_steps, bandits, n_cells, k_cells, n_shared_cells): rew_rng = np.random.default_rng(seed) upd_rng = np.random.default_rng(seed) n_bandits = bandits.shape[0] R, N, action_cell_indices = init_cells(n_bandits, n_cells, n_shared_cells) optimal, every_k, cum_regret, regret = bandits[np.argmax(bandits)], max(n_steps / 1000, 1), 0, [] for i in range(1, n_steps+1): Q = R[action_cell_indices].sum(axis=1) / N[action_cell_indices].sum(axis=1) U = ((k_cells / n_cells) * 2 * np.log(i) / N[action_cell_indices].mean(axis=1))**.5 a = np.argmax(Q + U) x = rew_rng.random() r = 1. if x <= bandits[a] else 0. upd_indices = upd_rng.choice(action_cell_indices[a], size=k_cells, replace=False) R[upd_indices] += r N[upd_indices] += 1 r_opt = 1. if x <= optimal else 0. 
cum_regret += r_opt - r if i % every_k == 0: regret.append(cum_regret) return regret, R[action_cell_indices].sum(axis=1) / N[action_cell_indices].sum(axis=1) seed = 1337 n_bandits = 4 n_steps = 100000 np.random.seed(seed) bandits = np.array([.1, .3, .4, .7, .8]) print(bandits) k = 4 for n_shared_cells in [2, 4]: regret, probs = test_cells_ucb1_vb(n_steps, bandits, 12, k, n_shared_cells) print(probs) _ = plt.plot(regret, label=f'{k}-{n_shared_cells}') # regret, probs = test_cells_ucb1_vb(n_steps, bandits, 8, 4, 2) # print(probs) # _ = plt.plot(regret, label='4') # regret, probs = test_cells_ucb1_vb(n_steps, bandits, 8, 6, 2) # print(probs) # _ = plt.plot(regret, label='6') regret, R, N = test_ucb1(n_steps, bandits) print(R/N, R, N) _ = plt.plot(regret, label='usb') plt.legend() def init_cells(n_actions, n_cells, n_shared_cells): rng = np.random.default_rng(seed) R = np.zeros((n_actions, n_cells)).ravel() N = np.zeros((n_actions, n_cells)).ravel() + 1e-5 action_cell_indices = [] for i in range(n_actions): base_indices = list(range(i*n_cells, (i+1)*n_cells)) shared_indices = [ x if x < i*n_cells else x + n_cells for x in rng.choice((n_actions - 1) * 8, n_shared_cells, replace=False) ] action_cell_indices.append(sorted(base_indices + shared_indices)) return R, N, np.array(action_cell_indices) def test_cells_ucb1_cb(n_steps, bandits, n_cells, k_cells, n_shared_cells): rew_rng = np.random.default_rng(seed) upd_rng = np.random.default_rng(seed) n_bandits = bandits.shape[0] R, N, action_cell_indices = init_cells(n_bandits, n_cells, n_shared_cells) optimal, every_k, cum_regret, regret = bandits[np.argmax(bandits)], max(n_steps / 1000, 1), 0, [] for i in range(1, n_steps+1): Q = R / N U = ((k_cells / n_cells) * 2 * np.log(i) / N)**.5 a = np.argmax( (Q + U)[action_cell_indices].mean(axis=1) ) x = rew_rng.random() r = 1. if x <= bandits[a] else 0. upd_indices = upd_rng.choice(action_cell_indices[a], size=k_cells, replace=False) R[upd_indices] += r N[upd_indices] += 1 r_opt = 1. if x <= optimal else 0. cum_regret += r_opt - r if i % every_k == 0: regret.append(cum_regret) return regret, Q[action_cell_indices].mean(axis=1) seed = 1337 n_bandits = 4 n_steps = 100000 np.random.seed(seed) bandits = np.array([.1, .3, .4, .7, .8]) print(bandits) k = 4 for n_shared_cells in [2, 8]: regret, probs = test_cells_ucb1_cb(n_steps, bandits, 12, k, n_shared_cells) print(probs) _ = plt.plot(regret, label=f'{k}-{n_shared_cells}') n_shared_cells = 4 for k in [2, 4, 8, 16]: regret, probs = test_cells_ucb1_cb(n_steps, bandits, 12, k, n_shared_cells) print(probs) _ = plt.plot(regret, label=f'{k}-{n_shared_cells}') regret, R, N = test_ucb1(n_steps, bandits) print(R/N, R, N) _ = plt.plot(regret, label='usb') plt.legend() ###Output _____no_output_____
colab_notebooks/03_Validate.ipynb
###Markdown Validate Colab to infere satellite images from mask pictures using a trained pix2pix model based on the code found in https://github.com/mrzhu-cool/pix2pix-pytorch Imports and parameters Imports ###Code # Accessing the files and preparing the dataset from google.colab import drive from os import listdir from os.path import join import os # Treating the images from PIL import Image import numpy as np import random import torch import torch.utils.data as data from torch.utils.data import DataLoader import torchvision.transforms as transforms from matplotlib.pyplot import imshow import matplotlib.pyplot as plt # Dealing with GPUs import torch.backends.cudnn as cudnn # Defining the networks import torch.nn as nn from torch.nn import init import functools from torch.optim import lr_scheduler import torch.optim as optim # Training from math import log10 import time import math # Tensorboard from torch.utils.tensorboard import SummaryWriter import datetime ###Output _____no_output_____ ###Markdown Parameters ###Code import argparse # Training settings parser = argparse.ArgumentParser(description='pix2pix-pytorch-implementation') # In the original code, dataset is required. We don't need it for the Inria Aerial Image Labelling Dataset parser.add_argument('--dataset', required=False, help='facades') parser.add_argument('--batch_size', type=int, default=1, help='training batch size') parser.add_argument('--test_batch_size', type=int, default=1, help='testing batch size') parser.add_argument('--direction', type=str, default='a2b', help='a2b or b2a') parser.add_argument('--input_nc', type=int, default=3, help='input image channels') parser.add_argument('--output_nc', type=int, default=3, help='output image channels') parser.add_argument('--ngf', type=int, default=64, help='generator filters in first conv layer') parser.add_argument('--ndf', type=int, default=64, help='discriminator filters in first conv layer') # Training epochs are defined by range(opt.epoch_count, opt.niter + opt.niter_decay + 1) # So, originally, the training script epochs from 1 to 201, which takes too long at the beginning # niter and niter_decay are changed to shorten the amount of time during development parser.add_argument('--epoch_count', type=int, default=1, help='the starting epoch count') parser.add_argument('--niter', type=int, default=100, help='# of iter at starting learning rate') # 100 parser.add_argument('--niter_decay', type=int, default=100, help='# of iter to linearly decay learning rate to zero') # 100 parser.add_argument('--lr', type=float, default=0.0002, help='initial learning rate for adam') # 0.0002 parser.add_argument('--lr_policy', type=str, default='lambda', help='learning rate policy: lambda|step|plateau|cosine') parser.add_argument('--lr_decay_iters', type=int, default=50, help='multiply by a gamma every lr_decay_iters iterations') parser.add_argument('--beta1', type=float, default=0.5, help='beta1 for adam. default=0.5') parser.add_argument('--cuda', action='store_true', help='use cuda?') parser.add_argument('--threads', type=int, default=1, help='number of threads for data loader to use') parser.add_argument('--seed', type=int, default=123, help='random seed to use. 
Default=123') parser.add_argument('--lamb', type=int, default=10, help='weight on L1 term in objective') # 10 # Activate or deactivate the use of Tensorboard parser.add_argument('--tb_active', type=bool, default=True, help='should tensorboard be used') # Deactivate for deep trainings # Which original image should be stored in Tensorboard. # Inria satellite images are 5000x5000 and consume much CPU and memory, so only # one image is saved to avoid using too many resources parser.add_argument('--tb_image', type=str, default='vienna1.tif', help='image to store in tensorboard') # Number of images saved to tensorboard. Only tb_image will be saved, so the progress # of generated images can be seen throw epochs. 5 images in 100 epochs means one # tb_image will be saved every 20 epochs. parser.add_argument('--tb_number_img', type=int, default=5, help='number of images saved to tensorboard') # Level of debug (cell output) parser.add_argument('--debug', type=int, default=1, help='level of debug from 0 (no debug) to 2 (verbose)') # Number of iteration messages per epoch. They have the form # ===> Epoch[{}]({}/{}): Loss_D: {:.4f} Loss_G: {:.4f} parser.add_argument('--iter_messages', type=int, default=4, help='number of output messages per epoch') # Number of epochs to save a checkpoint parser.add_argument('--checkpoint_epochs', type=int, default=50, help='number of epochs to save a checkpoint') # Stop training after checkpoint is saved. Useful in long trainings parser.add_argument('--stop_after_checkpoint', type=bool, default=True, help='stop training after a checkpoint has been saved') # As stated in https://stackoverflow.com/questions/48796169/how-to-fix-ipykernel-launcher-py-error-unrecognized-arguments-in-jupyter # at least an empty list must be passed to simulate a script execution with no parameters. # If no parameter is provided, parse_args tries to read _sys.argv[1:], which is not defined # in a colab execution training_args = ['--cuda', '--epoch_count=101', '--niter=250', '--niter_decay=250', '--lr=0.002', '--lamb=1', '--direction=a2b', '--batch_size=10', '--checkpoint_epochs=4', '--threads=0', '--debug=1', '--tb_number_img=500'] opt = parser.parse_args(training_args) train_dir = 'dataset/train' train_gt_dir = train_dir + '/gt' train_images_dir = train_dir + '/images' train_tensorboard_dir = train_dir + '/log' test_dir = 'dataset/test' test_gt_dir = test_dir + '/gt' test_images_dir = test_dir + '/images' test_tensorboard_dir = test_dir + '/log' if opt.cuda and not torch.cuda.is_available(): raise Exception("No GPU found, please run without --cuda") # cudnn.benchmark = True # torch.manual_seed(opt.seed) # if opt.cuda: # torch.cuda.manual_seed(opt.seed) device = torch.device("cuda:0" if opt.cuda else "cpu") ###Output _____no_output_____ ###Markdown Debug function ###Code def print_debug(level, text): """ Prints a debug message only if the level of the message is lower or equal to the debug level set in global variable debug """ # Accessing the global debug variable # global debug # The text will only be if level <= opt.debug: print(" [DEBUG] " + text) ###Output _____no_output_____ ###Markdown Accessing the dataset Connecting to Google Drive ###Code drive.mount('/content/drive') ###Output Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True). 
###Markdown Defining the networks get_norm_layer ###Code def get_norm_layer(norm_type='instance'): if norm_type == 'batch': norm_layer = functools.partial(nn.BatchNorm2d, affine=True) elif norm_type == 'instance': norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False) elif norm_type == 'switchable': norm_layer = SwitchNorm2d elif norm_type == 'none': norm_layer = None else: raise NotImplementedError('normalization layer [%s] is not found' % norm_type) return norm_layer ###Output _____no_output_____ ###Markdown get_scheduler ###Code def get_scheduler(optimizer, opt): if opt.lr_policy == 'lambda': def lambda_rule(epoch): lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.niter) / float(opt.niter_decay + 1) return lr_l scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule) elif opt.lr_policy == 'step': scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_iters, gamma=0.1) elif opt.lr_policy == 'plateau': scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5) elif opt.lr_policy == 'cosine': scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.niter, eta_min=0) else: return NotImplementedError('learning rate policy [%s] is not implemented', opt.lr_policy) return scheduler ###Output _____no_output_____ ###Markdown update_learning_rate ###Code # update learning rate (called once every epoch) def update_learning_rate(scheduler, optimizer): scheduler.step() lr = optimizer.param_groups[0]['lr'] print('learning rate = %.7f' % lr) ###Output _____no_output_____ ###Markdown init_weights ###Code def init_weights(net, init_type='normal', gain=0.02): def init_func(m): classname = m.__class__.__name__ if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1): if init_type == 'normal': init.normal_(m.weight.data, 0.0, gain) elif init_type == 'xavier': init.xavier_normal_(m.weight.data, gain=gain) elif init_type == 'kaiming': init.kaiming_normal_(m.weight.data, a=0, mode='fan_in') elif init_type == 'orthogonal': init.orthogonal_(m.weight.data, gain=gain) else: raise NotImplementedError('initialization method [%s] is not implemented' % init_type) if hasattr(m, 'bias') and m.bias is not None: init.constant_(m.bias.data, 0.0) elif classname.find('BatchNorm2d') != -1: init.normal_(m.weight.data, 1.0, gain) init.constant_(m.bias.data, 0.0) print('initialize network with %s' % init_type) net.apply(init_func) ###Output _____no_output_____ ###Markdown init_net ###Code def init_net(net, init_type='normal', init_gain=0.02, gpu_id='cuda:0'): net.to(gpu_id) init_weights(net, init_type, gain=init_gain) return net ###Output _____no_output_____ ###Markdown define_G ###Code def define_G(input_nc, output_nc, ngf, norm='batch', use_dropout=False, init_type='normal', init_gain=0.02, gpu_id='cuda:0'): net = None norm_layer = get_norm_layer(norm_type=norm) net = ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=9) return init_net(net, init_type, init_gain, gpu_id) ###Output _____no_output_____ ###Markdown Class ResnetGenerator ###Code # Defines the generator that consists of Resnet blocks between a few # downsampling/upsampling operations. 
class ResnetGenerator(nn.Module): def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=9, padding_type='reflect'): assert(n_blocks >= 0) super(ResnetGenerator, self).__init__() self.input_nc = input_nc self.output_nc = output_nc self.ngf = ngf if type(norm_layer) == functools.partial: use_bias = norm_layer.func == nn.InstanceNorm2d else: use_bias = norm_layer == nn.InstanceNorm2d self.inc = Inconv(input_nc, ngf, norm_layer, use_bias) self.down1 = Down(ngf, ngf * 2, norm_layer, use_bias) self.down2 = Down(ngf * 2, ngf * 4, norm_layer, use_bias) model = [] for i in range(n_blocks): model += [ResBlock(ngf * 4, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)] self.resblocks = nn.Sequential(*model) self.up1 = Up(ngf * 4, ngf * 2, norm_layer, use_bias) self.up2 = Up(ngf * 2, ngf, norm_layer, use_bias) self.outc = Outconv(ngf, output_nc) def forward(self, input): out = {} # DTT No hay skip connections? out['in'] = self.inc(input) out['d1'] = self.down1(out['in']) out['d2'] = self.down2(out['d1']) out['bottle'] = self.resblocks(out['d2']) out['u1'] = self.up1(out['bottle']) out['u2'] = self.up2(out['u1']) return self.outc(out['u2']) ###Output _____no_output_____ ###Markdown Class Inconv ###Code class Inconv(nn.Module): def __init__(self, in_ch, out_ch, norm_layer, use_bias): super(Inconv, self).__init__() self.inconv = nn.Sequential( nn.ReflectionPad2d(3), nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=0, bias=use_bias), norm_layer(out_ch), nn.ReLU(True) ) def forward(self, x): x = self.inconv(x) return x ###Output _____no_output_____ ###Markdown Class Down ###Code class Down(nn.Module): def __init__(self, in_ch, out_ch, norm_layer, use_bias): super(Down, self).__init__() self.down = nn.Sequential( nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1, bias=use_bias), norm_layer(out_ch), nn.ReLU(True) ) def forward(self, x): x = self.down(x) return x ###Output _____no_output_____ ###Markdown Class ResBlock ###Code # Define a Resnet block class ResBlock(nn.Module): def __init__(self, dim, padding_type, norm_layer, use_dropout, use_bias): super(ResBlock, self).__init__() self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, use_dropout, use_bias) def build_conv_block(self, dim, padding_type, norm_layer, use_dropout, use_bias): conv_block = [] p = 0 if padding_type == 'reflect': conv_block += [nn.ReflectionPad2d(1)] elif padding_type == 'replicate': conv_block += [nn.ReplicationPad2d(1)] elif padding_type == 'zero': p = 1 else: raise NotImplementedError('padding [%s] is not implemented' % padding_type) conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias), norm_layer(dim), nn.ReLU(True)] if use_dropout: conv_block += [nn.Dropout(0.5)] p = 0 if padding_type == 'reflect': conv_block += [nn.ReflectionPad2d(1)] elif padding_type == 'replicate': conv_block += [nn.ReplicationPad2d(1)] elif padding_type == 'zero': p = 1 else: raise NotImplementedError('padding [%s] is not implemented' % padding_type) conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias), norm_layer(dim)] return nn.Sequential(*conv_block) def forward(self, x): # DTT 'x +' === skip connection!! 
out = x + self.conv_block(x) return nn.ReLU(True)(out) ###Output _____no_output_____ ###Markdown Class Up ###Code class Up(nn.Module): def __init__(self, in_ch, out_ch, norm_layer, use_bias): super(Up, self).__init__() self.up = nn.Sequential( # nn.Upsample(scale_factor=2, mode='nearest'), # nn.Conv2d(in_ch, out_ch, # kernel_size=3, stride=1, # padding=1, bias=use_bias), nn.ConvTranspose2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1, output_padding=1, bias=use_bias), norm_layer(out_ch), nn.ReLU(True) ) def forward(self, x): x = self.up(x) return x ###Output _____no_output_____ ###Markdown Class Outconv ###Code class Outconv(nn.Module): def __init__(self, in_ch, out_ch): super(Outconv, self).__init__() self.outconv = nn.Sequential( nn.ReflectionPad2d(3), nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=0), nn.Tanh() ) def forward(self, x): x = self.outconv(x) return x ###Output _____no_output_____ ###Markdown define_D ###Code def define_D(input_nc, ndf, netD, n_layers_D=3, norm='batch', use_sigmoid=False, init_type='normal', init_gain=0.02, gpu_id='cuda:0'): net = None norm_layer = get_norm_layer(norm_type=norm) if netD == 'basic': net = NLayerDiscriminator(input_nc, ndf, n_layers=3, norm_layer=norm_layer, use_sigmoid=use_sigmoid) elif netD == 'n_layers': net = NLayerDiscriminator(input_nc, ndf, n_layers_D, norm_layer=norm_layer, use_sigmoid=use_sigmoid) elif netD == 'pixel': net = PixelDiscriminator(input_nc, ndf, norm_layer=norm_layer, use_sigmoid=use_sigmoid) else: raise NotImplementedError('Discriminator model name [%s] is not recognized' % net) return init_net(net, init_type, init_gain, gpu_id) ###Output _____no_output_____ ###Markdown Class NLayerDiscriminator ###Code # Defines the PatchGAN discriminator with the specified arguments. class NLayerDiscriminator(nn.Module): def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, use_sigmoid=False): super(NLayerDiscriminator, self).__init__() if type(norm_layer) == functools.partial: use_bias = norm_layer.func == nn.InstanceNorm2d else: use_bias = norm_layer == nn.InstanceNorm2d kw = 4 padw = 1 sequence = [ nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True) ] nf_mult = 1 nf_mult_prev = 1 for n in range(1, n_layers): nf_mult_prev = nf_mult nf_mult = min(2**n, 8) sequence += [ nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias), norm_layer(ndf * nf_mult), nn.LeakyReLU(0.2, True) ] nf_mult_prev = nf_mult nf_mult = min(2**n_layers, 8) sequence += [ nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias), norm_layer(ndf * nf_mult), nn.LeakyReLU(0.2, True) ] sequence += [nn.Conv2d(ndf * nf_mult, 1, kernel_size=kw, stride=1, padding=padw)] if use_sigmoid: sequence += [nn.Sigmoid()] self.model = nn.Sequential(*sequence) def forward(self, input): return self.model(input) ###Output _____no_output_____ ###Markdown Class PixelDiscriminator ###Code class PixelDiscriminator(nn.Module): def __init__(self, input_nc, ndf=64, norm_layer=nn.BatchNorm2d, use_sigmoid=False): super(PixelDiscriminator, self).__init__() if type(norm_layer) == functools.partial: use_bias = norm_layer.func == nn.InstanceNorm2d else: use_bias = norm_layer == nn.InstanceNorm2d self.net = [ nn.Conv2d(input_nc, ndf, kernel_size=1, stride=1, padding=0), nn.LeakyReLU(0.2, True), nn.Conv2d(ndf, ndf * 2, kernel_size=1, stride=1, padding=0, bias=use_bias), norm_layer(ndf * 2), nn.LeakyReLU(0.2, True), nn.Conv2d(ndf * 2, 1, 
kernel_size=1, stride=1, padding=0, bias=use_bias)] if use_sigmoid: self.net.append(nn.Sigmoid()) self.net = nn.Sequential(*self.net) def forward(self, input): return self.net(input) ###Output _____no_output_____ ###Markdown Class GANLoss ###Code class GANLoss(nn.Module): def __init__(self, use_lsgan=True, target_real_label=1.0, target_fake_label=0.0): super(GANLoss, self).__init__() self.register_buffer('real_label', torch.tensor(target_real_label)) self.register_buffer('fake_label', torch.tensor(target_fake_label)) if use_lsgan: self.loss = nn.MSELoss() else: self.loss = nn.BCELoss() def get_target_tensor(self, input, target_is_real): if target_is_real: target_tensor = self.real_label else: target_tensor = self.fake_label return target_tensor.expand_as(input) def __call__(self, input, target_is_real): target_tensor = self.get_target_tensor(input, target_is_real) return self.loss(input, target_tensor) ###Output _____no_output_____ ###Markdown Creating the networks ###Code # NOP # Creating the networks from scratch net_g = define_G(opt.input_nc, opt.output_nc, opt.ngf, 'batch', False, 'normal', 0.02, gpu_id=device) net_d = define_D(opt.input_nc + opt.output_nc, opt.ndf, 'basic', gpu_id=device) criterionGAN = GANLoss().to(device) criterionL1 = nn.L1Loss().to(device) criterionMSE = nn.MSELoss().to(device) # setup optimizer optimizer_g = optim.Adam(net_g.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999)) optimizer_d = optim.Adam(net_d.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999)) net_g_scheduler = get_scheduler(optimizer_g, opt) net_d_scheduler = get_scheduler(optimizer_d, opt) ###Output initialize network with normal initialize network with normal ###Markdown Auxiliary functions denormalize_image & show_image ###Code def denormalize_image(image_tensor): """ Denormalizes an image coming from the network, usually, a generated image Parameters ---------- images_tensor: tensor representing a PIL image """ print_debug(2, "denormalize_image image tensor shape: {}".format(image_tensor.shape)) # cpu() to avoid error "can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first." image_numpy = image_tensor.cpu().data.float().numpy() # La transformación inversa sería simplemente min( (x*0.5)+0.5), 1) # (haciendo un clipping de los valores para que no nos salgan colores raros). # Tensorboard creo que ya gestiona lo del clipping; # pero viene de nuestra cuenta hacer la "desnormalización". 
print_debug(2, "denormalize_image image_numpy shape: {}".format(image_numpy.shape)) image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0 print_debug(2, "denormalize_image image_numpy shape: {} after transposing".format(image_numpy.shape)) image_numpy = image_numpy.clip(0, 255) print_debug(2, "denormalize_image image_numpy shape: {} after clipping".format(image_numpy.shape)) image_numpy = image_numpy.astype(np.uint8) print_debug(2, "denormalize_image image_numpy shape: {} after converting to uint8".format(image_numpy.shape)) return image_numpy def show_image(image_tensor): """ Shows an image coming from the network Parameters """ image_numpy = denormalize_image(image_tensor) pil_image = Image.fromarray(image_numpy) imshow(pil_image) ###Output _____no_output_____ ###Markdown Show list/tuple of images in a grid ###Code # NOP # Based on utils.py save_img and the last answer in # https://stackoverflow.com/questions/46615554/how-to-display-multiple-images-in-one-figure-correctly/46616645#46616645 # Plots several figures in a tile def show_images_grid(images_tuple, nrows=1, ncols=1): """ Shows several images coming from a DataLoader based on DatasetFromFolder in a tile Parameters ---------- images_tuple: tuple of tensors representing images ncols : number of columns of subplots wanted in the display nrows : number of rows of subplots wanted in the figure """ fig, axeslist = plt.subplots(ncols=ncols, nrows=nrows, figsize=(15,15)) for ind,image_tensor in zip(range(len(images_tuple)), images_tuple): # First, denormalize image to allow it to be printable image_numpy = denormalize_image(image_tensor) image_pil = Image.fromarray(image_numpy) # imshow(image_pil) axeslist.ravel()[ind].imshow(image_pil, cmap=plt.jet()) # axeslist.ravel()[ind].set_title(title) axeslist.ravel()[ind].set_axis_off() plt.tight_layout() # optional ###Output _____no_output_____ ###Markdown Loading previous trained checkpoint ###Code !ls drive/MyDrive/"Colab Notebooks"/AIDL/Project/AerialImageDataset-pix2pix/checkpoint/net*.pth # Loading already calculated weights # net_g = torch.load('drive/MyDrive/Colab Notebooks/AIDL/Project/train/trainedModels/netG_model_epoch_100.pth', map_location=torch.device(device)).to(device) # net_g = torch.load('drive/MyDrive/Colab Notebooks/AIDL/Project/AerialImageDataset-pix2pix/checkpoint/netG_model_epoch_600.pth', map_location=torch.device(device)).to(device) net_g = torch.load('drive/MyDrive/Colab Notebooks/AIDL/Project/AerialImageDataset-pix2pix/checkpoint/netG_model_epoch_900.pth', map_location=torch.device(device)).to(device) # net_d = torch.load('drive/MyDrive/Colab Notebooks/AIDL/Project/train/trainedModels/netD_model_epoch_100.pth', map_location=torch.device(device)).to(device) ###Output _____no_output_____ ###Markdown Validating with images ###Code common_path = 'drive/MyDrive/Colab Notebooks/AIDL/Project/train/' mask_image_path = common_path + 'gt/vienna29.tif' # 11 or 29 satellite_image_path = common_path + 'images/vienna29.tif' ###Output _____no_output_____ ###Markdown Show mask ###Code mask_image = Image.open(mask_image_path) imshow(mask_image) ###Output _____no_output_____ ###Markdown Show ground truth ###Code satellite_image = Image.open(satellite_image_path) imshow(satellite_image) ###Output _____no_output_____ ###Markdown Generate image and show it ###Code transform_list = [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))] transform = transforms.Compose(transform_list) # 1. The mask image has only one channel. 
The generator (and the transforms.Normalize) # expect 3 channel images # 2. Non 256x256 images break the generator with "CUDA out of memory. Tried to allocate # 2.98 GiB (GPU 0; 14.73 GiB total capacity; 12.54 GiB already allocated; # 1.25 GiB free; 12.54 GiB reserved in total by PyTorch)" # So, the mask is resized rgb_mask_image = mask_image.convert('RGB').resize((256, 256), Image.BICUBIC) img = transform(rgb_mask_image) input = img.unsqueeze(0).to(device) out = net_g(input) generated_image = out.detach().squeeze(0).cpu() show_image(generated_image) ###Output _____no_output_____
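###Markdown As a possible follow-up (a sketch reusing the helpers defined above, not a cell from the original notebook), the input mask, the ground-truth satellite image and the generated image can be compared side by side with `show_images_grid`: ###Code
# Bring the ground truth to the same normalized 3x256x256 tensor form as the inputs
satellite_tensor = transform(satellite_image.convert('RGB').resize((256, 256), Image.BICUBIC))
# Left to right: input mask, real satellite image, pix2pix output
show_images_grid((img, satellite_tensor, generated_image), nrows=1, ncols=3)
###Output _____no_output_____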
.ipynb_checkpoints/split labels-checkpoint.ipynb
###Markdown split each file into a group in a list ###Code
# numpy and pandas are assumed to be available; full_labels is the label DataFrame
# loaded in an earlier cell of this notebook
import numpy as np
import pandas as pd

# group the label rows by image so each image's boxes stay together
gb = full_labels.groupby('filename')
grouped_list = [gb.get_group(x) for x in gb.groups]
len(grouped_list)
# randomly pick 160 of the 200 grouped files for training, the rest for testing
train_index = np.random.choice(len(grouped_list), size=160, replace=False)
test_index = np.setdiff1d(list(range(200)), train_index)
len(train_index), len(test_index)
train = pd.concat([grouped_list[i] for i in train_index])
test = pd.concat([grouped_list[i] for i in test_index])
len(train), len(test)
train.to_csv('train_labels.csv', index=None)
test.to_csv('test_labels.csv', index=None)
###Output _____no_output_____
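###Markdown A quick consistency check on the split (an optional sketch, not part of the original notebook): the two index sets should be disjoint, cover all grouped files, and no image filename should end up in both CSVs. ###Code
# train/test indices must not overlap and must cover every grouped file
assert len(set(train_index) & set(test_index)) == 0
assert len(train_index) + len(test_index) == len(grouped_list)
# no image should contribute boxes to both label files
assert set(train['filename']).isdisjoint(set(test['filename']))
###Output _____no_output_____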
quantopian/tutorials/1_getting_started_lesson1/notebook.ipynb
###Markdown Welcome to Quantopian! The Getting Started Tutorial will guide you through researching and developing a quantitative trading strategy on Quantopian. It covers many of the basics of Quantopian's API, and is designed for those who are new to the platform. All you need to get started on this tutorial is to have some basic [Python](https://docs.python.org/2.7/) programming skills. What is a Trading Algorithm? A trading algorithm is a computer program that defines a set of rules for buying and selling assets. Most trading algorithms make decisions based on mathematical or statistical models that are derived from research conducted on historical data. Where do I start? The first step to writing a trading algorithm is to find an economic or statistical relationship on which we can base our strategy. To do this, we can use Quantopian's Research environment to access and analyze historical datasets available in the platform. Research is a [Jupyter Notebook](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html) environment that allows us to run Python code in units called 'cells'. For example, the following code plots the daily closing price for Apple Inc. (AAPL), along with its 20 and 50 day moving averages: ###Code # Research environment functions from quantopian.research import prices, symbols # Pandas library: https://pandas.pydata.org/ import pandas as pd # Query historical pricing data for AAPL aapl_close = prices( assets=symbols('AAPL'), start='2013-01-01', end='2016-01-01', ) # Compute 20 and 50 day moving averages on # AAPL's pricing data aapl_sma20 = aapl_close.rolling(20).mean() aapl_sma50 = aapl_close.rolling(50).mean() # Combine results into a pandas DataFrame and plot pd.DataFrame({ 'AAPL': aapl_close, 'SMA20': aapl_sma20, 'SMA50': aapl_sma50 }).plot( title='AAPL Close Price / SMA Crossover' ); ###Output _____no_output_____
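###Markdown For example, one way to start quantifying the crossover idea from the plot above (a plain-pandas sketch, not part of the official tutorial code) is to find the dates where the 20-day average crosses the 50-day average: ###Code
# True while the short (20-day) average sits above the long (50-day) average
above = aapl_sma20 > aapl_sma50
# +1 marks an upward crossover, -1 a downward one
crossovers = above.astype(int).diff()
crossovers[crossovers != 0].dropna().head()
###Output _____no_output_____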
notebooks/test FewsTimeSeries and FewsTimeSeriesCollection.ipynb
###Markdown Test FewsTimeSeries and FewsTimeSeriesCollection ###Code import hkvfewspy as fewspy import matplotlib.pyplot as plt import os import pandas as pd ###Output _____no_output_____ ###Markdown `FewsTimeSeries`Import first series from Fews Pi-XML file. Even though file contains two series `FewsTimeSeries` will only import the first series in the file. ###Code fews_ts = fewspy.schemas.timeseries.FewsTimeSeries.from_pi_xml("./test_2_series.xml") ###Output _____no_output_____ ###Markdown Plot the series by calling plot on the FewsTimeSeries object (plot is passed on to the `plot` function in pandas for Series and DataFrames). ###Code fews_ts.plot(y="value") plt.show() ###Output _____no_output_____ ###Markdown View metadata: ###Code fews_ts.header fews_ts.to_pi_xml("./temp_1series.xml") ###Output _____no_output_____ ###Markdown `FewsTimeSeriesCollection`To import all series from a Fews Pi-XML file, use `FewsTimeSeriesCollection`: ###Code fews_ts_collection = fewspy.schemas.timeseries.FewsTimeSeriesCollection.from_pi_xml("./test_2_series.xml") ###Output _____no_output_____ ###Markdown Timeseries are collected in a DataFrame with header data as columns and the series in the events column: ###Code fews_ts_collection.timeseries ###Output _____no_output_____ ###Markdown Loop through timeseries to plot ###Code fig, ax = plt.subplots(1, 1) for i in range(fews_ts_collection.timeseries.shape[0]): series = fews_ts_collection.timeseries.events.iloc[i] series.plot(y="value", ax=ax, label=series.header["stationName"]) plt.show() ###Output _____no_output_____ ###Markdown Note that the objects stored in the timeseries DataFrame in the events column are actually `FewsTimeSeries` objects. This means that our earlier FewsTimeSeries object should be the same as the first timeseries in our FewsTimeSeriesCollection. ###Code fews_ts_collection.timeseries.events[0] ###Output _____no_output_____ ###Markdown We will now test whether the FewsTimeSeries in the collection is the same as the single timeseries we imported earlier. But first, it is important to realize that when we wrote our earlier `FewsTimeSeries` object `fews_ts` to XML the header (endDate and startDate) were automatically updated from the data. We will do the same for the first timeseries in the `FewsTimeSeriesCollection` object. ###Code # get start and end date from timeseries and update header fews_ts_collection.timeseries.events[0]._update_header_dates() ###Output _____no_output_____ ###Markdown Now test for equality: ###Code fews_ts_collection.timeseries.events[0] == fews_ts ###Output _____no_output_____ ###Markdown It is possible to add data to the collection with the `add_series` method ###Code help(fews_ts_collection.add_series) ###Output Help on method add_series in module hkvfewspy.schemas.timeseries: add_series(dfseries, metadata) method of hkvfewspy.schemas.timeseries.FewsTimeSeriesCollection instance Add series to PiTimeSeries object. Parameters ---------- dfseries: pd.DataFrame Timeseries to add, must have DateTimeIndex and have columns with name "value" and "flag" metadata: dict dictionary containing header. Common entries values for include 'x', 'y', 'lat', lon', 'missVal', 'stationName', 'type', 'units', 'moduleInstanceId', 'qualifierId', 'parameterId', 'locationId' Notes ----- It is unclear whether the entries in header are required or optional. 
Some possible values for header entries are shown below in case they need to be supplied: - 'missVal': np.nan - 'stationName': np.nan - 'units': 'm' - 'type': 'instantaneous' ###Markdown Use data from first timeseries as "new" series: ###Code new_series = fews_ts.timeseries new_metadata = fews_ts.header fews_ts_collection.add_series(new_series, new_metadata) ###Output _____no_output_____ ###Markdown View our added timeseries in the DataFrame: ###Code fews_ts_collection.timeseries ###Output _____no_output_____ ###Markdown We can write the FewsTimeSeriesCollection to file with the same command as before: ###Code fews_ts_collection.to_pi_xml("./temp_3series.xml") ###Output _____no_output_____ ###Markdown Querying a FEWS database and returning FewsTimeSeries objects ###Code # set client pi = fewspy.pi pi.setClient(wsdl='http://localhost:8081/FewsPiService/fewspiservice?wsdl') query = pi.setQueryParameters(prefill_defaults=True) query.parameterIds(['P.meting.dagcal']) query.moduleInstanceIds(['ImportCAW']) query.locationIds(['66011cal']) query.startTime(pd.datetime(2010,1,1)) query.endTime(pd.datetime(2018,7,1)) query.clientTimeZone('Europe/Amsterdam') query.query["onlyManualEdits"] = False query.query fews_ts2_coll = pi.getFewsTimeSeries(queryParameters=query) fews_ts2_coll fews_ts2_coll.timeseries ###Output _____no_output_____ ###Markdown We queried specifically to get one timeseries, so extract FewsTimeSeries object from the collection: ###Code fews_ts2 = fews_ts2_coll.timeseries.events[0] fews_ts2.header fews_ts2.timeseries.head() fews_ts2.plot(y="value") ###Output _____no_output_____ ###Markdown Test FewsTimeSeries and FewsTimeSeriesCollection ###Code import sys sys.path.append("..") import hkvfewspy as fewspy import matplotlib.pyplot as plt import os import pandas as pd ###Output _____no_output_____ ###Markdown `FewsTimeSeries`Import first series from Fews Pi-XML file. Even though file contains two series `FewsTimeSeries` will only import the first series in the file. ###Code fews_ts = fewspy.FewsTimeSeries.from_pi_xml("./test_2_series.xml") ###Output _____no_output_____ ###Markdown Plot the series by calling plot on the FewsTimeSeries object (plot is passed on to the `plot` function in pandas for Series and DataFrames). ###Code fews_ts.plot(y="value") plt.show() ###Output _____no_output_____ ###Markdown View metadata: ###Code fews_ts.header fews_ts.to_pi_xml("./temp_1series.xml") ###Output _____no_output_____ ###Markdown `FewsTimeSeriesCollection`To import all series from a Fews Pi-XML file, use `FewsTimeSeriesCollection`: ###Code fews_ts_collection = fewspy.FewsTimeSeriesCollection.from_pi_xml("./test_2_series.xml") ###Output _____no_output_____ ###Markdown Timeseries are collected in a DataFrame with header data as columns and the series in the events column: ###Code fews_ts_collection.timeseries ###Output _____no_output_____ ###Markdown Loop through timeseries to plot ###Code fig, ax = plt.subplots(1, 1) for i in range(fews_ts_collection.timeseries.shape[0]): series = fews_ts_collection.timeseries.events.iloc[i] series.plot(y="value", ax=ax, label=series.header["stationName"]) plt.show() ###Output _____no_output_____ ###Markdown Note that the objects stored in the timeseries DataFrame in the events column are actually `FewsTimeSeries` objects. This means that our earlier FewsTimeSeries object should be the same as the first timeseries in our FewsTimeSeriesCollection. 
###Code fews_ts_collection.timeseries.events[0] ###Output _____no_output_____ ###Markdown We will now test whether the FewsTimeSeries in the collection is the same as the single timeseries we imported earlier. But first, it is important to realize that when we wrote our earlier `FewsTimeSeries` object `fews_ts` to XML the header (endDate and startDate) were automatically updated from the data. We will do the same for the first timeseries in the `FewsTimeSeriesCollection` object. ###Code # get start and end date from timeseries and update header fews_ts_collection.timeseries.events[0]._update_header_dates() ###Output _____no_output_____ ###Markdown Now test for equality: ###Code fews_ts_collection.timeseries.events[0] == fews_ts ###Output _____no_output_____ ###Markdown It is possible to add data to the collection with the `add_series` method ###Code help(fews_ts_collection.add_series) ###Output Help on method add_series in module hkvfewspy.timeseries: add_series(dfseries, metadata) method of hkvfewspy.timeseries.FewsTimeSeriesCollection instance Add series to PiTimeSeries object. Parameters ---------- dfseries: pd.DataFrame Timeseries to add, must have DateTimeIndex and have columns with name "value" and "flag" metadata: dict dictionary containing header. Common entries values for include 'x', 'y', 'lat', lon', 'missVal', 'stationName', 'type', 'units', 'moduleInstanceId', 'qualifierId', 'parameterId', 'locationId' Notes ----- It is unclear whether the entries in header are required or optional. Some possible values for header entries are shown below in case they need to be supplied: - 'missVal': np.nan - 'stationName': np.nan - 'units': 'm' - 'type': 'instantaneous' ###Markdown Use data from first timeseries as "new" series: ###Code new_series = fews_ts.timeseries new_metadata = fews_ts.header fews_ts_collection.add_series(new_series, new_metadata) ###Output _____no_output_____ ###Markdown View our added timeseries in the DataFrame: ###Code fews_ts_collection.timeseries ###Output _____no_output_____ ###Markdown We can write the FewsTimeSeriesCollection to file with the same command as before: ###Code fews_ts_collection.to_pi_xml("./temp_3series.xml") ###Output _____no_output_____ ###Markdown Querying a FEWS database and returning FewsTimeSeries objects ###Code # set client pi = fewspy.pi pi.setClient(wsdl='http://localhost:8081/FewsPiService/fewspiservice?wsdl') query = pi.setQueryParameters(prefill_defaults=True) query.parameterIds(['P.meting.dagcal']) query.moduleInstanceIds(['ImportCAW']) query.locationIds(['66011cal']) query.startTime(pd.datetime(2010,1,1)) query.endTime(pd.datetime(2018,7,1)) query.clientTimeZone('Europe/Amsterdam') query.query["onlyManualEdits"] = False query.query fews_ts2_coll = pi.getFewsTimeSeries(queryParameters=query) fews_ts2_coll fews_ts2_coll.timeseries ###Output _____no_output_____ ###Markdown We queried specifically to get one timeseries, so extract FewsTimeSeries object from the collection: ###Code fews_ts2 = fews_ts2_coll.timeseries.events[0] fews_ts2.header fews_ts2.timeseries.head() fews_ts2.plot(y="value") ###Output _____no_output_____
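###Markdown Finally, a small sketch reusing the `to_pi_xml` methods demonstrated earlier (the output file names here are chosen only for illustration): the queried series, or the whole collection, can be written back to a Pi-XML file just like the file-based examples above. ###Code
# Export the single queried timeseries and the full collection to Pi-XML
fews_ts2.to_pi_xml("./temp_query_1series.xml")
fews_ts2_coll.to_pi_xml("./temp_query_collection.xml")
###Output _____no_output_____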
examples/01_solid_effect_and_overhauser_effect/se_oe_visualization.ipynb
###Markdown Microwave Field Strength $\gamma$$B_1$/2$\pi$ Dependence ###Code prefix = 'BDPA_Overhauser_gB1_0p2MHz_400MHz_263GHz_MAS8kHz' df_0p2mhz = assemble_results(prefix, data_dir) prefix = 'BDPA_Overhauser_400MHz_263GHz_MAS8kHz' df_1mhz = assemble_results(prefix, data_dir) prefix = 'BDPA_Overhauser_gB1_0p5MHz_400MHz_263GHz_MAS8kHz' df_0p5mhz = assemble_results(prefix, data_dir) fig = plt.figure(figsize=(8, 6), dpi=100) plt.plot(df_0p2mhz[0], df_0p2mhz[1], 'ro-', markerfacecolor='none', label='0.2MHz') plt.plot(df_0p5mhz[0], df_0p5mhz[1], 'gs-', markerfacecolor='none', label='0.5MHz') plt.plot(df_1mhz[0], df_1mhz[1], 'bD-', markerfacecolor='none', label='1.0MHz') plt.xlabel('Field (T)') plt.ylabel('Enhancement (a.u.)') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Dependence of Coupling Strengths Indicated by e-H Distances ###Code prefix = 'BDPA_Overhauser_gB1_0p5MHz_400MHz_263GHz_MAS8kHz' df_d5 = assemble_results(prefix, data_dir) prefix = 'BDPA_Overhauser_d4A_gB1_0p5MHz_400MHz_263GHz_MAS8kHz' df_d4 = assemble_results(prefix, data_dir) prefix = 'BDPA_Overhauser_d3A_gB1_0p5MHz_400MHz_263GHz_MAS8kHz' df_d3 = assemble_results(prefix, data_dir) fig = plt.figure(figsize=(8, 6), dpi=100) plt.plot(df_d3[0], df_d3[1], 'ro-', markerfacecolor='none', label='e-H 3A') plt.plot(df_d4[0], df_d4[1], 'gs-', markerfacecolor='none', label='e-H 4A') plt.plot(df_d5[0], df_d5[1], 'bD-', markerfacecolor='none', label='e-H 5A') plt.xlabel('Field (T)') plt.ylabel('Enhancement (a.u.)') plt.legend() plt.show() fig = plt.figure(figsize=(8, 6), dpi=100) plt.hlines(0, 9.3915, 9.4265, linestyle='dashed', color='k') plt.plot(df_d4[0], df_d4[1], 'ro-', markerfacecolor='none') plt.xlabel('Field (T)') plt.ylabel('Enhancement (a.u.)') plt.tick_params( direction='in', bottom=True, top=True, left=True, right=True ) plt.xlim(9.3915, 9.4265) # plt.savefig('SE_OE_400MHz_fp_v2.ps') plt.show() ###Output _____no_output_____
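###Markdown A small follow-up sketch (assuming, as the plotting calls above suggest, that `assemble_results` returns a table whose columns 0 and 1 hold the field in tesla and the enhancement): report the largest absolute enhancement of each e-H distance curve and the field at which it occurs. ###Code
for label, df in [('e-H 3A', df_d3), ('e-H 4A', df_d4), ('e-H 5A', df_d5)]:
    # columns 0 and 1 are taken as field (T) and enhancement, matching the plots above
    field = list(df[0])
    enhancement = list(df[1])
    i_best = max(range(len(enhancement)), key=lambda i: abs(enhancement[i]))
    print('{}: largest |enhancement| {:.2f} at {:.4f} T'.format(label, enhancement[i_best], field[i_best]))
###Output _____no_output_____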
data_cleaning/data_loading_issues_Python.ipynb
###Markdown Data Loading Issues with PythonThis notebook demonstrates data cleaning principles in Python. First we take a look at some encoding issues using a contrived example, and then we work through an example of loading some data as published on the Open Data portal. Note that this example is constructed with Python 3, which uses UTF8 encodings by default as a major difference from Python 2. There is also a similar notebook focused on R.- [Part 1: Encoding Issues](Encoding-Issues)- [Part 2: Example: Open Data](Example:-Open-Data) ###Code %matplotlib inline import requests import os import urllib.parse import re import pandas as pd import numpy as np ###Output _____no_output_____ ###Markdown Encoding IssuesComputers ultimately represent everything as a number, and over time, different operating systems, and different languages there have been many ways of representing text. Most encodings (except for ones designed specifically for Asian languages) are extensions of ASCII, which was designed for English, so the English alphabet and most common punctuation characters are usually not a problem, but ASCII does not include things like accented characters for French.To demonstrate some of the potential issues, I made a file in Notepad and saved it using the default ANSI encoding (this default is changing to UTF8 as of the May 2019 update of Windows 10). This shows French text with accents, and non-ASCII punctuation characters. This formatting is similar to Excel's settings when used in French to generate a CSV file - it uses a ; instead of , for the delimiter as , is the decimal separator.```DEPARTEMENT;DÉPENSES;PunctuationAdministration du pipe-line du Nord;123456,88;“–”Agence canadienne d’évaluation environnementale;999999,24;†•Agence de développement économique du Canada pour les régions du Québec;234567,00;€``` Issue 1: Can't read the file at all with UnicodeDecodeError ###Code # This will produce the error "UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc9 in position 1: invalid continuation byte" try: pd.read_csv('data/NotepadFrench.csv') except UnicodeDecodeError as error: print(error) ###Output 'utf-8' codec can't decode byte 0xc9 in position 13: invalid continuation byte ###Markdown In order to fix the UnicodeDecodeError, you need to determine the right encoding to load the file. There are many possibilities, fully documented here: https://docs.python.org/3/library/codecs.htmlstandard-encodings. Issue 2: Not all characters loaded properly - missing values and mojibake`latin-1` (also known as `ISO-8859-1`) is a common encoding for Western European text including French, but it's not showing the advanced punctuation characters as they map to unprintable control characters (though they are there, as demonstrated by computing the length of the string). ###Code no_punctuation = pd.read_csv('data/NotepadFrench.csv', sep=";", encoding="latin-1") no_punctuation["Punctuation Length"] = no_punctuation["Punctuation"].str.len() no_punctuation ###Output _____no_output_____ ###Markdown Another possibility is that characters map to incorrect visible characters. Here we attempt to load it using IBM863, 'MS-DOS French Canada', which was used by MS-DOS systems in Canada. ###Code pd.read_csv('data/NotepadFrench.csv', sep=";", encoding="IBM863") ###Output _____no_output_____ ###Markdown `windows-1252` is a Windows specific superset of `ISO-8859-1` which includes several additional characters. It is one of several different 'ANSI Codepages' for representing different languages. 
This is the most likely option if a file was created on a Windows system in English, French, or most Western European languages. ###Code wrong_decimal = pd.read_csv('data/NotepadFrench.csv', sep=";", encoding="windows-1252") wrong_decimal ###Output _____no_output_____ ###Markdown While not strictly an encoding issue, one other issue with dealing with multiple languages is different localization settings like date formats and decimal separators. `DÉPENSES` has been read in as text rather than a number as Pandas expects a . not , by default. ###Code wrong_decimal["DÉPENSES"].sum() correct_sample = pd.read_csv('data/NotepadFrench.csv', sep=";", encoding="windows-1252", decimal=",") correct_sample correct_sample["DÉPENSES"].sum() ###Output _____no_output_____ ###Markdown Here we save a version of the file which plays nicely with Pandas defaults. ###Code correct_sample.to_csv('data/utf8French.csv', index=False) pd.read_csv('data/utf8French.csv') ###Output _____no_output_____ ###Markdown Issue 3: Loading UTF8 into ExcelExcel defaults to the ANSI codepage used by the operating system (e.g. Windows-1252), and will not properly load UTF8 text via File->Open.The solution is to import the text into Excel instead and choose the appropriate encoding.![Loading UTF8 into Excel using Text Import](ExcelImport.png) ###Code pd.read_csv('data/utf8French.csv', encoding="ansi") ###Output _____no_output_____ ###Markdown Example: Open DataFuel consumption ratings (NRCAN): https://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64In this scenario, I want to do some analysis on a dataset published to the open data portal. This dataset is spread over multiple files for different years, and there are some minor changes to the columns over time. Like many datasets on Open Data it has some features which make it easier for people to look at but harder for machines - the column names are formatted over two rows, and there is some extra descriptions of fields below the main data section. A bit more unusually, it also has extra blank lines between rows and for some years, there are lots of extra blank columns and rows at the end.First I wrote some code to download all the files matching a specific pattern because there's a lot of them. To start, I am downloading all of the English Fuel Consumption Ratings files for gas cars. 
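The files I want are picked out with a small regular expression on the file name. As a quick illustration of the pattern passed to the helper below (the example file names are taken from the download log further down, so this is just a sanity check rather than part of the pipeline):

```
import re

# The English gas-vehicle files look like "MY2019 Fuel Consumption Ratings.csv",
# while the French ones start with "AM", so a prefix of MY plus a four-digit year
# is enough to separate them
en_gas_re = re.compile(r"MY[0-9]{4} Fuel Consumption Ratings.*csv")
print(bool(en_gas_re.match("MY2019 Fuel Consumption Ratings.csv")))            # True
print(bool(en_gas_re.match("AM2019 Cotes de consommation de carburant.csv")))  # False
```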
###Code def download_files(pattern): # Get all the CSV files associated with this dataset that match the pattern info_response = requests.get("https://open.canada.ca/data/api/action/package_show?id=98f1a129-f628-4ce4-b24d-6f16bf24dd64") dataset_info = info_response.json() print(dataset_info["result"]["notes_translated"]["en"]) files = [] directory = "downloads" if not os.path.exists(directory): os.mkdir(directory) en_gas_re = re.compile(pattern) for resource in dataset_info["result"]["resources"]: if resource['format'] == 'CSV': fname = os.path.basename(resource['url']) # Get just the filename fname = urllib.parse.unquote(fname) # The URLs will have things like %20 for spaces, this cleans it up if en_gas_re.match(fname) is not None: fname = os.path.join(directory, fname) files.append(fname) if not os.path.exists(fname): print('Downloading ' + resource['name'] + ' from ' + resource['url']) response = requests.get(resource['url']) with open(fname, "w", encoding='utf8') as csvfile: csvfile.write(response.text) return files en_consumption_files = download_files("MY[0-9]{4} Fuel Consumption Ratings.*csv") ###Output Datasets provide model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. To help you compare vehicles from different model years, the fuel consumption ratings for 1995 to 2014 vehicles have been adjusted to reflect the improved testing that is more representative of everyday driving. Note that these are approximate values that were generated from the original ratings, not from vehicle testing. For more information on fuel consumption testing, visit: https://www.nrcan.gc.ca/energy/efficiency/transportation/21008. To compare the fuel consumption information of 1995 to 2019 model year vehicles, use the fuel consumption ratings search tool at https://fcr-ccc.nrcan-rncan.gc.ca/en. 
(Data update: June 27, 2019) Downloading 2000 Fuel Consumption Ratings from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2000%20Fuel%20Consumption%20Ratings%205-cycle.csv Downloading 2001 Fuel Consumption Ratings from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2001%20Fuel%20Consumption%20Ratings%205-cycle.csv Downloading 2002 Fuel Consumption Ratings from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2002%20Fuel%20Consumption%20Ratings%205-cycle.csv Downloading 2003 Fuel Consumption Ratings from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2003%20Fuel%20Consumption%20Ratings%205-cycle.csv Downloading 2004 Fuel Consumption Ratings from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2004%20Fuel%20Consumption%20Ratings%205-cycle.csv Downloading 2005 Fuel Consumption Ratings from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2005%20Fuel%20Consumption%20Ratings%205-cycle.csv Downloading 2006 Fuel Consumption Ratings from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2006%20Fuel%20Consumption%20Ratings%205-cycle.csv Downloading 2007 Fuel Consumption Ratings from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2007%20Fuel%20Consumption%20Ratings%205-cycle.csv Downloading 2008 Fuel Consumption Ratings from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2008%20Fuel%20Consumption%20Ratings%205-cycle.csv Downloading 2009 Fuel Consumption Ratings from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2009%20Fuel%20Consumption%20Ratings%205-cycle.csv Downloading 2010 Fuel Consumption Ratings from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2010%20Fuel%20Consumption%20Ratings%205-cycle.csv Downloading 2011 Fuel Consumption Ratings from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2011%20Fuel%20Consumption%20Ratings%205-cycle.csv Downloading 2012 Fuel Consumption Ratings from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2012%20Fuel%20Consumption%20Ratings%205-cycle.csv Downloading 2013 Fuel Consumption Ratings (2018-07-17) from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2013%20Fuel%20Consumption%20Ratings%20(5-cycle).csv Downloading 2014 Fuel Consumption Ratings (2018-07-17) from https://www.nrcan.gc.ca/sites/www.rncan.gc.ca/files/oee/files/csv/MY2014%20Fuel%20Consumption%20Ratings%20(5-cycle).csv Downloading 2015 Fuel Consumption Ratings (2016-10-21) from https://www.nrcan.gc.ca/sites/www.nrcan.gc.ca/files/oee/files/csv/MY2015%20Fuel%20Consumption%20Ratings%20(5-cycle).csv Downloading 2016 Fuel Consumption Ratings (2017-01-04) from https://www.nrcan.gc.ca/sites/www.nrcan.gc.ca/files/oee/files/csv/MY2016%20Fuel%20Consumption%20Ratings.csv Downloading 2017 Fuel Consumption Ratings (2018-07-17) from https://www.nrcan.gc.ca/sites/www.nrcan.gc.ca/files/oee/files/csv/MY2017%20Fuel%20Consumption%20Ratings.csv Downloading 2018 Fuel Consumption Ratings (2019-01-29) from https://www.nrcan.gc.ca/sites/www.nrcan.gc.ca/files/oee/files/csv/MY2018%20Fuel%20Consumption%20Ratings.csv Downloading 2019 Fuel Consumption Ratings (2019-05-14) from https://www.nrcan.gc.ca/sites/www.nrcan.gc.ca/files/oee/files/csv/MY2019%20Fuel%20Consumption%20Ratings.csv ###Markdown Cleaning up just the most recent year step by stepIn this section, I go through the steps to look at and fix issues with one file. 
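Before handing a file to Pandas, it can also help to peek at a few raw lines to see the two-row header and the stray empty fields for yourself. A minimal sketch, assuming the MY2019 file was downloaded by the helper above:

```
# Print the first few raw lines of the downloaded file without parsing them;
# repr() makes the delimiters and any trailing empty fields visible
with open("downloads/MY2019 Fuel Consumption Ratings.csv", encoding="utf8") as f:
    for _ in range(5):
        print(repr(f.readline()))
```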
###Code data = pd.read_csv("downloads/MY2019 Fuel Consumption Ratings.csv") data.head() data.tail() ###Output _____no_output_____ ###Markdown Looking at the top and the bottom of the file, I see that there are a lot of columns that look to be empty, and also a lot of empty rows at the bottom. Identify and drop the extra columnsLooking at the count() of a column tells us how many values are not NaN. Here we see many columns with count() of 0 ###Code data.count() def drop_empty_columns(data): empty_columns = data.count() == 0 # Get a list of the columns with no data empty_columns = empty_columns[empty_columns].index return data.drop(columns=empty_columns) data = drop_empty_columns(data) data.columns ###Output _____no_output_____ ###Markdown Update the column namesSince the column names are spread over multiple rows, some of the automatically inferred column names aren't very helpful. In this case we can copy the output from the column above into the code and use it as a start rather than retyping everything. ###Code columns = ['Year', 'Make', 'Model', 'Class', 'Engine Size', 'Cylinders', 'Transmission', 'Fuel Type', 'Fuel Consumption City', 'Fuel Consumption Hwy', 'Fuel Consumption Comb', 'Comb (mpg)', 'CO2 Emissions', 'CO2 Rating', 'Smog Rating'] data.columns = columns ###Output _____no_output_____ ###Markdown Drop rows which aren't data rowsIn this dataset, there are rows at the bottom which have some explanations spread over a couple columns. Let's drop everything that doesn't have at least 5 non-NA values to be safe. Be careful as this could drop rows containing missing data, but that isn't a problem with these files. The first row has more than 5 non-NA value so we'll drop it separately.If we wanted to drop only rows that are completely empty, we could use `data.dropna(axis=0, how='all')`. Setting `how='any'` would drop rows where anything is missing. You can also drop specific rows by index using `data.drop(0)` ###Code data.dropna(thresh=5) def drop_extra_rows(data): # Drop any row which don't have at least 5 non-NA values return data.drop(0).dropna(thresh=5) data = drop_extra_rows(data) data.head() ###Output _____no_output_____ ###Markdown Convert Data TypesBecause of the extra rows, Pandas interpreted most of the numeric columns as `object`, it's most generic data type, so we need to convert them so we can use them as numbers. ###Code data.describe() data.dtypes ###Output _____no_output_____ ###Markdown This attempts to convert all columns to numeric data types, ignoring those that can't be converted. An alternative for more control is to use `astype`. ###Code data = data.apply(pd.to_numeric, errors='ignore') data.dtypes ###Output _____no_output_____ ###Markdown All of the columns that are numbers are now being treated as numbers. 
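As mentioned above, `astype` is the more explicit alternative to `pd.to_numeric`. A sketch of what that might look like here, assuming the renamed columns from the steps above (it fails loudly if a column still contains stray text, which can be a feature):

```
# Convert a known list of columns explicitly instead of letting to_numeric guess
numeric_cols = ['Year', 'Engine Size', 'Cylinders',
                'Fuel Consumption City', 'Fuel Consumption Hwy',
                'Fuel Consumption Comb', 'Comb (mpg)',
                'CO2 Emissions', 'CO2 Rating', 'Smog Rating']
data[numeric_cols] = data[numeric_cols].astype(float)
```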
###Code data.describe() ###Output _____no_output_____ ###Markdown Putting it all together to look at more years ###Code def clean_data(df): df = drop_empty_columns(df) columns = ['Year', 'Make', 'Model', 'Class', 'Engine Size', 'Cylinders', 'Transmission', 'Fuel Type', 'Fuel Consumption City', 'Fuel Consumption Hwy', 'Fuel Consumption Comb', 'Comb (mpg)', 'CO2 Emissions', 'CO2 Rating', 'Smog Rating'] data.columns = columns df = drop_extra_rows(df) df = df.apply(pd.to_numeric, errors='ignore') return df my2000 = pd.read_csv("downloads/MY2000 Fuel Consumption Ratings 5-cycle.csv") my2000.head() my2000 = clean_data(my2000) my2000.tail() my2000.head() my2000.columns, data.columns ###Output _____no_output_____ ###Markdown Problem: Columns have changed over timeIt looks like there are 2 columns for CO2 and Smog ratings that were added in more recent years. This isn't too bad because it's just a couple columns at the end; if the columns were changing around a lot every year it could get complicated. ###Code # Update column names def set_columns(data, columns): if len(data.columns) < len(columns): diff = len(data.columns) - len(columns) data.columns = columns[:diff] # Add missing columns for i in range(diff, 0): data[columns[i]] = np.nan else: data.columns = columns return data def clean_data(df): columns = ['Year', 'Make', 'Model', 'Class', 'Engine Size', 'Cylinders', 'Transmission', 'Fuel Type', 'Fuel Consumption City', 'Fuel Consumption Hwy', 'Fuel Consumption Comb', 'Comb (mpg)', 'CO2 Emissions', 'CO2 Rating', 'Smog Rating'] df = drop_empty_columns(df) df = set_columns(df, columns) df = drop_extra_rows(df) df = df.apply(pd.to_numeric, errors='ignore') return df ###Output _____no_output_____ ###Markdown Putting it all together! ###Code def load_and_clean_emissions_files(files): data_frames = [] for f in en_consumption_files: df = pd.read_csv(f) df = clean_data(df) data_frames.append(df) return pd.concat(data_frames, ignore_index=True) combined = load_and_clean_emissions_files(en_consumption_files) combined.describe() combined.groupby('Year')["Fuel Consumption Comb"].mean().plot() ###Output _____no_output_____ ###Markdown And now I want to do all that again but using the French data files - maybe to compare that they're the same. Now that it's all set up, it's easy! ###Code fr_consumption_files = download_files("AM[0-9]{4} Cotes de consommation de carburant.*csv") combined_fr =load_and_clean_emissions_files(fr_consumption_files) # All of the numbers match! combined_fr.mean() == combined.mean() ###Output _____no_output_____ ###Markdown There's often more than one way to do itAnother way to deal with a two line header, or other scenarios where there is extra data at the start of a file, is to set the `names`, `skiprows`, and `header` options when reading in the file. Another similar option is `skipfooter` if you have a known number of footer lines to skip. However, with this dataset, we can't use the same code to handle both the varying numbers of columns and the extra junk columns easily on read. 
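The `skipfooter` option mentioned above isn't demonstrated in this notebook; a rough sketch of how it might be used, where the footer count of 10 is a made-up value for illustration (`skipfooter` also forces the slower python parsing engine):

```
# Illustrative only: drop the two-row header plus blank line with skiprows,
# and an assumed 10 trailing note lines with skipfooter
pd.read_csv('downloads/MY2019 Fuel Consumption Ratings.csv',
            header=None, skiprows=3, skipfooter=10, engine='python').head()
```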
###Code # It will deal with too many columns just fine newer_colnames = ['Year', 'Make', 'Model', 'Class', 'Engine Size', 'Cylinders', 'Transmission', 'Fuel Type', 'Fuel Consumption City', 'Fuel Consumption Hwy', 'Fuel Consumption Comb', 'Comb (mpg)', 'CO2 Emissions', 'CO2 Rating', 'Smog Rating'] pd.read_csv('downloads/MY2000 Fuel Consumption Ratings 5-cycle.csv', names=newer_colnames, skiprows=3, header=None).head() # But because of the extra blank columns in some of the files I also have to set usecols # but this throws an error if I use it on the older files pd.read_csv('downloads/MY2019 Fuel Consumption Ratings.csv', names=newer_colnames, usecols=range(0, len(newer_colnames)), skiprows=3, header=None).head() ###Output _____no_output_____ ###Markdown If our problem with column names was just that they were spread out over two rows, we could write some code to automatically merge them. ###Code # Update column names def merge_2_row_names(data): new_columns = [] for i, c in enumerate(data.columns): new_name = "" first_row_val = data.iloc[0, i] if c.startswith("Unnamed"): new_name = first_row_val elif not isinstance(first_row_val, str): new_name = c else: new_name = c + " " + first_row_val new_columns.append(new_name) data.columns = new_columns return data.drop(0) merge_2_row_names(pd.read_csv('downloads/MY2000 Fuel Consumption Ratings 5-cycle.csv')) ###Output _____no_output_____
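###Markdown Circling back to the encoding guesswork in Part 1: rather than trying codecs one by one, the third-party `chardet` package can guess an encoding from the raw bytes. A sketch, assuming `chardet` is installed (`pip install chardet`) and keeping in mind that the result is only a statistical guess:

```
# Read the raw bytes and let chardet guess the encoding, then feed the guess to Pandas
import chardet

with open('data/NotepadFrench.csv', 'rb') as f:
    raw = f.read()

guess = chardet.detect(raw)   # e.g. {'encoding': ..., 'confidence': ..., 'language': ...}
print(guess)
pd.read_csv('data/NotepadFrench.csv', sep=';', decimal=',', encoding=guess['encoding'])
```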
lessons/ETLPipelines/18_final_exercise/18_final_exercise-solution.ipynb
###Markdown Final Exercise - Putting it All TogetherIn this last exercise, you'll write a full ETL pipeline for the GDP data. That means you'll extract the World Bank data, transform the data, and load the data all in one go. In other words, you'll want one Python script that can do the entire process.Why would you want to do this? Imagine working for a company that creates new data every day. As new data comes in, you'll want to write software that periodically and automatically extracts, transforms, and loads the data.To give you a sense for what this is like, you'll extract the GDP data one line at a time. You'll then transform that line of data and load the results into a SQLite database. The code in this exercise is somewhat tricky.Here is an explanation of how this Jupyter notebook is organized:1. The first cell connects to a SQLite database called worldbank.db and creates a table to hold the gdp data. You do not need to do anything in this code cell other than executing the cell.2. The second cell has a function called extract_lines(). You don't need to do anything in this code cell either besides executing the cell. This function is a [Python generator](https://wiki.python.org/moin/Generators). You don't need to understand how this works in order to complete the exercise. Essentially, a generator is like a regular function except instead of a return statement, a generator has a yield statement. Generators allow you to use functions in a for loop. In essence, this function will allow you to read in a data file one line at a time, run a transformation on that row of data, and then move on to the next row in the file.3. The third cell contains a function called transform_indicator_data(). This function receives a line from the csv file and transforms the data in preparation for a load step.4. The fourth cell contains a function called load_indicator_data(), which loads the transformed data into the gdp table in the worldbank.db database.5. The fifth cell runs the ETL pipeline.6. The sixth cell runs a query against the database to make sure everything worked correctly.You'll need to modify the third and fourth cells.
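As a quick aside, the generator idea described in point 2 can be illustrated with a tiny standalone example (not part of the exercise):

```
# A tiny generator: yield hands back one value at a time instead of returning
# everything at once, which is why it can be consumed in a for loop
def count_up_to(n):
    i = 1
    while i <= n:
        yield i
        i += 1

for value in count_up_to(3):
    print(value)   # prints 1, then 2, then 3
```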
###Code # run this cell to create a database and a table, called gdp, to hold the gdp data # You do not need to change anything in this code cell import sqlite3 # connect to the database # the database file will be worldbank.db # note that sqlite3 will create this database file if it does not exist already conn = sqlite3.connect('worldbank.db') # get a cursor cur = conn.cursor() # drop the test table in case it already exists cur.execute("DROP TABLE IF EXISTS gdp") # create the test table including project_id as a primary key cur.execute("CREATE TABLE gdp (countryname TEXT, countrycode TEXT, year INTEGER, gdp REAL, PRIMARY KEY (countrycode, year));") conn.commit() conn.close() # Generator for reading in one line at a time # generators are useful for data sets that are too large to fit in RAM # You do not need to change anything in this code cell def extract_lines(file): while True: line = file.readline() if not line: break yield line # TODO: fill out the code wherever you find a TODO in this cell import pandas as pd import numpy as np import sqlite3 # transform the indicator data def transform_indicator_data(data, colnames): # get rid of quote marks for i, datum in enumerate(data): data[i] = datum.replace('"','') country = data[0] # filter out values that are not countries non_countries = ['World', 'High income', 'OECD members', 'Post-demographic dividend', 'IDA & IBRD total', 'Low & middle income', 'Middle income', 'IBRD only', 'East Asia & Pacific', 'Europe & Central Asia', 'North America', 'Upper middle income', 'Late-demographic dividend', 'European Union', 'East Asia & Pacific (excluding high income)', 'East Asia & Pacific (IDA & IBRD countries)', 'Euro area', 'Early-demographic dividend', 'Lower middle income', 'Latin America & Caribbean', 'Latin America & the Caribbean (IDA & IBRD countries)', 'Latin America & Caribbean (excluding high income)', 'Europe & Central Asia (IDA & IBRD countries)', 'Middle East & North Africa', 'Europe & Central Asia (excluding high income)', 'South Asia (IDA & IBRD)', 'South Asia', 'Arab World', 'IDA total', 'Sub-Saharan Africa', 'Sub-Saharan Africa (IDA & IBRD countries)', 'Sub-Saharan Africa (excluding high income)', 'Middle East & North Africa (excluding high income)', 'Middle East & North Africa (IDA & IBRD countries)', 'Central Europe and the Baltics', 'Pre-demographic dividend', 'IDA only', 'Least developed countries: UN classification', 'IDA blend', 'Fragile and conflict affected situations', 'Heavily indebted poor countries (HIPC)', 'Low income', 'Small states', 'Other small states', 'Not classified', 'Caribbean small states', 'Pacific island small states'] if country not in non_countries: data_array = np.array(data, ndmin=2) data_array.reshape(1, 63) df = pd.DataFrame(data_array, columns=colnames).replace('', np.nan) df.drop(['\n', 'Indicator Name', 'Indicator Code'], inplace=True, axis=1) # Reshape the data sets so that they are in long format df_melt = df.melt(id_vars=['Country Name', 'Country Code'], var_name='year', value_name='gdp') results = [] for index, row in df_melt.iterrows(): country, countrycode, year, gdp = row if str(gdp) != 'nan': results.append([country, countrycode, year, gdp]) return results # TODO: fill out the code wherever you find a TODO in this cell def load_indicator_data(results): conn = sqlite3.connect('worldbank.db') cur = conn.cursor() if results: for result in results: countryname, countrycode, year, gdp = result sql_string = 'INSERT INTO gdp (countryname, countrycode, year, gdp) VALUES ("{}", "{}", {}, 
{});'.format(countryname, countrycode, year, gdp) # connect to database and execute query try: cur.execute(sql_string) except Exception as e: print('error occurred:', e, result) conn.commit() conn.close() return None # Execute this code cell to run the ETL pipeline # You do not need to change anything in this cell with open('../data/gdp_data.csv') as f: for line in extract_lines(f): data = line.split(',') if len(data) == 63: if data[0] == '"Country Name"': colnames = [] # get rid of quote marks for i, datum in enumerate(data): colnames.append(datum.replace('"','')) else: # transform and load the line of indicator data results = transform_indicator_data(data, colnames) load_indicator_data(results) # Execute this code cell to output the values in the gdp table # You do not need to change anything in this cell # connect to the database # the database file will be worldbank.db # note that sqlite3 will create this database file if it does not exist already conn = sqlite3.connect('worldbank.db') # get a cursor cur = conn.cursor() # create the test table including project_id as a primary key df = pd.read_sql("SELECT * FROM gdp", con=conn) conn.commit() conn.close() df ###Output _____no_output_____
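###Markdown As an aside (not required by the exercise), sqlite3 also supports parameterized queries, which let the driver handle quoting of values such as country names that contain quotes. A sketch of the load step written that way; the function name here is just for illustration:

```
# Same load step, but with a parameterized INSERT instead of string formatting
import sqlite3

def load_indicator_data_parameterized(results):
    conn = sqlite3.connect('worldbank.db')
    cur = conn.cursor()
    for countryname, countrycode, year, gdp in (results or []):
        try:
            cur.execute(
                "INSERT INTO gdp (countryname, countrycode, year, gdp) VALUES (?, ?, ?, ?)",
                (countryname, countrycode, year, gdp))
        except Exception as e:
            print('error occurred:', e)
    conn.commit()
    conn.close()
```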
pyspark_sql.ipynb
###Markdown Pandas vs. Pyspark SQL |Function|Pandas|Pyspark SQL||--|--|--||Import|import pandas as pd|import pyspark from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate()||Read csv|df = pd.read_csv(filepath, header=)|df = spark.read.csv(filepath, header=)||View df|df.head(n)|df.take(n): return list of Row objects df.collect(): get all of the data from DataFrame, be careful to use this for large dataset, can easily crash the driver node df.show(): print out dataframe in a nice format df.limit(n): return a new DataFramedf.head(n): return list, similar to take function||Schema|df.types|df.printSchema() df.types||Access a column|df.column1 df['column1']|df.column1 df['column1'] df.select(col('column1'))||Select multiple columns|df[['column1', 'column2']]| df.select('column1', 'column2').show(3)||Add a column|df['NewColumn'] = 2\*df['Column']|df.withColumn('DoubleColumn',2\*df['Acolumn'])||Rename a column|df.rename(columns={'ExisingColumnName':'NewColumnName'})|df.withColumnRenamed(ExistingColumnName, NewColumnName)||Remove columns|df.drop('ColumnName', axis=1, inplace=True)|df.drop('ColumnName') df.drop('ColumnName1', 'ColumnName2', 'ColumnName3'): doesn't need to put column names in a list||Group by|df.groupby('Column')|df.groupBy('Column')||Filter rows|df[df.column > 1]| df.filter(col('column') > 1)||Get unique rows|df.column.unique()|df.select(column).distinct().show()||Sort rows|df.column.sort_values(by=)|df.orderBy('column')||Append rows\/dfs|pd.concat([df, df1], axis=0)|df.union(df1)||Length of df|len(df) df.shape[0]|df.count()||Get count of unique values in a column|df['column'].value_counts()|df.groupBy('column').count().show()||Built-in function (eg. mean)|df.column1.mean()|from pyspark.sql.functions import mean df.select(mean(df.column1)).show()| Download and install Spark ###Code !ls !apt-get update !apt-get install openjdk-8-jdk-headless -qq > /dev/null !wget -q http://archive.apache.org/dist/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz !tar xf spark-2.3.1-bin-hadoop2.7.tgz !pip install -q findspark ###Output 0% [Working] Get:1 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease [3,626 B] 0% [Connecting to archive.ubuntu.com (91.189.88.142)] [Connecting to security.u 0% [Connecting to archive.ubuntu.com (91.189.88.142)] [Connecting to security.u 0% [1 InRelease gpgv 3,626 B] [Connecting to archive.ubuntu.com (91.189.88.142) Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB] 0% [1 InRelease gpgv 3,626 B] [Waiting for headers] [2 InRelease 14.2 kB/88.7 k Ign:3 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease 0% [1 InRelease gpgv 3,626 B] [Waiting for headers] [2 InRelease 20.0 kB/88.7 k 0% [1 InRelease gpgv 3,626 B] [Waiting for headers] [Waiting for headers] [Wait Hit:4 http://archive.ubuntu.com/ubuntu bionic InRelease 0% [1 InRelease gpgv 3,626 B] [Waiting for headers] [Waiting for headers] [Wait Get:5 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic InRelease [15.9 kB] Ign:6 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease Get:7 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release [697 B] Hit:8 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release Get:9 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release.gpg [836 B] Get:10 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB] Hit:11 
http://ppa.launchpad.net/cran/libgit2/ubuntu bionic InRelease Get:12 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ Packages [56.8 kB] Get:13 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB] Hit:14 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu bionic InRelease Get:15 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease [21.3 kB] Get:16 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [1,411 kB] Get:17 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [2,150 kB] Get:18 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [423 kB] Ign:20 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Packages Get:20 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Packages [772 kB] Get:21 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main Sources [1,761 kB] Get:22 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [452 kB] Get:23 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [2,183 kB] Get:24 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main amd64 Packages [901 kB] Get:25 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [2,582 kB] Get:26 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic/main amd64 Packages [41.5 kB] Fetched 13.0 MB in 4s (3,103 kB/s) Reading package lists... Done ###Markdown Setup environment ###Code import os os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" os.environ["SPARK_HOME"] = "/content/spark-2.3.1-bin-hadoop2.7" import findspark findspark.init() from pyspark import SparkContext sc = SparkContext.getOrCreate() import pyspark from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate() spark ###Output _____no_output_____ ###Markdown Downloading and preprocessing Chicago's Reported Crime Data ###Code !wget https://data.cityofchicago.org/api/views/ijzp-q8t2/rows.csv?accessType=DOWNLOAD !mv rows.csv\?accessType\=DOWNLOAD reported-crimes.csv !ls from pyspark.sql.functions import to_timestamp,col,lit rc = spark.read.csv('reported-crimes.csv',header=True).withColumn('Date',to_timestamp(col('Date'),'MM/dd/yyyy hh:mm:ss a')).filter(col('Date') <= lit('2018-11-11')) rc.show(5) ###Output +--------+-----------+-------------------+--------------------+----+------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+--------------------+------------+-------------+--------------------+ | ID|Case Number| Date| Block|IUCR|Primary Type| Description|Location Description|Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|X Coordinate|Y Coordinate|Year| Updated On| Latitude| Longitude| Location| +--------+-----------+-------------------+--------------------+----+------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+--------------------+------------+-------------+--------------------+ |10224738| HY411648|2015-09-05 13:30:00| 043XX S WOOD ST|0486| BATTERY|DOMESTIC BATTERY ...| RESIDENCE| false| true|0924| 009| 12| 61| 08B| 1165074| 1875917|2015|02/10/2018 03:50:...|41.815117282|-87.669999562|(41.815117282, -8...| |10224739| HY411615|2015-09-04 11:30:00| 008XX N CENTRAL AVE|0870| THEFT| POCKET-PICKING| CTA BUS| false| false|1511| 015| 29| 25| 06| 1138875| 1904869|2015|02/10/2018 
03:50:...|41.895080471|-87.765400451|(41.895080471, -8...| |11646166| JC213529|2018-09-01 00:01:00|082XX S INGLESIDE...|0810| THEFT| OVER $500| RESIDENCE| false| true|0631| 006| 8| 44| 06| null| null|2018|04/06/2019 04:04:...| null| null| null| |10224740| HY411595|2015-09-05 12:45:00| 035XX W BARRY AVE|2023| NARCOTICS|POSS: HEROIN(BRN/...| SIDEWALK| true| false|1412| 014| 35| 21| 18| 1152037| 1920384|2015|02/10/2018 03:50:...|41.937405765|-87.716649687|(41.937405765, -8...| |10224741| HY411610|2015-09-05 13:00:00| 0000X N LARAMIE AVE|0560| ASSAULT| SIMPLE| APARTMENT| false| true|1522| 015| 28| 25| 08A| 1141706| 1900086|2015|02/10/2018 03:50:...|41.881903443|-87.755121152|(41.881903443, -8...| +--------+-----------+-------------------+--------------------+----+------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+--------------------+------------+-------------+--------------------+ only showing top 5 rows ###Markdown Schema ###Code rc.printSchema() rc.columns # declare schema from pyspark.sql.types import StructType, StructField, StringType, TimestampType, BooleanType, IntegerType, DoubleType labels = [('ID', StringType()), ('Case Number', StringType()), ('Date', TimestampType()), ('Block', StringType()), ('IUCR', StringType()), ('Primary Type', StringType()), ('Description', StringType()), ('Location Description', StringType()), ('Arrest', StringType()), ('Domestic', BooleanType()), ('Beat', StringType()), ('District', StringType()), ('Ward', StringType()), ('Community Area', StringType()), ('FBI Code', StringType()), ('X Coordinate', StringType()), ('Y Coordinate', StringType()), ('Year', IntegerType()), ('Updated On', StringType()), ('Latitude', DoubleType()), ('Longitude', StringType()), ('Location', StringType()) ] schema = StructType([StructField(x[0], x[1], True) for x in labels]) ###Output _____no_output_____ ###Markdown Working with columns ###Code rc.select('IUCR').show(5) # rc.select(col('IUCR')).show(5) rc.select('Case Number', 'Date', 'Arrest').show() # add a column named One with entries all 1 from pyspark.sql.functions import lit rc.withColumn('One', lit(1)).show(5) # drop column rc = rc.drop('IUCR') rc.show(5) ###Output +--------+-----------+-------------------+--------------------+------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+--------------------+------------+-------------+--------------------+ | ID|Case Number| Date| Block|Primary Type| Description|Location Description|Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|X Coordinate|Y Coordinate|Year| Updated On| Latitude| Longitude| Location| +--------+-----------+-------------------+--------------------+------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+--------------------+------------+-------------+--------------------+ |10224738| HY411648|2015-09-05 13:30:00| 043XX S WOOD ST| BATTERY|DOMESTIC BATTERY ...| RESIDENCE| false| true|0924| 009| 12| 61| 08B| 1165074| 1875917|2015|02/10/2018 03:50:...|41.815117282|-87.669999562|(41.815117282, -8...| |10224739| HY411615|2015-09-04 11:30:00| 008XX N CENTRAL AVE| THEFT| POCKET-PICKING| CTA BUS| false| false|1511| 015| 29| 25| 06| 1138875| 1904869|2015|02/10/2018 03:50:...|41.895080471|-87.765400451|(41.895080471, -8...| |11646166| JC213529|2018-09-01 00:01:00|082XX S INGLESIDE...| THEFT| OVER 
$500| RESIDENCE| false| true|0631| 006| 8| 44| 06| null| null|2018|04/06/2019 04:04:...| null| null| null| |10224740| HY411595|2015-09-05 12:45:00| 035XX W BARRY AVE| NARCOTICS|POSS: HEROIN(BRN/...| SIDEWALK| true| false|1412| 014| 35| 21| 18| 1152037| 1920384|2015|02/10/2018 03:50:...|41.937405765|-87.716649687|(41.937405765, -8...| |10224741| HY411610|2015-09-05 13:00:00| 0000X N LARAMIE AVE| ASSAULT| SIMPLE| APARTMENT| false| true|1522| 015| 28| 25| 08A| 1141706| 1900086|2015|02/10/2018 03:50:...|41.881903443|-87.755121152|(41.881903443, -8...| +--------+-----------+-------------------+--------------------+------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+--------------------+------------+-------------+--------------------+ only showing top 5 rows ###Markdown Working with rows ###Code # add new rows one_day = spark.read.csv('reported-crimes.csv',header=True).withColumn('Date',to_timestamp(col('Date'),'MM/dd/yyyy hh:mm:ss a')).filter(col('Date') == lit('2018-11-12')).drop('IUCR') one_day.count() rc.union(one_day).orderBy('Date', ascending=False).show(5) rc.groupBy('Primary Type').count().orderBy('count', ascending=False).show() ###Output +--------------------+-------+ | Primary Type| count| +--------------------+-------+ | THEFT|1418449| | BATTERY|1232245| | CRIMINAL DAMAGE| 771501| | NARCOTICS| 711748| | OTHER OFFENSE| 418868| | ASSAULT| 418512| | BURGLARY| 388037| | MOTOR VEHICLE THEFT| 314131| | DECEPTIVE PRACTICE| 266347| | ROBBERY| 255598| | CRIMINAL TRESPASS| 193371| | WEAPONS VIOLATION| 70662| | PROSTITUTION| 68329| |PUBLIC PEACE VIOL...| 47785| |OFFENSE INVOLVING...| 46094| | CRIM SEXUAL ASSAULT| 26733| | SEX OFFENSE| 25409| |INTERFERENCE WITH...| 15140| | GAMBLING| 14422| |LIQUOR LAW VIOLATION| 14068| +--------------------+-------+ only showing top 20 rows ###Markdown what percentage of reported crimes resulted in an arrest ###Code # check distinct values in Arrest column rc.select('Arrest').distinct().show() rc.filter(col('Arrest') == 'true').count() / rc.count() ###Output _____no_output_____ ###Markdown waht are the top 3 locations for reported crimes ###Code rc.groupBy('Location Description').count().orderBy('count', ascending=False).show(3) ###Output +--------------------+-------+ |Location Description| count| +--------------------+-------+ | STREET|1770581| | RESIDENCE|1145281| | APARTMENT| 698509| +--------------------+-------+ only showing top 3 rows ###Markdown Built-in functions ###Code from pyspark.sql import functions # list all built-in functions in functions module print(dir(functions)) help(functions.substring) from pyspark.sql.functions import lower, upper, substring rc.select(lower(col('Primary Type')), upper(col('Primary Type')), substring(col('Primary Type'), 1, 4)).show(5) from pyspark.sql.functions import min, max rc.select(min(col('Date')), max(col('Date'))).show(1) from pyspark.sql.functions import date_add, date_sub rc.select(date_sub(min(col('Date')), 3), date_add(max(col('Date')), 3)).show(1) ###Output +----------------------+----------------------+ |date_sub(min(Date), 3)|date_add(max(Date), 3)| +----------------------+----------------------+ | 2000-12-29| 2018-11-13| +----------------------+----------------------+ ###Markdown Dates and timestamps ###Code from pyspark.sql.functions import to_date, to_timestamp, lit df = spark.createDataFrame([('2019-12-25 13:30:00', )], ['Christmas']) df.show(1) df.select(to_date(col('Christmas'), 'yyyy-MM-dd HH:mm:ss'), 
to_timestamp(col('Christmas'), 'yyyy-MM-dd HH:mm:ss')).show(1) df = spark.createDataFrame([('25/Dec/2019 13:30:00', )], ['Christmas']) df.show(1) df.select(to_date(col('Christmas'), 'dd/MMM/yyyy HH:mm:ss'), to_timestamp(col('Christmas'), 'dd/MMM/yyyy HH:mm:ss')).show(1) df = spark.createDataFrame([('12/25/2019 01:30:00 PM', )], ['Christmas']) df.show(1, truncate=False) df.select(to_date(col('Christmas'), 'MM/dd/yyyy hh:mm:ss aa'), to_timestamp(col('Christmas'), 'MM/dd/yyyy hh:mm:ss aa')).show(1) ###Output +----------------------------------------------+---------------------------------------------------+ |to_date(`Christmas`, 'MM/dd/yyyy hh:mm:ss aa')|to_timestamp(`Christmas`, 'MM/dd/yyyy hh:mm:ss aa')| +----------------------------------------------+---------------------------------------------------+ | 2019-12-25| 2019-12-25 13:30:00| +----------------------------------------------+---------------------------------------------------+ ###Markdown Join ###Code !wget -O police-station.csv https://data.cityofchicago.org/api/views/z8bn-74gv/rows.csv?accessType=DOWNLOAD ps = spark.read.csv('police-station.csv', header=True) ps.show(5) rc.cache() rc.count() ps.select('DISTRICT').distinct().show(30) rc.select('District').distinct().show(30) from pyspark.sql.functions import lpad lpad? ps.select(lpad(col('DISTRICT'), 3, '0')).show(10) ps = ps.withColumn('Format_district', lpad(col('DISTRICT'), 3, '0')) ps.show(5) rc.join(ps, rc.District == ps.Format_district, 'left_outer').show() ps.columns rc.join(ps, rc.District == ps.Format_district, 'left_outer').drop( 'ADDRESS', 'CITY', 'STATE', 'ZIP', 'WEBSITE', 'PHONE', 'FAX', 'TTY', 'X COORDINATE', 'Y COORDINATE', 'LATITUDE', 'LONGITUDE', 'LOCATION').show() ###Output +--------+-----------+-------------------+--------------------+------------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+----+--------------------+--------+--------------+---------------+ | ID|Case Number| Date| Block| Primary Type| Description|Location Description|Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|Year| Updated On|DISTRICT| DISTRICT NAME|Format_district| +--------+-----------+-------------------+--------------------+------------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+----+--------------------+--------+--------------+---------------+ |10224738| HY411648|2015-09-05 13:30:00| 043XX S WOOD ST| BATTERY|DOMESTIC BATTERY ...| RESIDENCE| false| true|0924| 009| 12| 61| 08B|2015|02/10/2018 03:50:...| 9| Deering| 009| |10224739| HY411615|2015-09-04 11:30:00| 008XX N CENTRAL AVE| THEFT| POCKET-PICKING| CTA BUS| false| false|1511| 015| 29| 25| 06|2015|02/10/2018 03:50:...| 15| Austin| 015| |11646166| JC213529|2018-09-01 00:01:00|082XX S INGLESIDE...| THEFT| OVER $500| RESIDENCE| false| true|0631| 006| 8| 44| 06|2018|04/06/2019 04:04:...| 6| Gresham| 006| |10224740| HY411595|2015-09-05 12:45:00| 035XX W BARRY AVE| NARCOTICS|POSS: HEROIN(BRN/...| SIDEWALK| true| false|1412| 014| 35| 21| 18|2015|02/10/2018 03:50:...| 14| Shakespeare| 014| |10224741| HY411610|2015-09-05 13:00:00| 0000X N LARAMIE AVE| ASSAULT| SIMPLE| APARTMENT| false| true|1522| 015| 28| 25| 08A|2015|02/10/2018 03:50:...| 15| Austin| 015| |10224742| HY411435|2015-09-05 10:55:00| 082XX S LOOMIS BLVD| BURGLARY| FORCIBLE ENTRY| RESIDENCE| false| false|0614| 006| 21| 71| 05|2015|02/10/2018 03:50:...| 6| Gresham| 006| |10224743| HY411629|2015-09-04 18:00:00|021XX W CHURCHILL ST| BURGLARY| 
UNLAWFUL ENTRY| RESIDENCE-GARAGE| false| false|1434| 014| 32| 24| 05|2015|02/10/2018 03:50:...| 14| Shakespeare| 014| |10224744| HY411605|2015-09-05 13:00:00| 025XX W CERMAK RD| THEFT| RETAIL THEFT| GROCERY FOOD STORE| true| false|1034| 010| 25| 31| 06|2015|09/17/2015 11:37:...| 10| Ogden| 010| |10224745| HY411654|2015-09-05 11:30:00|031XX W WASHINGTO...| ROBBERY|STRONGARM - NO WE...| STREET| false| true|1222| 012| 27| 27| 03|2015|02/10/2018 03:50:...| 12| Near West| 012| |11645836| JC212333|2016-05-01 00:25:00| 055XX S ROCKWELL ST|DECEPTIVE PRACTICE|FINANCIAL IDENTIT...| null| false| false|0824| 008| 15| 63| 11|2016|04/06/2019 04:04:...| 8| Chicago Lawn| 008| |10224746| HY411662|2015-09-05 14:00:00| 071XX S PULASKI RD| THEFT| $500 AND UNDER|PARKING LOT/GARAG...| false| false|0833| 008| 13| 65| 06|2015|02/10/2018 03:50:...| 8| Chicago Lawn| 008| |10224749| HY411626|2015-09-05 11:00:00|052XX N MILWAUKEE...| BATTERY| SIMPLE| SMALL RETAIL STORE| false| false|1623| 016| 45| 11| 08B|2015|02/10/2018 03:50:...| 16|Jefferson Park| 016| |10224750| HY411632|2015-09-05 03:00:00| 0000X W 103RD ST| OTHER OFFENSE| TELEPHONE THREAT| APARTMENT| false| true|0512| 005| 34| 49| 26|2015|02/10/2018 03:50:...| 5| Calumet| 005| |10224751| HY411566|2015-09-05 12:50:00| 013XX E 47TH ST| BATTERY|DOMESTIC BATTERY ...| STREET| false| true|0222| 002| 4| 39| 08B|2015|02/10/2018 03:50:...| 2| Wentworth| 002| |10224752| HY411601|2015-09-03 13:00:00| 020XX W SCHILLER ST| THEFT| OVER $500| STREET| false| false|1424| 014| 1| 24| 06|2015|02/10/2018 03:50:...| 14| Shakespeare| 014| |10224753| HY411489|2015-09-05 11:45:00| 080XX S JUSTINE ST| BATTERY|AGGRAVATED DOMEST...| APARTMENT| false| false|0612| 006| 21| 71| 04B|2015|02/10/2018 03:50:...| 6| Gresham| 006| |10224754| HY411656|2015-09-05 13:30:00|007XX N LEAMINGTO...| CRIMINAL DAMAGE| TO VEHICLE| STREET| false| false|1531| 015| 28| 25| 14|2015|02/10/2018 03:50:...| 15| Austin| 015| |10224756| HY410094|2015-07-08 00:00:00|103XX S TORRENCE AVE| BURGLARY| UNLAWFUL ENTRY| OTHER| false| false|0434| 004| 10| 51| 05|2015|02/10/2018 03:50:...| 4| South Chicago| 004| |10224757| HY411388|2015-09-05 09:55:00| 088XX S PAULINA ST| BURGLARY| FORCIBLE ENTRY| RESIDENCE| true| false|2221| 022| 21| 71| 05|2015|02/10/2018 03:50:...| 22| Morgan Park| 022| |10224758| HY411568|2015-09-05 12:35:00| 059XX W GRACE ST| BATTERY|DOMESTIC BATTERY ...| STREET| false| true|1633| 016| 38| 15| 08B|2015|02/10/2018 03:50:...| 16|Jefferson Park| 016| +--------+-----------+-------------------+--------------------+------------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+----+--------------------+--------+--------------+---------------+ only showing top 20 rows ###Markdown what is the most frequently reported non-criminal activity ###Code rc.select('Primary Type').distinct().orderBy('Primary Type').show(35, truncate=False) nc = rc.filter((col('Primary Type') == 'NON - CRIMINAL') | (col('Primary Type') == 'NON-CRIMINAL') | (col('Primary Type') == 'NON-CRIMINAL (SUBJECT SPECIFIED)')) nc.show(50) nc.groupBy('Description').count().orderBy('count', ascending=False).show(5, truncate=False) ###Output +-------------------------------------------+-----+ |Description |count| +-------------------------------------------+-----+ |LOST PASSPORT |107 | |FOID - REVOCATION |75 | |NOTIFICATION OF CIVIL NO CONTACT ORDER |9 | |NOTIFICATION OF STALKING - NO CONTACT ORDER|8 | |FOUND PASSPORT |4 | +-------------------------------------------+-----+ only showing top 5 rows 
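###Markdown The non-criminal filter above chains three OR conditions. An equivalent, slightly more compact version (a sketch that uses only the same three labels) relies on `Column.isin`: ###Code
# Same filter expressed with isin instead of chained | conditions
nc_isin = rc.filter(col('Primary Type').isin('NON - CRIMINAL', 'NON-CRIMINAL', 'NON-CRIMINAL (SUBJECT SPECIFIED)'))
nc_isin.groupBy('Description').count().orderBy('count', ascending=False).show(5, truncate=False)
###Output _____no_output_____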
###Markdown which day of the week has the most reported crimes ###Code from pyspark.sql.functions import dayofweek from pyspark.sql.functions import date_format rc.select('Date', dayofweek('Date'), date_format('Date', 'E')).show(5) rc.groupBy(date_format('Date', 'E')).count().orderBy('count', ascending=False).show(5) rc.groupBy(date_format('Date', 'E')).count().collect() dow = [x[0] for x in rc.groupBy(date_format('Date', 'E')).count().collect()] cnt = [x[1] for x in rc.groupBy(date_format('Date', 'E')).count().collect()] import pandas as pd import matplotlib.pyplot as plt cp = pd.DataFrame({'Day_of_week':dow, 'Count': cnt}) cp.head() cp.sort_values('Count',ascending=False).plot(kind='bar', color='olive', x='Day_of_week', y='Count') plt.xlabel('Day of the week') plt.ylabel('No. of reported crimes') plt.title('No. of reported crimes per day of the week from 2001 to present'); ###Output _____no_output_____ ###Markdown RDDs ###Code psrdd = sc.textFile('police-station.csv') ps_header = psrdd.first() #header ps_rest = psrdd.filter(lambda line:line!= ps_header) ps_rest.first() ###Output _____no_output_____ ###Markdown how many police stations are there ###Code ps_rest.map(lambda line: line.split(',')).count() ###Output _____no_output_____ ###Markdown display district ID, name, address, zip for police station with district id 7 ###Code (ps_rest.filter(lambda line: line.split(',')[0] == '7').map(lambda line: (line.split(',')[0], line.split(',')[1], line.split(',')[2], line.split(',')[5])).collect()) ###Output _____no_output_____ ###Markdown display district ID, name, address, zip for police station with district id 10 and 11 ###Code (ps_rest.filter(lambda line:line.split(',')[0] in ['10', '11']).map(lambda line: (line.split(',')[0], line.split(',')[1], line.split(',')[2], line.split(',')[5])).collect()) ###Output _____no_output_____
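###Markdown The RDD queries above split each line several times. A sketch of the same district lookups that parses every record once (same fields and district ids, nothing new assumed): ###Code
# Split each line a single time, then filter and project on the parsed fields.
# Note: a plain split(',') assumes no quoted commas inside any field.
ps_fields = ps_rest.map(lambda line: line.split(','))
(ps_fields.filter(lambda f: f[0] in ['10', '11'])
          .map(lambda f: (f[0], f[1], f[2], f[5]))
          .collect())
###Output _____no_output_____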
queue_imbalance/svm/svm_rbf_std_stocks.ipynb
###Markdown SVM with rbf kernelThe goal of this notebook is to find the best parameters for polynomial kernel. We also want to check if the parameters depend on stock.We will use [sklearn.svm](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.htmlsklearn.svm.SVC) library to perform calculations. We want to pick the best parameters for **SVC**:* C (default 1.0)* gamma (default 1/number_of_features, so 1 in our case)Kernel function looks like this: $\exp(-\gamma \|x-x'\|^2)$. $\gamma$ is specified by keyword **gamma**, must be greater than 0. ###Code %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as md from statsmodels.distributions.empirical_distribution import ECDF import numpy as np import seaborn as sns from sklearn.metrics import roc_auc_score from sklearn.metrics import roc_curve from sklearn.metrics import classification_report from sklearn import svm import warnings from lob_data_utils import lob sns.set_style('whitegrid') warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown DataWe use data from 5 stocks (from dates 2013-09-01 - 2013-11-16) for which logistic regression yielded the best results.We selected 3 subsets for each stock:* training set (60% of data)* test set (20% of data)* cross-validation set (20% of data) ###Code stocks = ['9061', '9062', '9063', '9064', '9065'] dfs = {} dfs_cv = {} dfs_test = {} for s in stocks: df, df_cv, df_test = lob.load_data(s, cv=True) dfs[s] = df dfs_cv[s] = df_cv dfs_test[s] = df_test dfs[stocks[0]].head(5) def svm_classification(d, kernel, gamma='auto', C=1.0): clf = svm.SVC(kernel=kernel, gamma=gamma, C=C) X = d['queue_imbalance'].values.reshape(-1, 1) y = d['mid_price_indicator'].values.reshape(-1, 1) clf.fit(X, y) return clf ###Output _____no_output_____ ###Markdown MethodologyWe will use at first naive approach to grasp how each of the parameter influences the ROC area score and what values make sense, when the other parameters are set to defaults.After that we will try to get the best combination of the parameters. C parameterThe C parameter has influence over margin picked by SVM:* for large values of **C** SVM will choose a smaller-margin hyperplane, which means that more data points will be classified correctly* for small values of **C** SVM will choose a bigger-margin hyperplane, so there may be more misclassificationsAt first we tried parameters: [0.0001, 0.001, 0.01, 0.1, 1, 10, 1000], but after first calculations it seems that it wasn't enough, so a few more values were introduced or removed. ###Code cs = [0.01, 0.1, 0.75, 1, 1.5, 5, 10, 15, 100, 110, 150] df_css = {} ax = plt.subplot() ax.set_xscale("log", basex=10) for s in stocks: df_cs = pd.DataFrame(index=cs) df_cs['roc'] = np.zeros(len(df_cs)) for c in cs: reg_svm = svm_classification(dfs[s], 'rbf', C=c) pred_svm_out_of_sample = reg_svm.predict(dfs_cv[s]['queue_imbalance'].values.reshape(-1, 1)) logit_roc_auc = roc_auc_score(dfs_cv[s]['mid_price_indicator'], pred_svm_out_of_sample) df_cs.loc[c] = logit_roc_auc plt.plot(df_cs, linestyle='--', label=s, marker='x', alpha=0.5) df_css[s] = df_cs plt.legend() ###Output _____no_output_____ ###Markdown Best values of C parameterThere is no rule, how to set this parameter - for stock **11618** the value is very large, for the rest it is rather small. 
###Code for s in stocks: idx = df_css[s]['roc'].idxmax() print('For {} the best is {}'.format(s, idx)) ###Output For 9061 the best is 0.01 For 9062 the best is 110.0 For 9063 the best is 150.0 For 9064 the best is 1.0 For 9065 the best is 15.0 ###Markdown Influence of C parameterThe score difference between SVM with the worst choice of parameter **C** and the best choice one is shown on the output below. For scoring method we used *roc_area*. For two stocks **10795** and **12098** it can affect the prediction by 0.1, for the rest the difference is less. ###Code for s in stocks: err_max = df_css[s]['roc'].max() err_min = df_css[s]['roc'].min() print('For {} the diff between best and worst {}'.format(s, err_max - err_min)) ###Output For 9061 the diff between best and worst 0.002685885106325836 For 9062 the diff between best and worst 0.002481934220301296 For 9063 the diff between best and worst 0.0063512329122638045 For 9064 the diff between best and worst 0.004129902007859343 For 9065 the diff between best and worst 0.0045935077218552944 ###Markdown GammaGamma is a parameter which has influence over decision region - the bigger it is, the bigger influence every single row of data has. When gamma is low the decision region is very broad. When gamma is high it can even create islands of decision-boundaries around data points. ###Code gammas = [0.0001, 0.001, 0.005, 0.05, 0.1, 0.25, 1, 10, 15] df_gammas = {} ax = plt.subplot() ax.set_xscale("log", basex=10) for s in stocks: df_gamma = pd.DataFrame(index=gammas) df_gamma['roc'] = np.zeros(len(df_gamma)) for g in gammas: reg_svm = svm_classification(dfs[s], 'rbf', gamma=g) pred_svm_out_of_sample = reg_svm.predict(dfs_cv[s]['queue_imbalance'].values.reshape(-1, 1)) logit_roc_auc = roc_auc_score(dfs_cv[s]['mid_price_indicator'], pred_svm_out_of_sample) df_gamma.loc[g] = logit_roc_auc plt.plot(df_gamma, linestyle='--', label=s, marker='x', alpha=0.7) df_gammas[s] = df_gamma plt.legend() ###Output _____no_output_____ ###Markdown Best values of gammaThere is no rule, how to set this parameter - for stock **2051** the value is very large, for the rest it is rather small. ###Code for s in stocks: idx = df_gammas[s]['roc'].idxmax() print('For {} the best is {}'.format(s, idx)) ###Output For 9061 the best is 0.001 For 9062 the best is 1.0 For 9063 the best is 10.0 For 9064 the best is 15.0 For 9065 the best is 0.1 ###Markdown Influence of gammaThe score difference between SVM with the worst choice of **gamma** and the best choice one is shown on the output below. For scoring method we used *roc_area*. For all stocks the error difference is small - less than 0.04. 
###Code for s in stocks: err_max = df_gammas[s]['roc'].max() err_min = df_gammas[s]['roc'].min() print('For {} the diff between best and worst {}'.format(s, err_max - err_min)) ###Output For 9061 the diff between best and worst 0.0509107630435659 For 9062 the diff between best and worst 0.022079928412237493 For 9063 the diff between best and worst 0.047251831251079124 For 9064 the diff between best and worst 0.05081285588253737 For 9065 the diff between best and worst 0.06321980778400627 ###Markdown ResultsWe compare results of the SVMs with the best choices of parameters against the logistic regression and SVM with defaults.We will use two approaches for choosing parameters:* naive - for each stock we will just pick the best values we found in the previous section* grid - we will caluclate roc_area error for every combination of parameters used in previous section (computionally heavy).We could also use GridSearchCV from sklearn library, but the issue with it is supplying the cross-validation set (it has to be continous in time). In the future we need to implement the method for that. Naive approachWe pick the best **C** parameter and the best **gamma** separately from the results of [section above](Methodology), which were obtained using cross-validation set.For two stocks **12098** and **11618** the roc_area scores are better, for the rest it's slightly worse for testing set. So this approach doesn't work so well. ###Code df_results = pd.DataFrame(index=stocks) df_results['logistic'] = np.zeros(len(stocks)) df_results['rbf-naive'] = np.zeros(len(stocks)) df_results['rbf-default'] = np.zeros(len(stocks)) plt.subplot(121) for s in stocks: reg_svm = svm_classification(dfs[s], 'rbf', C=df_css[s]['roc'].idxmax(), gamma=df_gammas[s]['roc'].idxmax()) prediction = reg_svm.predict(dfs_test[s]['queue_imbalance'].values.reshape(-1, 1)) logit_roc_auc = roc_auc_score(dfs_test[s]['mid_price_indicator'], prediction) df_results['rbf-naive'][s] = logit_roc_auc fpr, tpr, thresholds = roc_curve(dfs_test[s]['mid_price_indicator'].values, prediction) plt.plot(fpr, tpr, label='{} (area = {})'.format(s, logit_roc_auc)) plt.plot([0, 1], [0, 1],'r--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC for test set with the best C and gamma params') plt.legend(loc="lower right") colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k', 'w'] plt.subplot(122) for s in stocks: reg_svm = svm_classification(dfs[s], 'rbf') prediction = reg_svm.predict(dfs_test[s]['queue_imbalance'].values.reshape(-1, 1)) logit_roc_auc = roc_auc_score(dfs_test[s]['mid_price_indicator'], prediction) df_results['rbf-default'][s] = logit_roc_auc fpr, tpr, thresholds = roc_curve(dfs_test[s]['mid_price_indicator'].values, prediction) plt.plot(fpr, tpr, label='{} (area = {})'.format(s, logit_roc_auc)) reg_log = lob.logistic_regression(dfs[s], 0, len(dfs[s])) pred_log = reg_log.predict(dfs_test[s]['queue_imbalance'].values.reshape(-1, 1)) logit_roc_auc = roc_auc_score(dfs_test[s]['mid_price_indicator'], pred_log) df_results['logistic'][s] = logit_roc_auc fpr, tpr, thresholds = roc_curve(dfs_test[s]['mid_price_indicator'].values, pred_log) plt.plot(fpr, tpr, c=colors[stocks.index(s)], linestyle='--', label='{} (area = {})'.format(s + ' logisitc ', logit_roc_auc), alpha=0.5) plt.plot([0, 1], [0, 1],'r--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC for test set with defaults and logistic regression') 
plt.legend(loc="lower right") plt.subplots_adjust(left=0, wspace=0.1, top=1, right=2) df_results ###Output _____no_output_____ ###Markdown Grid approachWe iterate over all combinations of parameters **C** and **gamma**.This approach works usually better, but not for all cases. ###Code best_c = {} best_g = {} best_score = {} for s in stocks: print(s) best_score[s] = 0 best_c[s] = -1 best_g[s] = -1 for c in cs: for g in gammas: reg_svm = svm_classification(dfs[s], 'rbf', C=c, gamma=g) prediction = reg_svm.predict(dfs_cv[s]['queue_imbalance'].values.reshape(-1, 1)) logit_roc_auc = roc_auc_score(dfs_cv[s]['mid_price_indicator'], prediction) if logit_roc_auc > best_score[s]: best_c[s] = c best_g[s] = g best_score[s] = logit_roc_auc ###Output 9061 9062 9063 9064 9065 ###Markdown Best parameters for grid approach ###Code print('stock', '\t', 'C', '\t', 'gamma', '\t', 'best score') for s in stocks: print(s, '\t', best_c[s], '\t', best_g[s], '\t', best_score[s]) df_results['rbf-grid'] = np.zeros(len(stocks)) plt.subplot(121) for s in stocks: reg_svm = svm_classification(dfs[s], 'rbf', C=best_c[s], gamma=best_g[s]) prediction = reg_svm.predict(dfs_test[s]['queue_imbalance'].values.reshape(-1, 1)) logit_roc_auc = roc_auc_score(dfs_test[s]['mid_price_indicator'], prediction) df_results['rbf-grid'][s] = logit_roc_auc fpr, tpr, thresholds = roc_curve(dfs_test[s]['mid_price_indicator'].values, prediction) plt.plot(fpr, tpr, label='{} (area = {})'.format(s, logit_roc_auc)) plt.plot([0, 1], [0, 1],'r--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC for test set with the best C and gamma params') plt.legend(loc="lower right") colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k', 'w'] plt.subplot(122) for s in stocks: reg_svm = svm_classification(dfs[s], 'rbf') prediction = reg_svm.predict(dfs_test[s]['queue_imbalance'].values.reshape(-1, 1)) logit_roc_auc = roc_auc_score(dfs_test[s]['mid_price_indicator'], prediction) fpr, tpr, thresholds = roc_curve(dfs_test[s]['mid_price_indicator'].values, prediction) plt.plot(fpr, tpr, label='{} (area = {})'.format(s, logit_roc_auc)) plt.plot([0, 1], [0, 1],'r--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC for test set with defaults') plt.legend(loc="lower right") plt.subplots_adjust(left=0, wspace=0.1, top=1, right=2) plt.subplot(121) for s in stocks: reg_svm = svm_classification(dfs[s], 'rbf', C=best_c[s], gamma=best_g[s]) prediction = reg_svm.predict(dfs_test[s]['queue_imbalance'].values.reshape(-1, 1)) logit_roc_auc = roc_auc_score(dfs_test[s]['mid_price_indicator'], prediction) df_results['rbf-grid'][s] = logit_roc_auc fpr, tpr, thresholds = roc_curve(dfs_test[s]['mid_price_indicator'].values, prediction) plt.plot(fpr, tpr, label='{} (area = {})'.format(s, logit_roc_auc)) plt.plot([0, 1], [0, 1],'r--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC for test set with the best C and gamma params') plt.legend(loc="lower right") colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k', 'w'] plt.subplot(122) for s in stocks: reg_log = lob.logistic_regression(dfs[s], 0, len(dfs[s])) pred_log = reg_log.predict(dfs_test[s]['queue_imbalance'].values.reshape(-1, 1)) logit_roc_auc = roc_auc_score(dfs_test[s]['mid_price_indicator'], pred_log) fpr, tpr, thresholds = roc_curve(dfs_test[s]['mid_price_indicator'].values, pred_log) plt.plot(fpr, tpr, 
c=colors[stocks.index(s)], linestyle='--', label='{} (area = {})'.format(s + ' logisitc ', logit_roc_auc), alpha=0.5) plt.plot([0, 1], [0, 1],'r--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC for test set for logistic regression') plt.legend(loc="lower right") plt.subplots_adjust(left=0, wspace=0.1, top=1, right=2) df_results ###Output _____no_output_____
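###Markdown As noted above, GridSearchCV was not used because the cross-validation set has to stay continuous in time. A hedged sketch of one way around that with `PredefinedSplit` (the combined frame and the -1/0 fold markers below are illustrative assumptions, not part of the original notebook): ###Code
import numpy as np
from sklearn.model_selection import GridSearchCV, PredefinedSplit

s = stocks[0]  # illustrate on a single stock
combined = pd.concat([dfs[s], dfs_cv[s]])  # keep time ordering: training rows first, then cv rows
X = combined['queue_imbalance'].values.reshape(-1, 1)
y = combined['mid_price_indicator'].values

# -1 marks rows that always stay in the training fold, 0 marks the single validation fold
test_fold = np.concatenate([np.full(len(dfs[s]), -1), np.zeros(len(dfs_cv[s]), dtype=int)])

grid = GridSearchCV(svm.SVC(kernel='rbf'),
                    param_grid={'C': cs, 'gamma': gammas},
                    scoring='roc_auc',
                    cv=PredefinedSplit(test_fold))
grid.fit(X, y)
grid.best_params_
###Output _____no_output_____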
StageB/stageB.ipynb
###Markdown Solutions to Question 13 through Question 20 ###Code #Initiate the required dataframe p = tfile.drop(['date', 'lights'], axis = 1) p.head() from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() normalised_p = pds.DataFrame(scaler.fit_transform(p), columns=p.columns) features_p = normalised_p.drop(columns=['Appliances']) target = normalised_p[ 'Appliances'] #Split the data into training sets and testing sets from sklearn.model_selection import train_test_split p_train, p_test, y_train, y_test = train_test_split(features_p, target, test_size=0.3 ,random_state= 42) #initiate model, fit data and make predictions p_model = LinearRegression() p_model.fit(p_train, y_train) p_predicted = p_model.predict(p_test) # Question 13 Solution from sklearn.metrics import mean_absolute_error p_mae = mean_absolute_error(y_test, p_predicted) round(p_mae, 2) # Question 14 solution k = y_test - p_predicted p_rss = round(np.sum(np.square(k)),2) p_rss #Question 15 solution from sklearn.metrics import mean_squared_error mse = mean_squared_error(y_test, p_predicted) p_rmse = round(np.sqrt(mse),3) p_rmse # Question 16 solution coef_det = round(p_model.score(p_test, y_test),2) coef_det # Question 17 solution p_weights = pds.Series(p_model.coef_, p_test.columns).sort_values() p_weights = pds.DataFrame(p_weights).reset_index() p_weights.columns = ["Features", "p_Linear_model_weights"] ans = [(p_weights["Features"][0], p_weights["Features"][len(p_weights)-1])] ans # Question 18 solution from sklearn.linear_model import Ridge p_ridge_model = Ridge(alpha=0.4) p_ridge_model.fit(p_train, y_train) p_ridge_predicted = p_ridge_model.predict(p_test) ridge_mse = mean_squared_error(y_test, p_ridge_predicted) p_ridge_rmse = round(np.sqrt(ridge_mse),3) p_ridge_rmse == p_rmse # Question 19 solution from sklearn.linear_model import Lasso p_lasso_model = Lasso(alpha=0.001) p_lasso_model.fit(p_train, y_train) p_lasso_weights = pds.Series(p_lasso_model.coef_, p_test.columns).sort_values() p_lasso_weights = pds.DataFrame(p_lasso_weights).reset_index() p_lasso_weights.columns = ["Lasso_Features", "p_Lasso_model_weights"] (p_lasso_weights.p_Lasso_model_weights != 0).sum() # Question 20 solution p_lasso_predicted = p_lasso_model.predict(p_test) lasso_mse = mean_squared_error(y_test, p_lasso_predicted) p_lasso_rmse = round(np.sqrt(lasso_mse),3) p_lasso_rmse ###Output _____no_output_____
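###Markdown A quick cross-check of Questions 14-16 (an added sanity check, not part of the graded answers): sklearn's `r2_score` should agree with `p_model.score`, and the unrounded MSE times the number of test rows should reproduce the RSS. ###Code
from sklearn.metrics import r2_score

print(round(r2_score(y_test, p_predicted), 2))  # should match coef_det from Question 16
print(round(mse * len(y_test), 2))              # should match p_rss from Question 14
###Output _____no_output_____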
ipython/kinetics_library_to_training.ipynb
###Markdown Convert Kinetics Library to Training Reactions ScriptSpecify the kinetics library name below and run the script. It automatically overwrites the training reactions files it needs to. Then you should commit those files.This script only trains safely. In other words, if a single match from an RMG family is found, a training reaction is created. Sometimes, there are no matches from RMG reaction families, or multiple matches. This indicates an error that requires manual fixing, and a printout is given in the script. ###Code # Set libraries to load reactions from; set to None to load all libraries libraries = ['vinylCPD_H'] # Set families to add training reactions to; either 'all' or a list, e.g. ['R_Addition_MultipleBond'] families = ['Intra_R_Add_Endocyclic'] # Specify whether to plot kinetics comparisons compareKinetics = True # Specify whether to print library reactions which don't fit in the specified families # This can result in a lot of unnecessary output if only using a few families showAll = False # Specify whether to prioritize aromatic resonance structures to reduce cases of multiple matches filterAromatic = True # Specify whether to use verbose comments when averaging tree verboseComments = False from rmgpy import settings from rmgpy.data.rmg import RMGDatabase from kinetics_library_to_training_tools import * ###Output _____no_output_____ ###Markdown Step 1: Load RMG-database with specified libraries and families ###Code database = RMGDatabase() database.load( path = settings['database.directory'], thermoLibraries = ['primaryThermoLibrary'], # Can add others if necessary kineticsFamilies = families, reactionLibraries = libraries, kineticsDepositories = ['training'], ) # If we want accurate kinetics comparison, add existing training reactions and fill tree by averaging if compareKinetics: for family in database.kinetics.families.values(): family.addKineticsRulesFromTrainingSet(thermoDatabase=database.thermo) family.fillKineticsRulesByAveragingUp(verbose=verboseComments) ###Output _____no_output_____ ###Markdown Step 2a: Generate library reactions from families to get proper labels ###Code master_dict, multiple_dict = process_reactions(database, libraries, families, compareKinetics=compareKinetics, showAll=showAll, filterAromatic=filterAromatic) ###Output _____no_output_____ ###Markdown Step 2b (optional): Review and select reactions to be added ###Code review_reactions(master_dict, prompt=True) ###Output _____no_output_____ ###Markdown Step 2c (optional): Manual processing for reactions with multiple matches ###Code manual_selection(master_dict, multiple_dict, database) ###Output _____no_output_____ ###Markdown Step 2d: Final review of reactions to be added ###Code review_reactions(master_dict, prompt=False) ###Output _____no_output_____ ###Markdown Step 3: Write the new training reactions to the database ###Code for library_name, reaction_dict in master_dict.iteritems(): library = database.kinetics.libraries[library_name] for family_name, reaction_list in reaction_dict.iteritems(): print('Adding training reactions from {0} to {1}...'.format(library_name, family_name)) family = database.kinetics.families[family_name] try: depository = family.getTrainingDepository() except: raise Exception('Unable to find training depository in {0}. Check that one exists.'.format(family_name)) print('Training depository previously had {} rxns. 
Now adding {} new rxn(s).'.format(len(depository.entries), len(reaction_list))) for reaction in reaction_list: # Get the original entry to retrieve metadata orig_entry = library.entries[reaction.index] shortDesc = orig_entry.shortDesc longDesc = 'Training reaction from kinetics library: {0}\nOriginal entry: {1}'.format(library_name, orig_entry.label) if orig_entry.longDesc: longDesc += '\n' + orig_entry.longDesc family.saveTrainingReactions( [reaction], reference=orig_entry.reference, referenceType=orig_entry.referenceType, shortDesc=shortDesc, longDesc=longDesc, ) ###Output _____no_output_____ ###Markdown Convert Kinetics Library to Training Reactions ScriptSpecify the kinetics library name below and run the script. It automatically overwrites the training reactions files it needs to. Then you should commit those files.This script only trains safely. In other words, if a single match from an RMG family is found, a training reaction is created. Sometimes, there are no matches from RMG reaction families, or multiple matches. This indicates an error that requires manual fixing, and a printout is given in the script. ###Code # Set libraries to load reactions from; set to None to load all libraries libraries = ['vinylCPD_H'] # Set families to add training reactions to; either 'all' or a list, e.g. ['R_Addition_MultipleBond'] families = ['Intra_R_Add_Endocyclic'] # Specify whether to plot kinetics comparisons compare_kinetics = True # Specify whether to print library reactions which don't fit in the specified families # This can result in a lot of unnecessary output if only using a few families show_all = False # Specify whether to prioritize aromatic resonance structures to reduce cases of multiple matches filter_aromatic = True # Specify whether to use verbose comments when averaging tree verbose_comments = False from rmgpy import settings from rmgpy.data.rmg import RMGDatabase from kinetics_library_to_training_tools import * ###Output _____no_output_____ ###Markdown Step 1: Load RMG-database with specified libraries and families ###Code database = RMGDatabase() database.load( path = settings['database.directory'], thermo_libraries = ['primaryThermoLibrary'], # Can add others if necessary kinetics_families = families, reaction_libraries = libraries, kinetics_depositories = ['training'], ) # If we want accurate kinetics comparison, add existing training reactions and fill tree by averaging if compare_kinetics: for family in database.kinetics.families.values(): family.add_rules_from_training(thermo_database=database.thermo) family.fill_rules_by_averaging_up(verbose=verbose_comments) ###Output _____no_output_____ ###Markdown Step 2a: Generate library reactions from families to get proper labels ###Code master_dict, multiple_dict = process_reactions(database, libraries, families, compare_kinetics=compare_kinetics, show_all=show_all, filter_aromatic=filter_aromatic) ###Output _____no_output_____ ###Markdown Step 2b (optional): Review and select reactions to be added ###Code review_reactions(master_dict, prompt=True) ###Output _____no_output_____ ###Markdown Step 2c (optional): Manual processing for reactions with multiple matches ###Code manual_selection(master_dict, multiple_dict, database) ###Output _____no_output_____ ###Markdown Step 2d: Final review of reactions to be added ###Code review_reactions(master_dict, prompt=False) ###Output _____no_output_____ ###Markdown Step 3: Write the new training reactions to the database ###Code for library_name, reaction_dict in master_dict.items(): library 
= database.kinetics.libraries[library_name] for family_name, reaction_list in reaction_dict.items(): print('Adding training reactions from {0} to {1}...'.format(library_name, family_name)) family = database.kinetics.families[family_name] try: depository = family.get_training_depository() except: raise Exception('Unable to find training depository in {0}. Check that one exists.'.format(family_name)) print('Training depository previously had {} rxns. Now adding {} new rxn(s).'.format(len(depository.entries), len(reaction_list))) ref_list = [] type_list = [] short_list = [] long_list = [] for reaction in reaction_list: # Get the original entry to retrieve metadata orig_entry = library.entries[reaction.index] short_desc = orig_entry.short_desc long_desc = 'Training reaction from kinetics library: {0}\nOriginal entry: {1}'.format(library_name, orig_entry.label) if orig_entry.long_desc: long_desc += '\n' + orig_entry.long_desc ref_list.append(orig_entry.reference) type_list.append(orig_entry.reference_type) short_list.append(short_desc) long_list.append(long_desc) family.save_training_reactions( reaction_list, reference=ref_list, reference_type=type_list, short_desc=short_list, long_desc=long_list, ) ###Output _____no_output_____ ###Markdown Convert Kinetics Library to Training Reactions ScriptSpecify the kinetics library name below and run the script. It automatically overwrites the training reactions files it needs to. Then you should commit those files.This script only trains safely. In other words, if a single match from an RMG family is found, a training reaction is created. Sometimes, there are no matches from RMG reaction families, or multiple matches. This indicates an error that requires manual fixing, and a printout is given in the script. ###Code # Set libraries to load reactions from; set to None to load all libraries libraries = ['vinylCPD_H'] # Set families to add training reactions to; either 'default' or a list, e.g. 
['R_Addition_MultipleBond'] families = ['Intra_R_Add_Endocyclic'] # Specify whether to plot kinetics comparisons compare_kinetics = True # Specify whether to print library reactions which don't fit in the specified families # This can result in a lot of unnecessary output if only using a few families show_all = False # Specify whether to prioritize aromatic resonance structures to reduce cases of multiple matches filter_aromatic = True # Specify whether to use verbose comments when averaging tree verbose_comments = False from rmgpy import settings from rmgpy.data.rmg import RMGDatabase from kinetics_library_to_training_tools import * ###Output _____no_output_____ ###Markdown Step 1: Load RMG-database with specified libraries and families ###Code database = RMGDatabase() database.load( path = settings['database.directory'], thermo_libraries = ['primaryThermoLibrary'], # Can add others if necessary kinetics_families = families, reaction_libraries = libraries, kinetics_depositories = ['training'], ) # If we want accurate kinetics comparison, add existing training reactions and fill tree by averaging if compare_kinetics: for family in database.kinetics.families.values(): if not family.auto_generated: family.add_rules_from_training(thermo_database=database.thermo) family.fill_rules_by_averaging_up(verbose=verbose_comments) ###Output _____no_output_____ ###Markdown Step 2a: Generate library reactions from families to get proper labels ###Code master_dict, multiple_dict = process_reactions(database, libraries, list(database.kinetics.families.keys()), compare_kinetics=compare_kinetics, show_all=show_all, filter_aromatic=filter_aromatic) ###Output _____no_output_____ ###Markdown Step 2b (optional): Review and select reactions to be added ###Code review_reactions(master_dict, prompt=True) ###Output _____no_output_____ ###Markdown Step 2c (optional): Manual processing for reactions with multiple matches ###Code manual_selection(master_dict, multiple_dict, database) ###Output _____no_output_____ ###Markdown Step 2d: Final review of reactions to be added ###Code review_reactions(master_dict, prompt=False) ###Output _____no_output_____ ###Markdown Step 3: Write the new training reactions to the database ###Code for library_name, reaction_dict in master_dict.items(): library = database.kinetics.libraries[library_name] for family_name, reaction_list in reaction_dict.items(): print('Adding training reactions from {0} to {1}...'.format(library_name, family_name)) family = database.kinetics.families[family_name] try: depository = family.get_training_depository() except: raise Exception('Unable to find training depository in {0}. Check that one exists.'.format(family_name)) print('Training depository previously had {} rxns. 
Now adding {} new rxn(s).'.format(len(depository.entries), len(reaction_list))) ref_list = [] type_list = [] short_list = [] long_list = [] for reaction in reaction_list: # Get the original entry to retrieve metadata orig_entry = library.entries[reaction.index] short_desc = orig_entry.short_desc long_desc = 'Training reaction from kinetics library: {0}\nOriginal entry: {1}'.format(library_name, orig_entry.label) if orig_entry.long_desc: long_desc += '\n' + orig_entry.long_desc ref_list.append(orig_entry.reference) type_list.append(orig_entry.reference_type) short_list.append(short_desc) long_list.append(long_desc) family.save_training_reactions( reaction_list, reference=ref_list, reference_type=type_list, short_desc=short_list, long_desc=long_list, ) ###Output _____no_output_____ ###Markdown Convert Kinetics Library to Training Reactions ScriptSpecify the kinetics library name below and run the script. It automatically overwrites the training reactions files it needs to. Then you should commit those files.This script only trains safely. In other words, if a single match from an RMG family is found, a training reaction is created. Sometimes, there are no matches from RMG reaction families, or multiple matches. This indicates an error that requires manual fixing, and a printout is given in the script. ###Code # Set libraries to load reactions from; set to None to load all libraries libraries = ['vinylCPD_H'] # Set families to add training reactions to; either 'all' or a list, e.g. ['R_Addition_MultipleBond'] families = ['Intra_R_Add_Endocyclic'] # Specify whether to plot kinetics comparisons compareKinetics = True # Specify whether to print library reactions which don't fit in the specified families # This can result in a lot of unnecessary output if only using a few families showAll = False # Specify whether to prioritize aromatic resonance structures to reduce cases of multiple matches filterAromatic = True # Specify whether to use verbose comments when averaging tree verboseComments = False from rmgpy import settings from rmgpy.data.rmg import RMGDatabase from kinetics_library_to_training_tools import * ###Output _____no_output_____ ###Markdown Step 1: Load RMG-database with specified libraries and families ###Code database = RMGDatabase() database.load( path = settings['database.directory'], thermoLibraries = ['primaryThermoLibrary'], # Can add others if necessary kineticsFamilies = families, reactionLibraries = libraries, kineticsDepositories = ['training'], ) # If we want accurate kinetics comparison, add existing training reactions and fill tree by averaging if compareKinetics: for family in database.kinetics.families.values(): family.addKineticsRulesFromTrainingSet(thermoDatabase=database.thermo) family.fillKineticsRulesByAveragingUp(verbose=verboseComments) ###Output _____no_output_____ ###Markdown Step 2a: Generate library reactions from families to get proper labels ###Code master_dict, multiple_dict = process_reactions(database, libraries, families, compareKinetics=compareKinetics, showAll=showAll, filterAromatic=filterAromatic) ###Output _____no_output_____ ###Markdown Step 2b (optional): Review and select reactions to be added ###Code review_reactions(master_dict, prompt=True) ###Output _____no_output_____ ###Markdown Step 2c (optional): Manual processing for reactions with multiple matches ###Code manual_selection(master_dict, multiple_dict, database) ###Output _____no_output_____ ###Markdown Step 2d: Final review of reactions to be added ###Code review_reactions(master_dict, 
prompt=False) ###Output _____no_output_____ ###Markdown Step 3: Write the new training reactions to the database ###Code for library_name, reaction_dict in master_dict.iteritems(): library = database.kinetics.libraries[library_name] for family_name, reaction_list in reaction_dict.iteritems(): print('Adding training reactions from {0} to {1}...'.format(library_name, family_name)) family = database.kinetics.families[family_name] try: depository = family.getTrainingDepository() except: raise Exception('Unable to find training depository in {0}. Check that one exists.'.format(family_name)) print('Training depository previously had {} rxns. Now adding {} new rxn(s).'.format(len(depository.entries), len(reaction_list))) ref_list = [] type_list = [] short_list = [] long_list = [] for reaction in reaction_list: # Get the original entry to retrieve metadata orig_entry = library.entries[reaction.index] shortDesc = orig_entry.shortDesc longDesc = 'Training reaction from kinetics library: {0}\nOriginal entry: {1}'.format(library_name, orig_entry.label) if orig_entry.longDesc: longDesc += '\n' + orig_entry.longDesc ref_list.append(orig_entry.reference) type_list.append(orig_entry.referenceType) short_list.append(shortDesc) long_list.append(longDesc) family.saveTrainingReactions( reaction_list, reference=ref_list, referenceType=type_list, shortDesc=short_list, longDesc=long_list, ) ###Output _____no_output_____
Pandas and Numpy/Numpy_Reading.ipynb
###Markdown Speed difference between reading numerical data from plain CSV vs. using _.npy_ file format Dr. Tirthajyoti Sarkar, Fremont, CA 94536 ###Code import numpy as np import time ###Output _____no_output_____ ###Markdown Number of samples ###Code n_samples=1000000 ###Output _____no_output_____ ###Markdown Reading from a CSV ###Code with open('fdata.txt', 'w') as fdata: for _ in range(n_samples): fdata.write(str(10*np.random.random())+',') t1=time.time() array_direct = np.fromfile('fdata.txt',dtype=float, count=-1,sep=',').reshape(1000,1000) t2=time.time() print(array_direct) print('\nShape: ',array_direct.shape) print(f"Time took to read: {t2-t1} seconds.") t1=time.time() with open('fdata.txt','r') as fdata: datastr=fdata.read() lst = datastr.split(',') lst.pop() array_lst=np.array(lst,dtype=float).reshape(1000,1000) t2=time.time() print(array_lst) print('\nShape: ',array_lst.shape) print(f"Time took to read: {t2-t1} seconds.") ###Output [[0.19518972 6.39889099 8.86179077 ... 1.78274828 4.14988379 7.45517339] [0.28846542 6.64861961 0.24638384 ... 1.30792505 1.0367481 4.74891783] [8.40814537 5.05111512 8.63130373 ... 6.54190772 6.38385259 2.75347884] ... [5.05132432 5.93558368 9.07536994 ... 3.65136772 6.29371461 3.87511569] [7.44320641 3.3288574 0.27708193 ... 2.10070278 6.34682483 9.01881409] [1.52621504 9.35146825 1.50775586 ... 2.60806502 5.0169436 4.3254457 ]] Shape: (1000, 1000) Time took to read: 0.5665757656097412 seconds. ###Markdown Save as a .npy file and read ###Code np.save('fnumpy.npy',array_lst) t1=time.time() array_reloaded = np.load('fnumpy.npy') t2=time.time() print(array_reloaded) print('\nShape: ',array_reloaded.shape) print(f"Time took to load: {t2-t1} seconds.") t1=time.time() array_reloaded = np.load('fnumpy.npy').reshape(10000,100) t2=time.time() print(array_reloaded) print('\nShape: ',array_reloaded.shape) print(f"Time took to load: {t2-t1} seconds.") ###Output [[0.19518972 6.39889099 8.86179077 ... 9.63106665 1.85069765 4.25534529] [8.14357747 6.56347765 8.02998847 ... 0.93040803 6.67272555 6.91109852] [1.51447298 4.95842653 5.70978086 ... 7.32853786 4.5788133 3.44281139] ... [2.35366938 9.77452941 6.17511558 ... 9.61026204 2.85523678 1.55581879] [2.97670426 8.5953027 8.24646196 ... 0.62146363 5.68336225 8.85861478] [1.29283608 0.21021618 4.93077805 ... 2.60806502 5.0169436 4.3254457 ]] Shape: (10000, 100) Time took to load: 0.006982326507568359 seconds. ###Markdown Speed enhancement as the sample size grows... ###Code n_samples=[100000*i for i in range(1,11)] time_lst_read=[] time_npy_read=[] for sample_size in n_samples: with open('fdata.txt', 'w') as fdata: for _ in range(sample_size): fdata.write(str(10*np.random.random())+',') t1=time.time() with open('fdata.txt','r') as fdata: datastr=fdata.read() lst = datastr.split(',') lst.pop() array_lst=np.array(lst,dtype=float) t2=time.time() time_lst_read.append(1000*(t2-t1)) print("Array shape:",array_lst.shape) np.save('fnumpy.npy',array_lst) t1=time.time() array_reloaded = np.load('fnumpy.npy') t2=time.time() time_npy_read.append(1000*(t2-t1)) print("Array shape:",array_reloaded.shape) print(f"Processing done for {sample_size} samples\n") import matplotlib.pyplot as plt plt.figure(figsize=(8,5)) #plt.xscale('log') #plt.yscale('log') plt.scatter(n_samples,time_lst_read) plt.scatter(n_samples,time_npy_read) plt.legend(['Normal read from CSV','Read from .npy file']) plt.show() ###Output _____no_output_____ ###Markdown Speed difference between reading numerical data from plain CSV vs. 
using _.npy_ file format ###Code import numpy as np import time n_samples=1000000 with open('fdata.txt', 'w') as fdata: for _ in range(n_samples): fdata.write(str(10*np.random.random())+',') t1=time.time() array_direct = np.fromfile('fdata.txt',dtype=float, count=-1,sep=',').reshape(1000,1000) t2=time.time() print(array_direct) print('\nShape: ',array_direct.shape) print(f"Time took to read: {t2-t1} seconds.") t1=time.time() with open('fdata.txt','r') as fdata: datastr=fdata.read() lst = datastr.split(',') lst.pop() array_lst=np.array(lst,dtype=float).reshape(1000,1000) t2=time.time() print(array_lst) print('\nShape: ',array_lst.shape) print(f"Time took to read: {t2-t1} seconds.") np.save('fnumpy.npy',array_lst) t1=time.time() array_reloaded = np.load('fnumpy.npy') t2=time.time() print(array_reloaded) print('\nShape: ',array_reloaded.shape) print(f"Time took to load: {t2-t1} seconds.") t1=time.time() array_reloaded = np.load('fnumpy.npy').reshape(10000,100) t2=time.time() print(array_reloaded) print('\nShape: ',array_reloaded.shape) print(f"Time took to load: {t2-t1} seconds.") ###Output [[0.32614787 6.84798256 2.59321025 ... 3.01180325 2.39479796 0.72345778] [3.69505384 4.53401889 8.36879084 ... 9.9009631 7.33501957 2.50186053] [4.35664074 4.07578682 1.71320519 ... 8.33236349 7.2902005 5.27535724] ... [1.11051629 5.43382324 3.86440843 ... 4.38217095 0.23810232 1.27995629] [2.56255361 7.8052843 6.67015391 ... 3.02916997 4.76569949 0.95855667] [6.06043577 5.8964256 4.57181929 ... 5.6204355 4.47407948 9.50940101]] Shape: (10000, 100) Time took to load: 0.010006189346313477 seconds. ###Markdown Speed enhancement as the sample size grows... ###Code n_samples=[100000*i for i in range(1,11)] time_lst_read=[] time_npy_read=[] for sample_size in n_samples: with open('fdata.txt', 'w') as fdata: for _ in range(sample_size): fdata.write(str(10*np.random.random())+',') t1=time.time() with open('fdata.txt','r') as fdata: datastr=fdata.read() lst = datastr.split(',') lst.pop() array_lst=np.array(lst,dtype=float) t2=time.time() time_lst_read.append(1000*(t2-t1)) print("Array shape:",array_lst.shape) np.save('fnumpy.npy',array_lst) t1=time.time() array_reloaded = np.load('fnumpy.npy') t2=time.time() time_npy_read.append(1000*(t2-t1)) print("Array shape:",array_reloaded.shape) print(f"Processing done for {sample_size} samples\n") import matplotlib.pyplot as plt plt.figure(figsize=(8,5)) #plt.xscale('log') #plt.yscale('log') plt.scatter(n_samples,time_lst_read) plt.scatter(n_samples,time_npy_read) plt.legend(['Normal read from CSV','Read from .npy file']) plt.show() time_npy_read ###Output _____no_output_____
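###Markdown One further variation that the benchmark above does not cover (an addition for illustration, not part of the original comparison): `np.load` can memory-map the `.npy` file, so opening it is nearly instant and bytes are only read from disk when slices are actually accessed. ###Code
t1 = time.time()
array_mmap = np.load('fnumpy.npy', mmap_mode='r')  # lazily maps the file instead of reading it all
t2 = time.time()

print(array_mmap.shape)
print(f"Time took to open memory-mapped array: {t2-t1} seconds.")
###Output _____no_output_____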
project3/.Trash-0/files/project_3_solution.ipynb
###Markdown Project 3: Smart Beta Portfolio and Portfolio Optimization InstructionsEach problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a ` TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it Udacity. PackagesWhen you implement the functions, you'll only need to use the [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/) packages. Don't import any other packages, otherwise the grader willn't be able to run your code.The other packages that we're importing is `helper` and `project_tests`. These are custom packages built to help you solve the problems. The `helper` module contains utility functions and graph functions. The `project_tests` contains the unit tests for all the problems. Install Packages ###Code import sys !{sys.executable} -m pip install -r requirements.txt ###Output _____no_output_____ ###Markdown Load Packages ###Code import pandas as pd import numpy as np import helper import project_tests ###Output _____no_output_____ ###Markdown Market DataThe data source we'll be using is the [Wiki End of Day data](https://www.quandl.com/databases/WIKIP) hosted at [Quandl](https://www.quandl.com). This contains data for many stocks, but we'll just be looking at the S&P 500 stocks. We'll also make things a little easier to solve by narrowing our range of time from 2007-06-30 to 2017-09-30. Set API KeySet the `quandl.ApiConfig.api_key ` variable to your Quandl api key. You can find your Quandl api key [here](https://www.quandl.com/account/api). ###Code import quandl # TODO: Add your Quandl API Key quandl.ApiConfig.api_key = '' ###Output _____no_output_____ ###Markdown Download Data ###Code import os snp500_file_path = 'data/tickers_SnP500.txt' wiki_file_path = 'data/WIKI_PRICES.csv' start_date, end_date = '2013-07-01', '2017-06-30' use_columns = ['date', 'ticker', 'adj_close', 'adj_volume', 'ex-dividend'] if not os.path.exists(wiki_file_path): with open(snp500_file_path) as f: tickers = f.read().split() print('Downloading data...') helper.download_quandl_dataset('WIKI', 'PRICES', wiki_file_path, use_columns, tickers, start_date, end_date) print('Data downloaded') else: print('Data already downloaded') ###Output _____no_output_____ ###Markdown Load Data ###Code df = pd.read_csv(wiki_file_path, index_col=['ticker', 'date']) ###Output _____no_output_____ ###Markdown Create the UniverseWe'll be selecting large cap stocks for our stock universe. These stocks are the most liquid and allow to use more buying power with less slippage. ###Code # TODO(Brok): Use large cap stock data percent_top_dollar = 0.2 high_volume_symbols = helper.large_dollar_volume_stocks(df, 'adj_close', 'adj_volume', percent_top_dollar) df = df.loc[high_volume_symbols, :] ###Output _____no_output_____ ###Markdown 2-D MatricesIn the previous projects, we used a [multiindex](https://pandas.pydata.org/pandas-docs/stable/advanced.html) to store all the data in a single dataframe. As you work with larger datasets, it come infeasable to store all the data in memory. 
Starting with this project, we'll be storing all our data as 2-D matrices to match what you'll be expecting in the real world. ###Code close = df.reset_index().pivot(index='ticker', columns='date', values='adj_close') volume = df.reset_index().pivot(index='ticker', columns='date', values='adj_volume') ex_dividend = df.reset_index().pivot(index='ticker', columns='date', values='ex-dividend') ###Output _____no_output_____ ###Markdown View DataTo see what one of these 2-d matrices looks like, let's take a look at the closing prices matrix. ###Code helper.print_dataframe(close) ###Output _____no_output_____ ###Markdown Part 1: Smart Beta PortfolioIn Part 1 of this project, you'll build a smart beta portfolio using dividend yied. To see how well it performs, you'll compare this portfolio to an index. Index WeightsAfter building the smart beta portfolio, should compare it to a similar strategy or index. For this project, we'll use a simple index with one of the most common factor of market capitalization.Implement `generate_dollar_volume_weights` to generate the weights for this index. For each date, generate the weights based on market cap for that date. For example, assume the following is market cap data:| | 10/02/2010 | 10/03/2010 ||----------|------------|------------|| **AAPL** | 2 | 2 || **BBC** | 5 | 6 || **GGL** | 1 | 2 || **ZGB** | 6 | 5 |The weights should be the following:| | 10/02/2010 | 10/03/2010 ||----------|------------|------------|| **AAPL** | 0.142 | 0.133 || **BBC** | 0.357 | 0.400 || **GGL** | 0.071 | 0.133 || **ZGB** | 0.428 | 0.333 | ###Code # TODO(Brok): Generate weights from market cap # TODO(Brok): Change function name def generate_dollar_volume_weights(close, volume): """ Generate dollar volume weights. Parameters ---------- close : DataFrame Close price for each ticker and date volume : str Volume for each ticker and date Returns ------- dollar_volume_weights : DataFrame The dollar volume weights for each ticker and date """ assert close.index.equals(volume.index) assert close.columns.equals(volume.columns) #TODO: Implement function dollar_volume = close * volume return dollar_volume / dollar_volume.sum() project_tests.test_generate_dollar_volume_weights(generate_dollar_volume_weights) ###Output _____no_output_____ ###Markdown View DataLet's generate the index weights using `generate_dollar_volume_weights` and view them using a heatmap. ###Code index_weights = generate_dollar_volume_weights(close, volume) helper.plot_weights(index_weights, 'Index Weights') ###Output _____no_output_____ ###Markdown ETF WeightsNow that we have the index weights, it's time to build the weights for the smart beta ETF. Let's build an ETF portfolio that is based on dividends. This is a common factor used to build portfolios. Unlike most portfolios, we'll be using a single factor for simplicity.Implement `calculate_dividend_weights` to returns the weights for each stock based on it's total dividend yield over time. This is similar to generating the weight for the index, but it's dividend data instead of market cap. ###Code def calculate_dividend_weights(ex_dividend): """ Calculate dividend weights. 
Parameters ---------- ex_dividend : DataFrame Ex-dividend for each stock and date Returns ------- dividend_weights : DataFrame Weights for each stock and date """ #TODO: Implement function dividend_cumsum_per_ticker = ex_dividend.T.cumsum().T return dividend_cumsum_per_ticker/dividend_cumsum_per_ticker.sum() project_tests.test_calculate_dividend_weights(calculate_dividend_weights) ###Output _____no_output_____ ###Markdown View DataLet's generate the ETF weights using `calculate_dividend_weights` and view them using a heatmap. ###Code etf_weights = calculate_dividend_weights(ex_dividend) helper.plot_weights(etf_weights, 'ETF Weights') ###Output _____no_output_____ ###Markdown ReturnsImplement `generate_returns` to generate the returns. Note this isn't log returns. Since we're not dealing with volatility, we don't have to use log returns. ###Code def generate_returns(close): """ Generate returns for ticker and date. Parameters ---------- close : DataFrame Close price for each ticker and date Returns ------- returns : Dataframe The returns for each ticker and date """ #TODO: Implement function return (close.T / close.T.shift(1) -1).T project_tests.test_generate_returns(generate_returns) ###Output _____no_output_____ ###Markdown View DataLet's generate the closing returns using `generate_returns` and view them using a heatmap. ###Code returns = generate_returns(close) helper.plot_returns(returns, 'Close Returns') ###Output _____no_output_____ ###Markdown Weighted ReturnsWith the returns of each stock computed, we can use it to compute the returns for for an index or ETF. Implement `generate_weighted_returns` to create weighted returns using returns and weights for an Index or ETF. ###Code def generate_weighted_returns(returns, weights): """ Generate weighted returns. Parameters ---------- returns : DataFrame Returns for each ticker and date weights : DataFrame Weights for each ticker and date Returns ------- weighted_returns : DataFrame Weighted returns for each ticker and date """ assert returns.index.equals(weights.index) assert returns.columns.equals(weights.columns) #TODO: Implement function return returns * weights project_tests.test_generate_weighted_returns(generate_weighted_returns) ###Output _____no_output_____ ###Markdown View DataLet's generate the etf and index returns using `generate_weighted_returns` and view them using a heatmap. ###Code index_weighted_returns = generate_weighted_returns(returns, index_weights) etf_weighted_returns = generate_weighted_returns(returns, etf_weights) helper.plot_returns(index_weighted_returns, 'Index Returns') helper.plot_returns(etf_weighted_returns, 'ETF Returns') ###Output _____no_output_____ ###Markdown Cumulative ReturnsImplement `calculate_cumulative_returns` to calculate the cumulative returns over time. ###Code def calculate_cumulative_returns(returns): """ Calculate cumulative returns. Parameters ---------- returns : DataFrame Returns for each ticker and date Returns ------- cumulative_returns : Pandas Series Cumulative returns for each date """ #TODO: Implement function return (pd.Series([0]).append(returns.sum()) + 1).cumprod().iloc[1:] project_tests.test_calculate_cumulative_returns(calculate_cumulative_returns) ###Output _____no_output_____ ###Markdown View DataLet's generate the etf and index cumulative returns using `calculate_cumulative_returns` and compare the two. 
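Before running it on the full matrices, here is a minimal numeric sketch of what `calculate_cumulative_returns` computes; the dates and per-date portfolio returns below are made up purely to illustrate the arithmetic, they are not taken from the project data:

```python
import numpy as np
import pandas as pd

# Made-up per-date portfolio returns (already summed across tickers).
portfolio_returns = pd.Series(
    [0.01, -0.02, 0.03],
    index=pd.to_datetime(['2017-01-03', '2017-01-04', '2017-01-05']))

# Cumulative return through date t is the running product of (1 + r_s).
cumulative = (1 + portfolio_returns).cumprod()
print(cumulative)

# The last value should equal 1.01 * 0.98 * 1.03.
assert np.isclose(cumulative.iloc[-1], 1.01 * 0.98 * 1.03)
```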
###Code index_weighted_cumulative_returns = calculate_cumulative_returns(index_weighted_returns) etf_weighted_cumulative_returns = calculate_cumulative_returns(etf_weighted_returns) helper.plot_benchmark_returns(index_weighted_cumulative_returns, etf_weighted_cumulative_returns, 'Smart Beta ETF vs Index') ###Output _____no_output_____ ###Markdown Tracking ErrorIn order to check the performance of the smart beta protfolio, we can compare it against the index. Implement `tracking_error` to return the tracking error between the etf and index over time. ###Code def tracking_error(index_weighted_cumulative_returns, etf_weighted_cumulative_returns): """ Calculate the tracking error. Parameters ---------- index_weighted_cumulative_returns : Pandas Series The weighted index Cumulative returns for each date etf_weighted_cumulative_returns : Pandas Series The weighted etf Cumulative returns for each date Returns ------- tracking_error : Pandas Series The tracking error for each date """ assert index_weighted_cumulative_returns.index.equals(etf_weighted_cumulative_returns.index) #TODO: Implement function tracking_error = index_weighted_cumulative_returns - etf_weighted_cumulative_returns return tracking_error project_tests.test_tracking_error(tracking_error) ###Output _____no_output_____ ###Markdown View DataLet's generate the tracking error using `tracking_error` and graph it over time. ###Code smart_beta_tracking_error = tracking_error(index_weighted_cumulative_returns, etf_weighted_cumulative_returns) helper.plot_tracking_error(smart_beta_tracking_error, 'Smart Beta Tracking Error') ###Output _____no_output_____ ###Markdown Part 2: Portfolio OptimizationIn Part 2, you'll optimize the index you created in part 1. You'll use `cvxpy` to optimize the convex problem of finding the optimal weights for the portfolio. Just like before, we'll compare these results to the index. CovarianceImplement `get_covariance` to calculate the covariance of `returns` and `weighted_index_returns`. We'll use this to feed into our convex optimization function. By using covariance, we can prevent the optimizer from going all in on a few stocks. ###Code def get_covariance(returns, weighted_index_returns): """ Calculate covariance matrices. Parameters ---------- returns : DataFrame Returns for each ticker and date weighted_index_returns : DataFrame Weighted index returns for each ticker and date Returns ------- xtx, xty : (2 dimensional Ndarray, 1 dimensional Ndarray) """ assert returns.index.equals(weighted_index_returns.index) assert returns.columns.equals(weighted_index_returns.columns) #TODO: Implement function returns = returns.fillna(0) weighted_index_returns = weighted_index_returns.sum().fillna(0) xtx = returns.dot(returns.T) xty = returns.dot(np.matrix(weighted_index_returns).T)[0] return xtx.values, xty.values project_tests.test_get_covariance(get_covariance) ###Output _____no_output_____ ###Markdown View Data???? ###Code xtx, xty = get_covariance(returns, index_weighted_returns) xtx = pd.DataFrame(xtx, returns.index, returns.index) xty = pd.Series(xty, returns.index) helper.plot_covariance(xty, xtx) ###Output _____no_output_____ ###Markdown Quadratic ProgrammingNow that you have the covariance, we can use this to optomize the weights. 
Implement `solve_qp` to return the optimal `x` in the convex function with the following constrains:- Sum of all x is 1- x >= 0 ###Code # TODO(Brok): Use cvxpy import cvxopt def solve_qp(P, q): """ Find the solution for minimize 0.5P*x*x - q*x with the following constraints: - sum of all x equals to 1 - All x are greater than or equal to 0 Parameters ---------- P : 2 dimensional Ndarray q : 1 dimensional Ndarray Returns ------- x : 1 dimensional Ndarray The solution for x """ assert len(P.shape) == 2 assert len(q.shape) == 1 assert P.shape[0] == P.shape[1] == q.shape[0] #TODO: Implement function nn = len(q) g = cvxopt.spmatrix(-1, range(nn), range(nn)) a = cvxopt.matrix(np.ones(nn), (1,nn)) b = cvxopt.matrix(1.0) h = cvxopt.matrix(np.zeros(nn)) P = cvxopt.matrix(P) q = -cvxopt.matrix(q) # Min cov # Max return cvxopt.solvers.options['show_progress'] = False sol = cvxopt.solvers.qp(P, q, g, h, a, b) if 'optimal' not in sol['status']: return np.array([]) return np.array(sol['x']).flatten() project_tests.test_solve_qp(solve_qp) ###Output _____no_output_____ ###Markdown View Data???? ###Code raw_optim_etf_weights = solve_qp(xtx.values, xty.values) raw_optim_etf_weights_per_date = np.tile(raw_optim_etf_weights, (len(returns.columns), 1)) optim_etf_weights = pd.DataFrame(raw_optim_etf_weights_per_date.T, returns.index, returns.columns) ###Output _____no_output_____ ###Markdown Optimized PortfolioWith our optimized etf weights built using quadradic programming, let's compare it to the index. Run the next cell to calculate the optimized etf returns and compare the returns to the index returns. ###Code optim_etf_returns = generate_weighted_returns(returns, optim_etf_weights) optim_etf_cumulative_returns = calculate_cumulative_returns(optim_etf_returns) helper.plot_benchmark_returns(index_weighted_cumulative_returns, optim_etf_cumulative_returns, 'Optimized ETF vs Index') optim_etf_tracking_error = tracking_error(index_weighted_cumulative_returns, optim_etf_cumulative_returns) helper.plot_tracking_error(optim_etf_tracking_error, 'Optimized ETF Tracking Error') ###Output _____no_output_____ ###Markdown Rebalance PortfolioThe optimized etf porfolio used different weights for each day. After calculating in transation fees, this amount of turnover to the portfolio can reduce the total returns. Let's find the optimal times to rebalancve the portfolio instead of doing it every day.Implement `rebalance_portfolio` to rebalance a portfolio. ###Code def rebalance_portfolio(returns, weighted_index_returns, shift_size, chunk_size): """ Get weights for each rebalancing of the portfolio. 
Parameters ---------- returns : DataFrame Returns for each ticker and date weighted_index_returns : DataFrame Weighted index returns for each ticker and date shift_size : int The number of days between each rebalance chunk_size : int The number of days to look in the past for rebalancing Returns ------- all_rebalance_weights : list of Ndarrays The etf weights for each point they are rebalanced """ assert returns.index.equals(weighted_index_returns.index) assert returns.columns.equals(weighted_index_returns.columns) assert shift_size > 0 assert chunk_size >= 0 #TODO: Implement function date_len = returns.shape[1] all_rebalance_weights = [] for shift in range(chunk_size, date_len, shift_size): start_idx = shift - chunk_size xtx, xty = get_covariance(returns.iloc[:, start_idx:shift], weighted_index_returns.iloc[:, start_idx:shift]) all_rebalance_weights.append(solve_qp(xtx, xty)) return all_rebalance_weights project_tests.test_rebalance_portfolio(rebalance_portfolio) ###Output _____no_output_____ ###Markdown View Data???? ###Code chunk_size = 250 shift_size = 5 all_rebalance_weights = rebalance_portfolio(returns, index_weighted_returns, shift_size, chunk_size) ###Output _____no_output_____ ###Markdown Portfolio Rebalance CostWith the portfolio rebalanced, we need to use a metric to measure the cost of rebalancing the portfolio. Imeplement `get_rebalance_cost` to calculate the rebalance cost. ###Code def get_rebalance_cost(all_rebalance_weights, shift_size, rebalance_count): """ Get the cost of all the rebalancing. Parameters ---------- all_rebalance_weights : list of Ndarrays ETF Returns for each ticker and date shift_size : int The number of days between each rebalance rebalance_count : int Number of times the portfolio was rebalanced Returns ------- rebalancing_cost : float The cost of all the rebalancing """ assert shift_size > 0 assert rebalance_count > 0 #TODO: Implement function all_rebalance_weights_df = pd.DataFrame(np.array(all_rebalance_weights)) rebalance_total = (all_rebalance_weights_df - all_rebalance_weights_df.shift(-1)).abs().sum().sum() return (shift_size / rebalance_count) * rebalance_total project_tests.test_get_rebalance_cost(get_rebalance_cost) ###Output _____no_output_____ ###Markdown View Data???? ###Code unconstrained_costs = get_rebalance_cost(all_rebalance_weights, shift_size, returns.shape[1]) ###Output _____no_output_____ ###Markdown Potfolio Optimization Results???? ###Code # TODO(Brok): Add plot # IGNORE THIS CODE # THIS CODE IS TEST CODE FOR BUILDING PROJECT # THIS WILL BE REMOVED BEFORE FINAL PROJECT # Error checking while refactoring assert np.isclose(optim_etf_weights, np.load('check_data/po_weights.npy'), equal_nan=True).all() assert np.isclose(optim_etf_tracking_error, np.load('check_data/po_tracking_error.npy'), equal_nan=True).all() assert np.isclose(smart_beta_tracking_error, np.load('check_data/sb_tracking_error.npy'), equal_nan=True).all() # Error checking while refactoring assert np.isclose(unconstrained_costs, 0.10739965758876144), unconstrained_costs ###Output _____no_output_____
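To build intuition for what `get_rebalance_cost` is measuring, here is a small hand-worked sketch on made-up weight vectors for four tickers (the numbers are illustrative only):

```python
import numpy as np

# Two consecutive rebalance weight vectors (made-up numbers).
weights_t0 = np.array([0.25, 0.25, 0.25, 0.25])
weights_t1 = np.array([0.40, 0.10, 0.25, 0.25])

# Turnover between the two rebalances: total absolute weight traded.
turnover = np.abs(weights_t1 - weights_t0).sum()
print(turnover)  # 0.15 + 0.15 + 0 + 0 = 0.30 of the portfolio changes hands
```

`get_rebalance_cost` above sums this turnover over every pair of consecutive rebalances and scales it by `shift_size / rebalance_count`, so rebalancing more often, or with larger weight swings, drives the cost up.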
docs/tutorials/datasets/fetch_fimi_datasets.ipynb
###Markdown fetch [FIMI](http://fimi.uantwerpen.be/data/) datasets`FIMI` is a popular repository referencing standard datasets and algorithms in pattern mining load datasets separately ###Code from skmine.datasets.fimi import fetch_chess from skmine.datasets.fimi import fetch_accidents from skmine.datasets.fimi import fetch_kosarak chess = fetch_chess() chess.str[:10].head() # .str accessor allows horizontal slicing accidents = fetch_accidents() accidents.str[:10].head() ###Output _____no_output_____ ###Markdown Describe those datasets ###Code from skmine.datasets.utils import describe describe(chess) ###Output _____no_output_____
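If you want to sanity-check `describe`, or compute something it does not report, a minimal sketch like the following works directly on the fetched data; it assumes, as the `.str` slicing above suggests, that each entry of `chess` is a list of item identifiers:

```python
# A few summary numbers computed by hand on the fetched Series.
n_transactions = len(chess)
avg_transaction_length = chess.map(len).mean()
n_items = len({item for transaction in chess for item in transaction})

print(n_transactions, avg_transaction_length, n_items)
```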
Mathematical Statistics/Regression Analysis.ipynb
###Markdown 回归分析 Regression AnalysisxyfJASON在「插值与拟合」中我们已经了解了最小二乘法作曲线拟合,但是从数理统计的观点看,数据点的观测具有误差,可以视为**随机变量**,我们有必要对结果作区间估计或假设检验,以评估结果的可信度和模型的优劣。**简单地说,回归分析就是对拟合问题作统计分析。**回归分析会研究以下几个问题:1. 建立因变量 $y$ 和自变量 $x_1,x_2,\ldots,x_m$ 之间的回归模型;2. 检验回归模型的可信度(拟合效果);3. 检验每个自变量 $x_i$ 对 $y$ 的影响是否显著;4. 判断回归模型是否适合样本数据;5. 使用回归模型 1 一元线性回归 1.1 模型一元线性回归的模型为:$$y=\beta_0+\beta_1x+\epsilon$$其中 $\beta_0,\beta_1$ 是回归系数,$\epsilon\sim N(0, \sigma^2)$ 是随机误差项,故随机变量 $y\sim N(\beta_0+\beta_1x, \sigma^2)$。设我们进行了 $n$ 次观测,得到样本 $\{x_i,y_i\},\,i=1,2,\ldots,n$,它们符合模型:$y_i=\beta_0+\beta_1x_i+\epsilon_i$,且 $\epsilon_i$ 之间相互独立。 1.2 最小二乘估计 1.2.1 最小二乘法最小二乘估计即取 $\beta_0,\beta_1$ 的一组估计值 $\hat\beta_0,\hat\beta_1$,使得误差平方和最小:$$\hat\beta_0,\hat\beta_1=\arg\min_{\beta_0,\beta_1}\sum_{i=1}^n\left(y_i-\beta_0-\beta_1x_i\right)^2$$令偏导为零,解方程可得:$$\hat\beta_1=\frac{\sum\limits_{i=1}^n(x_i-\bar x)(y_i-\bar y)}{\sum\limits_{i=1}^n(x_i-\bar x)^2},\quad\hat\beta_0=\bar y-\hat\beta_1\bar x$$其中 $\bar x=\frac{1}{n}\sum\limits_{i=1}^nx_i,\,\bar y=\frac{1}{n}\sum\limits_{i=1}^ny_i$。也可以改写为:$$\hat\beta_1=\frac{s_y}{s_x}r_{xy}$$其中 $s_x^2=\frac{1}{n-1}\sum\limits_{i=1}^n(x_i-\bar x)^2,\,s_y^2=\frac{1}{n-1}\sum\limits_{i=1}^n(y_i-\bar y)^2$ 是样本方差,$r_{xy}=\cfrac{\sum\limits_{i=1}^n(x_i-\bar x)(y_i-\bar y)}{\sqrt{\sum\limits_{i=1}^n(x_i-\bar x)^2}\sqrt{\sum\limits_{i=1}^n(y_i-\bar y)^2}}$ 是 $x$ 与 $y$ 的样本相关系数。特别地,当 $x,y$ 均已标准化时,$\bar x=\bar y=0,\,s_x=s_y=1$,于是回归方程为:$$\hat y=r_{xy}x$$ 1.2.2 $\hat\beta_1$ 的性质注意 $\hat\beta_1$ 是一个随机变量,它具有以下性质:1. $\hat\beta_1$ 可以写作 $y_i$ 的线性组合,即 $\hat\beta_1=\sum\limits_{i=1}^nk_iy_i$,其中 $k_i=\cfrac{x_i-\bar x}{\sum\limits_{j=1}^n(x_j-\bar x)^2}$;2. 由于 $y_i$ 是相互独立的正态随机变量,所以 $\hat\beta_1$ 也是正态随机变量;3. 点估计量 $\hat\beta_1$ 是真值 $\beta_1$ 的无偏估计,即 $\mathbb E[\hat \beta_1]=\beta_1$;4. 点估计量 $\hat\beta_1$ 的方差为:$\text{var}(\hat\beta_1)=\cfrac{\sigma^2}{\sum\limits_{i=1}^n(x_i-\bar x)^2}$ 1.2.3 其他性质最小二乘还具有一些值得注意的性质:1. 残差和为零:$\sum\limits_{i=1}^ne_i=\sum\limits_{i=1}^n(y_i-\hat y_i)=0$2. 拟合值 $\hat y_i$ 的平均值等于观测值 $y_i$ 的平均值:$\frac{1}{n}\sum\limits_{i=1}^n\hat y_i=\frac{1}{n}\sum\limits_{i=1}^n y_i=\bar y$3. $\sum\limits_{i=1}^nx_ie_i=0$4. $\sum\limits_{i=1}^n\hat y_ie_i=0$5. 
回归直线总是过 $(\bar x, \bar y)$ 1.3 拟合效果分析 1.3.1 残差的样本方差残差:$e_i=y_i-\hat y_i$,其样本均值为:$\frac{1}{n}\sum\limits_{i=1}^n(y_i-\hat y_i)=0$,其样本方差(也即均方误差)为:$$\text{MSE}=\frac{1}{n-2}\sum_{i=1}^ne_i^2=\frac{1}{n-2}\sum_{i=1}^n(y_i-\hat y_i)^2$$(由于有两个约束:$\sum\limits_{i=1}^ne_i=0,\,\sum\limits_{i=1}^nx_ie_i=0$,所以自由度为 $n-2$)$\text{MSE}$ 是总体方差 $\sigma^2=\text{var}(\epsilon_i)$ 的无偏估计量。 1.3.2 判定系数不同的 $x_i$ 对应不同的 $y_i$,建立一元线性回归模型,就是试图用 $x$ 的线性函数解释 $y$ 的变异。因此我们需要判定回归模型 $\hat y=\hat\beta_1x+\hat \beta_0$ 究竟能以多大精度解释 $y$ 的变异。$y$ 的变异可以由样本方差刻画:$$s^2=\frac{1}{n-1}\sum_{i=1}^n(y_i-\bar y)^2$$根据前述性质,拟合值 $\hat y_i$ 的均值也是 $\bar y$,故其变异程度可以类似地刻画:$$\hat s^2=\frac{1}{n-1}\sum_{i=1}^n(\hat y_i-\bar y)^2$$上述二者的关系是:$$\sum_{i=1}^n(y_i-\bar y)^2=\sum_{i=1}^n(\hat y_i-\bar y)^2+\sum_{i=1}^n(y_i-\hat y_i)^2$$上式中第一项 $\text{SST}=\sum\limits_{i=1}^n(y_i-\bar y)^2$ 是原始数据的变异程度;第二项 $\text{SSR}=\sum\limits_{i=1}^n(\hat y_i-\bar y)^2$ 是拟合数据的变异程度;第三项 $\text{SSE}=\sum\limits_{i=1}^n(y_i-\hat y_i)^2$ 是残差平方和。对于一个确定的样本,$\text{SST}$ 是固定的,$\text{SSR}$ 越大,说明回归方程能越好地解释原数据的变异;同时 $\text{SSE}$ 越小,说明回归方程对原数据拟合得越好。定义**判定系数**(coefficient of determination):$$R^2=\frac{\text{SSR}}{\text{SST}}=1-\frac{\text{SSE}}{\text{SST}}$$可以知道 $R^2\in[0,1]$,且其数值越大,表明拟合得越好;当 $R^2=1$ 时,拟合点与原数据完全吻合;若预测函数为常值函数(即始终预测 $\bar y$),那么 $R^2=0$,因为它完全不能解释原数据的变异。可以证明,$\sqrt{R^2}$ 等于 $x,y$ 的相关系数绝对值,其符号与 $\hat\beta_1$ 相同。 1.4 显著性检验现在我们给出了 $x$ 和 $y$ 的线性关系,但是这基于一个假设:$x$ 的变化确实会影响到 $y$ 的变化,这个假设是否真实还需要检验。如果 $x$ 对 $y$ 没有显著影响,相当于假设:$H_0:\beta_1=0$,则在该假设下有:$$F=\frac{\text{SSR}/1}{\text{SSE}/(n-2)}\sim F(1, n-2)$$于是我们可以对该统计量做 F 检验。 2 多元线性回归 2.1 模型多元回归分析的模型为:$$y=\beta_0+\beta_1x_1+\cdots+\beta_mx_m+\epsilon$$其中,$\beta_0,\ldots, \beta_m$ 是回归系数,$\epsilon\sim N(0, \sigma^2)$ 是随机误差项。设我们进行了 $n$ 次观测,得到数据:$(y_i, x_{i1},\ldots,x_{im}),\,i=1,2,\ldots,n$,则:$$y_i=\beta_0+\beta_1x_{i1}+\cdots+\beta_mx_{im}+\epsilon_i,\,i=1,2,\ldots,n$$若令:$$Y=\begin{bmatrix}y_1\\\vdots\\y_n\end{bmatrix},\,X=\begin{bmatrix}1&x_{11}&\cdots&x_{1m}\\\vdots&\vdots&\ddots&\vdots\\1&x_{n1}&\cdots&x_{nm}\end{bmatrix},\,\epsilon=\begin{bmatrix}\epsilon_1\\\vdots\\\epsilon_n\end{bmatrix},\,\beta=\begin{bmatrix}\beta_0\\\beta_1\\\vdots\\\beta_m\end{bmatrix}$$那么上式可以表示为:$$Y=X\beta+\epsilon$$ 2.2 最小二乘估计仍然使用最小二乘法,使得误差平方和最小:$$\hat\beta=\arg\min_\beta\sum_{i=1}^n(y_i-\beta\cdot x_i)^2$$可以解得(若 $X$ 满秩):$$\hat\beta=(X^TX)^{-1}X^TY$$于是数据的拟合值为 $\hat Y=X\hat\beta$,残差为 $e=Y-\hat Y$,残差平方和为 $\text{SSE}=\sum\limits_{i=1}^ne_i^2=\sum\limits_{i=1}^n(y_i-\hat y_i)^2$ 2.3 统计分析1. $\hat\beta$ 是 $\beta$ 的线性无偏最小方差估计;2. $\beta\sim N(\beta, \sigma^2(X^TX)^{-1})$3. $\mathbb E[\text{SSE}]=(n-m-1)\sigma^2$,$\frac{\text{SSE}}{\sigma^2}\sim \chi^2(n-m-1)$,由此得到 $\sigma^2$ 的无偏估计:$s^2=\frac{\text{SSE}}{n-m-1}=\hat\sigma^2$,称 $s^2$ 为剩余方差,$s$ 为剩余标准差;4. $\text{SST}=\text{SSE}+\text{SSR}$,其中 $\text{SSR}=\sum\limits_{i=1}^n(\hat y-\bar y)^2$;5. 
判定系数:$R^2=\frac{\text{SSR}}{\text{SST}}=1-\frac{\text{SSE}}{\text{SST}}$ 2.4 假设检验令原假设 $H_0:\beta_1=\cdots=\beta_m=0$,那么假设成立时有:$$F=\frac{\text{SSR}/m}{\text{SSE}/(n-m-1)}\sim F(m, n-m-1)$$于是我们可以对该统计量做 F 检验。但是当上述 $H_0$ 被拒绝时,只能说明 $\beta_j$ 不全为零,不能排除其中若干个为零。如果要对每一个 $\beta_j$ 进行判断,应该作下述 $m+1$ 个 $t$ 检验:设原假设 $H_0^{(j)}:\beta_j=0$,则当 $H_0^{(j)}$ 成立时,有:$$t_j=\frac{\hat\beta_j/\sqrt{c_{jj}}}{\sqrt{\text{SSE}/(n-m-1)}}\sim t(n-m-1)$$其中,$c_{jj}$ 是 $(X^TX)^{-1}$ 中第 $(j, j)$ 元素。 3 变量筛选待续 4 代码`sklearn.linear_model.LinearRegression` 提供了线性回归的类,支持拟合数据、预测数据、输出 $R^2$ 判定系数。Documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html为了功能的完整性,我在同文件夹下 `statistics.py` 模块中封装了 `RegressionAnalysis` 类,在 `LinearRegression` 基础上增加了 F 检验、t 检验的功能。 5 例题 5.1 例一| $x_1$ | 120 | 140 | 190 | 130 | 155 | 175 | 125 | 145 | 180 | 150 || ----- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- || $x_2$ | 100 | 110 | 90 | 150 | 210 | 150 | 250 | 270 | 300 | 250 || $y$ | 102 | 100 | 120 | 77 | 46 | 93 | 26 | 69 | 65 | 85 | ###Code import numpy as np x1 = np.array([120, 140, 190, 130, 155, 175, 125, 145, 180, 150]) x2 = np.array([100, 110, 90, 150, 210, 150, 250, 270, 300, 250]) y = np.array([102, 100, 120, 77, 46, 93, 26, 69, 65, 85]) from statistics import RegressionAnalysis x = np.hstack((x1.reshape(-1, 1), x2.reshape(-1, 1))) reg = RegressionAnalysis(x, y) print('coef:\t', reg.coef) print('R2:\t', reg.R2) print('MSE:\t', reg.MSE) print('f_test:\t', reg.f_test()) print('t_test:\t', reg.t_test()) ###Output coef: [66.51756832 0.41391526 -0.2697807 ] R2: 0.6527308195492921 MSE: 351.0444925410363 f_test: 0.024679243751388857 t_test: [0.07810956 0.0779739 0.9937435 ]
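As an optional cross-check (this uses `statsmodels`, which is not part of the custom `statistics` module above), the same R2, F-test and t-tests can be obtained from an off-the-shelf OLS fit; assuming `RegressionAnalysis` implements the standard OLS tests described in the sections above, the numbers should agree up to rounding:

```python
import statsmodels.api as sm

# x and y are the arrays defined above; add_constant appends the intercept column.
X_design = sm.add_constant(x)
ols_result = sm.OLS(y, X_design).fit()

print('R2:        ', ols_result.rsquared)
print('F p-value: ', ols_result.f_pvalue)   # overall significance test (H0: beta1 = beta2 = 0)
print('t p-values:', ols_result.pvalues)    # per-coefficient tests, intercept first
```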
introduction/intro_to_conda_and_jupyter.ipynb
###Markdown Welcome to the first Musical Informatics code-along notebook!In this jupyter notebook we take a look at:- setting up a conda environment with jupyter and partitura- finding your way around jupyter notebooks(state: October 2021) Setting up a conda environment (this is the same information as the README file of the introduction repository.)1. Go to anaconda.com and download miniconda (or anaconda, if you prefer) in a version compatible with the system you're running. You can choose any python version you like; for this course we use 3.7. 2. Install this software, which gives you at least two things: the command-line python package manager ```conda``` (similar to the python package manager ```pip```) and the conda environment management system. Conda environments allow you to create virtual python environments with their own python interpreter and installed packages.3. In this repository you find a file called ```environment.yml``` which contains a list of dependencies that we use regularly.4. Create a conda environment with __your_environment_name__ from this file by executing```conda env create -n your_environment_name -f path/to/environment.yml```in the terminal of your choice. Conda supports many shells, though some (PowerShell, ...) might require extra steps after installation.5. You can deactivate the currently active environment by executing```conda deactivate```6. You can activate your environment by executing```conda activate your_environment_name```7. In many terminals you see the name of the currently active environment in parentheses at the beginning of the line. Removing or installing anything via pip or conda will only affect this environment while it is activated. Check your currently installed packages via ```pip freeze``` or ```conda list```. Check the python version using ```python -V``` So how do jupyter notebooks work?Jupyter notebooks are a handy way to mix code with visualizations and explanations. Unfortunately, they can have the side effect of creating bad coding habits. Let's have a look at how they work and what we need to keep in mind while using them. This is a rough overview and only looks at the functionality we will definitely need:1. Jupyter notebooks consist of multiple cells which contain markdown/latex text or python code2. Whether a cell is evaluated as text or code depends on its type: change the type by clicking on "Cell" in the menu bar and then "Cell Type"3. All cells up until now have been markdown text cells. Double click somewhere to see the "source text"4. A cell is evaluated by clicking "Run" in the navigation bar or by typing ```Shift + Enter```5. A text cell is evaluated to rendered text, a python code cell is interpreted, and all text-based or graphic feedback will be displayed just below the cell6. Jupyter Notebooks are *not* run top to bottom by default, but in the order in which you evaluate the cells. It is best practice to force yourself to work top to bottom, however, and make sure you don't overwrite your variables by jumping back and forth.7. Python cells work very similarly to python scripts. You can import packages and files, you can use previously computed variables or previously defined functions. You can plot using matplotlib by typing ```%matplotlib inline``` in the first line of a cell.
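For instance, a minimal plotting cell looks like this (the plotted numbers are arbitrary test data):

```python
%matplotlib inline
import matplotlib.pyplot as plt

plt.plot([0, 1, 2, 3], [0, 1, 4, 9])   # any quick test data will do
plt.title('rendered directly below the cell')
plt.show()
```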
Notebook Tips and RecommendationsGenerally:- Notebooks are great for: exploratory data analysis (visualize your data in many ways, compute some statistics), technical blog posts, tutorials, technical presentations, quick and simple experimentation- Notebooks are not so great for: complex models, in-depth experimentation, compact code sharing for challenge submissions- For these, consider using standard python scripts, modularized and reused in a suitable way, with a main function callable from a terminalSpecifically:- Always execute notebooks top to bottom- Write long functions and classes (20+ lines) in external files and import them- Use python scripts instead of notebooks for extended experiments- Set and use global parameters at the top of the notebook (if it runs parametrized experiments)- Pay close attention not to overwrite variables/functions (for instance, when experimenting with a couple of alternatives in an experiment) ###Code print("Hello World") ###Output Hello World
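A quick way to confirm, from inside a running notebook, that you are actually working in the environment you activated (nothing here depends on this course's specific setup):

```python
import sys

print(sys.executable)   # the path should contain your_environment_name
print(sys.version)      # should match the Python version the environment was created with
```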
Lecture10.ipynb
###Markdown Lecture 10. Metrics and Model Selection for Regression and Classification Regression ###Code import numpy as np import pandas as pd import sklearn import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline housing = pd.read_csv("https://raw.githubusercontent.com/ageron/handson-ml2/master/datasets/housing/housing.csv") housing.head() housing.info() housing.dropna(subset = ['total_bedrooms'], inplace=True) y = housing['median_house_value'] housing.drop(columns=['median_house_value'], inplace=True) housing['ocean_proximity'].value_counts() ###Output _____no_output_____ ###Markdown Create a random feature to see how to detect not important features ###Code housing['random'] = (np.random.rand(len(housing))>0.5).astype(int) ###Output _____no_output_____ ###Markdown Split data into Train and Test ###Code from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(housing, y, test_size=0.2, random_state=42) ###Output _____no_output_____ ###Markdown Create transformer for the nominal data ###Code from sklearn.preprocessing import OneHotEncoder from sklearn.compose import ColumnTransformer transform = ColumnTransformer([('OneHot', OneHotEncoder(drop='first'), ['ocean_proximity'])], remainder='passthrough') transform.fit(X_train) ###Output _____no_output_____ ###Markdown After transformer our data looks as follows ###Code X_train_hot = pd.DataFrame(transform.transform(X_train), columns=transform.get_feature_names_out()) X_train_hot.head() ###Output _____no_output_____ ###Markdown Choosing Data by Correlation, Mutual Information, and by Model $$\operatorname{Cor}(X;Y) = \frac{(X-\bar{X})^T (Y-\bar{Y})}{\sqrt{\operatorname{Var}[X]}\sqrt{\operatorname{Var}[Y]}}$$ Add target variable to the data to see the correlation ###Code cor = pd.concat([X_train_hot, y_train], axis=1).corr() fig, ax = plt.subplots(figsize=(12,12)) sns.heatmap(cor, cmap="RdBu", annot=True) ###Output _____no_output_____ ###Markdown Validation Set ###Code X_train_hot_val, X_hot_val, y_train_val, y_val = train_test_split(X_train_hot, y_train, test_size=0.2, random_state=42) from sklearn.linear_model import SGDRegressor, Lasso, Ridge, ARDRegression, ElasticNet, HuberRegressor from sklearn.preprocessing import MinMaxScaler, StandardScaler from sklearn.feature_selection import SelectKBest, mutual_info_regression, r_regression, f_regression from sklearn.pipeline import Pipeline, make_pipeline ###Output _____no_output_____ ###Markdown Trying Different Models$$MSE = \frac{1}{N}\sum_{i=1}^{N}\left(a(x^{(i)})-y^{(i)}\right)^2$$$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|a(x^{(i)})-y^{(i)}\right|$$$$Huber = \frac{1}{N}\sum_{i=1}^{N} \phi_{\varepsilon}\left(a(x^{(i)})-y^{(i)}\right), $$where$$\phi_{\varepsilon}(z) =\begin{cases}\dfrac{1}{2} z^2, & |z|<\varepsilon,\\\varepsilon\left( |z|-\dfrac{1}{2}\varepsilon\right), & |z|\geq \varepsilon. 
\end{cases}$$$l_2$ regularization$$\|w\|_2^2 = \sum_{k=1}^{D} w_k^2$$$l_1$ regularization$$\|w\|_1 = \sum_{k=1}^{D} |w_k|$$Elastic$$a\|w\|_1 + \dfrac{1}{2}b\|w\|_2^2$$ First model is using ridge regression after selecting 9 features with larger correlation ###Code pipe1 = Pipeline([('select', SelectKBest(r_regression, k=9)), ('sc', StandardScaler()), ('reg', Ridge())]) ###Output _____no_output_____ ###Markdown We test it on the validation set ###Code pipe1.fit(X_train_hot_val, y_train_val) pipe1.score(X_hot_val, y_val) pipe1[:-1].get_feature_names_out() ###Output _____no_output_____ ###Markdown Second model is using ridge regression with all fetures ###Code pipe2 = Pipeline([('sc', StandardScaler()), ('reg', Ridge())]) pipe2.fit(X_train_hot_val, y_train_val) pipe2.score(X_hot_val, y_val) ###Output _____no_output_____ ###Markdown Coefficient of Determination$$R^2 = 1-\frac{\sum\limits_{i=1}^{N}\left(a(x^{(i)})-y^{(i)}\right)^2}{\sum\limits_{i=1}^{N}\left(y^{(i)}-\bar{y}\right)^2}$$**Why do we need the scaler?** Cross-Validation ###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(pipe1, X_train_hot, y_train, cv=10) print(scores, scores.mean()) ###Output [0.59754886 0.57903001 0.55392081 0.50736668 0.57545052 0.59679177 0.55993243 0.5518245 0.56911289 0.55092614] 0.5641904606442619 ###Markdown Correlation analyzis and cross-validation ###Code corr_select = [] for r in range(1,X_train_hot.shape[1]+1): pipe1 = Pipeline([('sc', StandardScaler()), ('reg', Ridge())]) corr_select.append(cross_val_score(pipe1, SelectKBest(r_regression, k=r).fit_transform(X_train_hot, y_train), y_train, cv=10).mean()) plt.plot(list(range(1, X_train_hot.shape[1]+1)), corr_select) plt.xlabel('Number of features') plt.ylabel('Score $R^2$') ###Output _____no_output_____ ###Markdown Mutual Information$$I(X;Y) = \sum_{y\in Y}\sum_{x\in X} p(x,y) \log\frac{p(x,y)}{p(x)p(y)}$$ ###Code MI_select = [] for r in range(1, X_train_hot.shape[1]+1): pipe2 = Pipeline([('sc', StandardScaler()), ('reg', Ridge())]) MI_select.append(cross_val_score(pipe2, SelectKBest(mutual_info_regression, k=r).fit_transform(X_train_hot, y_train), y_train, cv=10).mean()) plt.plot(list(range(1, X_train_hot.shape[1]+1)), MI_select) ###Output _____no_output_____ ###Markdown We can use the better number of the features selected by MI ###Code selector_MI = SelectKBest(mutual_info_regression, k=9) selector_MI.fit_transform(X_train_hot, y_train) selector_MI.get_feature_names_out() ###Output _____no_output_____ ###Markdown Recursive Feature Elimination ###Code from sklearn.feature_selection import RFE Rec_select = [] for r in range(1, X_train_hot.shape[1]+1): estimator = Lasso() scaler = StandardScaler() selector = RFE(estimator, n_features_to_select=r) selector.fit(scaler.fit_transform(X_train_hot), y_train) Rec_select.append(cross_val_score(estimator, scaler.fit_transform(X_train_hot[selector.get_feature_names_out(X_train_hot.columns.values)]), y_train, cv=10).mean()) plt.plot(list(range(1, X_train_hot.shape[1]+1)), Rec_select) estimator = Ridge() scaler = StandardScaler() selector_Rec = RFE(estimator, n_features_to_select=10) #X_train_hot_scaled = pd.DataFrame(scaler.fit_transform(X_train_hot), columns=X_train_hot.columns.values) selector_Rec.fit(scaler.fit_transform(X_train_hot), y_train) selector_Rec.get_feature_names_out(X_train_hot.columns.values) ###Output _____no_output_____ ###Markdown Creating New Features ###Code #sns.pairplot(pd.concat([housing_hot, y_train], axis=1)) ###Output _____no_output_____ ###Markdown Model 
Selection Via Grid Search ###Code pipe3 = Pipeline([('select', SelectKBest()),('sc', StandardScaler()), ('reg', Lasso())]) pipe3.get_params() from sklearn.model_selection import GridSearchCV param_grid = {'select__score_func': [mutual_info_regression, f_regression], 'select__k': [8, 9], 'reg__alpha': [0.1, 0.5], } #'reg__penalty': ['l2', 'l1']} for SGDRegressor(max_iter=700,alpha=0.2) grid_search = GridSearchCV(pipe3, param_grid, cv=5) grid_search.fit(X_train_hot, y_train) grid_search.best_estimator_ ###Output _____no_output_____ ###Markdown Final training and testing ###Code best_model = SGDRegressor(alpha=0.001, penalty='elasticnet') best_features = selector_MI.get_feature_names_out().tolist() pipe = Pipeline([('scaler', StandardScaler()), ('model', best_model)]) pipe.fit(X_train_hot[best_features],y_train) X_test_hot = pd.DataFrame(transform.fit_transform(X_test), columns=transform.get_feature_names_out()) ###Output _____no_output_____ ###Markdown Predict for all test records ###Code y_predict = pipe.predict(X_test_hot[best_features]) y_predict[0] ###Output _____no_output_____ ###Markdown Choose one record from the DataFrame (use doble brackets) ###Code X_test.iloc[[0]] ###Output _____no_output_____ ###Markdown Transform it if needed ###Code X_transformed = pd.DataFrame(transform.transform(X_test.iloc[[0]]), columns=transform.get_feature_names_out()) X_transformed ###Output _____no_output_____ ###Markdown Make a prediction for this record ###Code pipe.predict(X_transformed[best_features]), y_test.iloc[0] ###Output _____no_output_____ ###Markdown You could do it by using weights $w_i$ $$a(x) = w_0 + w_1 x_1+\ldots + w_d x_d = w^T \tilde{x},$$where$$\tilde{x} = (1,\ x_1,\ x_2, \ldots,\ x_d)^T$$ ###Code pipe[-1].coef_ pipe[-1].intercept_ pipe[0].transform(X_transformed[best_features]).dot(pipe[-1].coef_) + pipe[-1].intercept_ ###Output _____no_output_____ ###Markdown Summary* Split Train/Validation/Test* or Train/Test and use cross-validation* Feature selection (by correlation/MI or by model)* and probably new features* Regression model selection + its parameters by grid-search* Calculate chosen metric (MSE, MAE, or $R^2$) on the Test set ###Code ###Output _____no_output_____ ###Markdown Regression for ClassificationWe can use $$\operatorname{sign}(a(x)) = \operatorname{sign}\left(w^T x\right) $$to classify our data labeled $y=1$ or $y=-1$.The line (hyperplane)$$w^T x =0$$is called a *decision boundary.***How can we interprete $w^T x$?**$$M = y w^T x$$$M>0$ for correct classification and $M<0$ otherwise. ###Code Data = pd.read_csv('https://raw.githubusercontent.com/anton-selitskiy/The-Art-of-ML/main/scoring.csv') #source: https://github.com/nadiinchi/voronovo_seminar_materials/tree/master/base_track/seminars Data.head() Data.info() Data['target'].value_counts() ###Output _____no_output_____ ###Markdown Because classes are balanced, we can use accuracy as the mertic. 
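As a quick reference point for that choice, the accuracy of the trivial classifier that always predicts the most frequent class can be read off the value counts shown above; with a balanced target it is close to 0.5, which is the number any real model has to beat:

```python
# Accuracy of always predicting the majority class (the baseline to beat).
majority_baseline = Data['target'].value_counts(normalize=True).max()
print(majority_baseline)
```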
###Code Data['target'] = Data['target'].map({0: -1, 1: 1}) Data.head() X = Data[Data.columns[:-2]] #Don't take into account the thext column y = Data['target'] from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) X_train_val, X_val, y_train_val, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=42) from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LinearRegression from sklearn.pipeline import Pipeline pipe = Pipeline([('scaler', StandardScaler()), ('model', LinearRegression())]) pipe.fit(X_train_val, y_train_val) y_predict = pipe.predict(X_train_val) from sklearn.metrics import accuracy_score y_predict_val = pipe.predict(X_val) accuracy_score(y_val, np.sign(y_predict_val)) ###Output _____no_output_____ ###Markdown **Task** Create table with features and importance ###Code pd.DataFrame({'feature': X.columns, 'weight': pipe[-1].coef_}).sort_values('weight') ###Output _____no_output_____ ###Markdown Calculate precision and recall for the threshold $th = 0$ ###Code from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, precision_score, recall_score, f1_score CM = confusion_matrix(y_train_val, np.sign(y_predict)) CM ConfusionMatrixDisplay.from_predictions(y_train_val, np.sign(y_predict)) precision_score(y_train_val, np.sign(y_predict)) 134/(134+59) ###Output _____no_output_____ ###Markdown Precision - Recall Curve Comment: we could do model selection using only accuracy or f1 measure, but my goal is to make you familiar with area under the curve (AUC) measure of quality. ###Code pd.DataFrame({'prediction': y_predict[:6], 'target': y_train_val[:6].to_numpy()}).sort_values('prediction', ascending=False) ###Output _____no_output_____ ###Markdown **Right a function to calculate Precision curve** ###Code def prec(y, y_p): alpha = np.linspace(y_p.min(), y_p.max(), num=50)[1:-1] a = [] for th in alpha: a.append(precision_score(y, np.sign(y_p-th))) return alpha, a def rec(y, y_p): alpha = np.linspace(y_p.min(), y_p.max(), num=50)[1:-1] a = [] for th in alpha: a.append(recall_score(y, np.sign(y_p-th))) return alpha, a def f1(y, y_p): alpha = np.linspace(y_p.min(), y_p.max(), num=50)[1:-1] a = [] for th in alpha: a.append(f1_score(y, np.sign(y_p-th))) return alpha, a alpha, p = prec(y_train_val.to_numpy(), y_predict) alpha, r = rec(y_train_val.to_numpy(), y_predict) alpha, f1 = f1(y_train_val.to_numpy(), y_predict) plt.plot(alpha, p, label='Precision') plt.plot(alpha, r, label='Recall') plt.plot(alpha, f1, label='f1') plt.xlabel('threshold') plt.legend() ###Output _____no_output_____ ###Markdown If we want to maximize both precision and recall, we can use $F$-measure$$F_{\beta} = (1+\beta)^2 \frac{precision\times recall}{\beta^2 precision+recall}$$\$$F_1 = \frac{2}{\dfrac{1}{precision}+\dfrac{1}{recall}} = 2\frac{precision\times recall}{precision+recall}$$**Find maximum f1** ###Code np.argmax(f1) alpha[np.argmax(f1)] ConfusionMatrixDisplay.from_predictions(y_train_val, np.sign(y_predict-alpha[np.argmax(f1)])) plt.plot(r,p) from sklearn.metrics import precision_recall_curve, PrecisionRecallDisplay precision, recall, _ = precision_recall_curve(y_train_val, y_predict) disp = PrecisionRecallDisplay(precision=precision, recall=recall) disp.plot() ###Output _____no_output_____ ###Markdown ROC curve$$TPR = \frac{TP}{TP+FN}$$$$FPR = \frac{FP}{FP+TN}$$ ###Code from sklearn.metrics import roc_curve fpr, tpr, thresholds = roc_curve(y_train_val, y_predict) from 
sklearn.metrics import RocCurveDisplay RocCurveDisplay.from_predictions(y_train_val, y_predict) ###Output _____no_output_____ ###Markdown Let's try another model: Logistic Regression ###Code from sklearn.linear_model import LogisticRegression pipe1 = Pipeline([('scaler', StandardScaler()), ('model', LogisticRegression())]) pipe1.fit(X_train_val, y_train_val) y_predict1 = pipe1.predict(X_train_val) y_predict1_proba = pipe1.predict_proba(X_train_val) y_val_predict1 = pipe1.predict(X_val) accuracy_score(y_val, y_val_predict1) RocCurveDisplay.from_predictions(y_train_val, y_predict1_proba[:,-1]) ###Output _____no_output_____ ###Markdown Let's try SVM classifier ###Code from sklearn.svm import SVC pipe2 = Pipeline([('scaler', StandardScaler()), ('model', SVC(probability=True))]) pipe2.fit(X_train_val,y_train_val) y_predict2_proba = pipe2.predict_proba(X_train_val) RocCurveDisplay.from_predictions(y_train_val, y_predict2_proba[:,-1]) y_val_predict2 = pipe2.predict(X_val) accuracy_score(y_val, y_val_predict2) ###Output _____no_output_____ ###Markdown We should prefer the last model. Let's see there performance on the Test: ###Code y_test_predict1 = pipe1.predict(X_test) accuracy_score(y_test, y_test_predict1) y_test_predict2 = pipe2.predict(X_test) accuracy_score(y_test, y_test_predict2) ###Output _____no_output_____ ###Markdown Loss Function for Classification$$L = \frac{1}{N} \sum_{i=1}^{N} [y^{(i)} \ne \operatorname{sign}(w^T x) ] = \frac{1}{N} \sum_{i=1}^{N} [y^{(i)}w^T x <0]$$ ###Code x = np.arange(-2,2,0.01).tolist() Los = list(map(lambda x: 0 if x>0 else 1, x)) plt.plot(x,Los) plt.title('Loss on one object for hard classification') plt.xlabel('Margin') plt.ylabel('Loss') ###Output _____no_output_____ ###Markdown $$L = \frac{1}{N} \sum_{i=1}^{N} [y^{(i)}w^T x <0] \leq \frac{1}{N} \sum_{i=1}^{N} \log \left(1 + e^{-y^{(i)}w^T x}\right)$$ ###Code Log_Los = list(map(lambda x: np.log2(1+np.exp(-x)), x)) plt.plot(x,Los, label='Loss') plt.plot(x,Log_Los, label='Log_Loss') plt.title('Loss on one object for hard classification') plt.xlabel('Margin') plt.ylabel('Loss') plt.legend() SVM_Los = list(map(lambda x: np.max([0, 1-x]), x)) plt.plot(x,Los, label='Loss') plt.plot(x,SVM_Los, label='SVM_Loss') plt.title('Loss on one object for hard classification') plt.xlabel('Margin') plt.ylabel('Loss') plt.legend() ###Output _____no_output_____ ###Markdown $$L = \frac{1}{N} \sum_{i=1}^{N} [y^{(i)}w^T x <0] \leq \frac{1}{N} \sum_{i=1}^{N} \max \left(0,\ 1 -y^{(i)}w^T x\right)$$ Logistic Regression as Probabilistic ModelConsider the function $\sigma\colon \mathbb{R}\to [0;\ 1]$$$\sigma(x) = \frac{e^x}{1+e^x} = \frac{1}{1+e^{-x}}$$If we take $\sigma$ of the margin as probability, then maximum likelihood estimation takes form$$\prod_{i=1}^{N} \frac{1}{1+e^{-y^{(i)}w^T x^{(i)}}} \to \underset{w}{\max}$$It is possible to prove that it is a real probability, i.e., it coinsides with fractions of positive samples in bins. 
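A small numeric sketch (the margins below are made up) of why maximizing this product is the same as minimizing the logistic loss summed over the training set:

```python
import numpy as np

# Made-up margins M_i = y_i * w^T x_i for three training points.
margins = np.array([2.0, -0.5, 1.0])

likelihood = np.prod(1.0 / (1.0 + np.exp(-margins)))        # product of sigmoids
summed_log_loss = np.sum(np.log(1.0 + np.exp(-margins)))    # logistic loss from above

# Taking -log of the likelihood gives exactly the summed logistic loss.
assert np.isclose(-np.log(likelihood), summed_log_loss)
print(likelihood, summed_log_loss)
```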
###Code ###Output _____no_output_____ ###Markdown Calibration of Models ###Code from sklearn.svm import SVC from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler clf_SVM = make_pipeline(StandardScaler(), SVC(gamma='auto', probability=True)) clf_LogReg = make_pipeline(StandardScaler(), LogisticRegression(C=1)) #make_pipeline(StandardScaler(), LinearSVC(C=0.5, max_iter=5000, random_state=0, tol=1e-5)) from matplotlib.gridspec import GridSpec from sklearn.calibration import CalibratedClassifierCV, CalibrationDisplay clf_list = [ (clf_LogReg, "Logistic"), (clf_SVM, "SVM"), ] fig = plt.figure(figsize=(10, 10)) gs = GridSpec(4, 2) colors = plt.cm.get_cmap("Dark2") ax_calibration_curve = fig.add_subplot(gs[:2, :2]) calibration_displays = {} for i, (clf, name) in enumerate(clf_list): clf.fit(X, y) display = CalibrationDisplay.from_estimator( clf, X, y, n_bins=10, name=name, ax=ax_calibration_curve, color=colors(i), ) calibration_displays[name] = display ax_calibration_curve.grid() ax_calibration_curve.set_title("Calibration plots (Support Vector Machine)") ###Output _____no_output_____ ###Markdown 第10回目の授業中練習問題の解答例 イテレータ1. `range`を使って、1から10までを表示されたループを作りなさい。 ###Code for i in range (1,11): print(i) ###Output 1 2 3 4 5 6 7 8 9 10 ###Markdown 2. 下記のリストから一個ずつの数字が表示できるように、ループを作りなさい。`[1,2,3,4,5]` ###Code for number in [1, 2, 3, 4, 5]: print (number) ###Output 1 2 3 4 5 ###Markdown 3. `iter`と`next`関数を使って、2番目のリストの各数字を表示せよ。 ###Code I = iter([1,2,3,4,5]) print(next(I)) print(next(I)) print(next(I)) print(next(I)) print(next(I)) ###Output 1 2 3 4 5 ###Markdown 4. 下記のコマンドを試してみなさい。`range(10)` `iter(range(10))` ###Code range(10) iter(range(10)) ###Output _____no_output_____ ###Markdown 5. `enumerate` 関数を使って、下記のリストをループし、各インデクスと値を表示せよ。`L = [1,2,3,4,5]` ###Code L = [1,2,3,4,5] for index, value in enumerate(L): print(index, value) for i in range(len(L)): print(i, L[i]) ###Output 0 1 1 2 2 3 3 4 4 5 ###Markdown 6. 二つのリストがあり、リスト2の各値をリスト1の右側に表示しなさい。`L1 = [1,2,3,4,5]` `L2 = [6,7,8,9,10]` ヒント:`zip`の関数を使う。 ###Code L1 = [1,2,3,4,5] L2 = [6,7,8,9,10] for val_1, val_2 in zip(L1,L2): print(val_1,val_2) ###Output 1 6 2 7 3 8 4 9 5 10 ###Markdown 7.`*`を使っても、繰り返せる。例:`*range()` ###Code print(*range(10)) ###Output 0 1 2 3 4 5 6 7 8 9 ###Markdown 8. `itertools`関数は数学の関数にも使える。例:```from itertools import permutationsp = permutations (3)print(*p)```3の順列を計算し、表示させる。 ###Code from itertools import permutations p = permutations(range(3)) print(*p) ###Output (0, 1, 2) (0, 2, 1) (1, 0, 2) (1, 2, 0) (2, 0, 1) (2, 1, 0) ###Markdown 9. `zip`関数を辞書で使いましょう。下記のリストから辞書を作りなさい。`names = ["Barney", "Robin", "Ted", "Lily", "Marshall"]` `age = [16, 20, 24, 18, 30]` ###Code names = ["Barney", "Robin", "Ted", "Lily", "Marshall"] age = [16, 20, 24, 18, 30] people = dict(zip(names, age)) print(people) ###Output {'Barney': 16, 'Robin': 20, 'Ted': 24, 'Lily': 18, 'Marshall': 30} ###Markdown 10. 9番目で作った辞書をタプルを作って、解凍しなさい。 ###Code cast = (("Barney", 16), ("Robin", 20), ("Ted", 24), ("Lily", 18), ("Marshall", 30)) # define names and heights here names, age = zip(*cast) print(names) print(age) ###Output ('Barney', 'Robin', 'Ted', 'Lily', 'Marshall') (16, 20, 24, 18, 30) ###Markdown リスト内包表記1. 
ループを使って1から200までの偶数リストを作りなさい。 ###Code numbers = [] for number in range(1,200): if number % 2 == 0: numbers.append(number) print(numbers) ###Output [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100, 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138, 140, 142, 144, 146, 148, 150, 152, 154, 156, 158, 160, 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, 188, 190, 192, 194, 196, 198] ###Markdown 2. リスト内包表記を使って、1番目と同じような結果が出るようにコードを書きなさい。 ###Code numbers = [number for number in range(1,200) if number % 2 == 0] print(numbers) ###Output [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100, 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138, 140, 142, 144, 146, 148, 150, 152, 154, 156, 158, 160, 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, 188, 190, 192, 194, 196, 198] ###Markdown 3. 以下の出力が出るように、リスト内包表記を使用して、コードを書きなさい。```[(0,0), (0,1), (1,0), (1,1)]``` ###Code L = [(i,j) for i in range(2) for j in range(2)] print(L) ###Output [(0, 0), (0, 1), (1, 0), (1, 1)] ###Markdown 4. リスト内包表記を使って、$n^2$の結果を表示しなさい。$n=1,...,10$とし、$n = n^2$ ###Code squares = [n ** 2 for n in range(11)] print(squares) ###Output [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100] ###Markdown 5. 下記の辞書に基づいて、リスト内包表記を使って、60以上のスコアを持っている人のリストを作りなさい。```scores = { "Tanaka": 20, "Kimura": 60, "Li": 89, "Kim": 78, "Albert": 73 }``` ###Code scores = { "Tanaka": 20, "Kimura": 60, "Li": 89, "Kim": 78, "Beth Smith": 98 } passed = [name for name, score in scores.items() if score >= 60] print(passed) ###Output ['Kimura', 'Li', 'Kim', 'Beth Smith'] ###Markdown ジェネレータ1. リスト内包表記とジェネレータを使用して、以下のリストを作りなさい。以下のリストは、1から40までの4倍数となる。```[4, 8, 12, 16, 20, 24, 28, 32, 36, 40]``` ###Code numbers = [n * 4 for n in range(1,11)] print(numbers) numbers = (n * 4 for n in range(1,11)) print(numbers) print (list(numbers)) ###Output <generator object <genexpr> at 0x7f6b30ff0fc0> [4, 8, 12, 16, 20, 24, 28, 32, 36, 40] ###Markdown 2. ジェネレータを使って、1から20のループを10の後にいったん止めて、またループを続ける。出力結果は下記のようである。```1 2 3 4 5 6 7 8 9 10いったん止める11 12 13 14 15 16 17 18 19 20``` ###Code G = (n for n in range(1,21)) for n in G: print (n, end=' ') if n >= 10: break print("\nいったん止める") for n in G: print(n, end=' ') ###Output 1 2 3 4 5 6 7 8 9 10 いったん止める 11 12 13 14 15 16 17 18 19 20 ###Markdown 3. `yield`を使って、最初に書いたジェネレータ表記の二つ目のジェネレータ関数を書きなさい。出力結果は以下となります。```[4, 8, 12, 16, 20, 24, 28, 32, 36, 40][4, 8, 12, 16, 20, 24, 28, 32, 36, 40]```リスト内包表記を使って、コードを書き、上のジェネレータ関数と比較しなさい。 ###Code numbers_1 = (n * 4 for n in range(1,11)) def gen(): for n in range(1,11): yield n * 4 numbers_2 = gen() a = list(numbers_1) b = list(numbers_2) print(a) print(b) L1 = [n * 4 for n in range(1,11)] L2 = [] for n in range(1,11): L2.append(n * 4) print(L1) print(L2) ###Output [4, 8, 12, 16, 20, 24, 28, 32, 36, 40] [4, 8, 12, 16, 20, 24, 28, 32, 36, 40] ###Markdown 4. 100までの素数リストをジェネレータ関数を作りなさい。素数とは、1 より大きい自然数で、正の約数が 1 と自分自身のみであるもののことである。そして、`if`と`for`ループを使って100までの素数を表示しなさい。 ###Code def gen_primes(N): primes = set() for n in range(2, N): if all(n % p > 0 for p in primes): primes.add(n) yield n print(list(gen_primes(100))) primes = [] for possiblePrime in range(2, 101): # Assume number is prime until shown it is not. 
isPrime = True for num in range(2, possiblePrime): if possiblePrime % num == 0: isPrime = False if isPrime: primes.append(possiblePrime) print(primes) ###Output [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] ###Markdown Lecture 10 - eigenvalues and eigenvectorsAn eigenvector $\boldsymbol{x}$ and corrsponding eigenvalue $\lambda$ of a square matrix $\boldsymbol{A}$ satisfy$$\boldsymbol{A} \boldsymbol{x} = \lambda \boldsymbol{x}$$Rearranging this expression,$$\left( \boldsymbol{A} - \lambda \boldsymbol{I}\right) \boldsymbol{x} = \boldsymbol{0}$$The above equation has solutions (other than $\boldsymbol{x} = \boldsymbol{0}$) if$$\det \left( \boldsymbol{A} - \lambda \boldsymbol{I}\right) = 0$$Computing the determinant of an $n \times n$ matrix requires solution of an $n$th degree polynomial. It is known how to compute roots of polynomials up to and including degree four (e.g., see ). For matrices with $n > 4$, numerical methods must be used to compute eigenvalues and eigenvectors.An $n \times n$ will have $n$ eigenvalue/eigenvector pairs (eigenpairs). Computing eigenvalues with NumPyNumPy provides a function to compute eigenvalues and eigenvectors. To demonstrate how to compute eigpairs, we first create a $5 \times 5$ symmetric matrix: ###Code # Import NumPy and seed random number generator to make generated matrices deterministic import numpy as np np.random.seed(1) # Create a symmetric matrix with random entries A = np.random.rand(5, 5) A = A + A.T print(A) ###Output [[0.83404401 0.81266309 0.41930889 0.97280008 0.94750046] [0.81266309 0.37252042 1.03078023 0.81407228 1.50707831] [0.41930889 1.03078023 0.4089045 1.43680726 0.34081177] [0.97280008 0.81407228 1.43680726 0.28077388 0.8904241 ] [0.94750046 1.50707831 0.34081177 0.8904241 1.7527783 ]] ###Markdown We can compute the eigenvectors and eigenvalues using the NumPy function `linalg.eig` ###Code # Compute eigenvectors of A evalues, evectors = np.linalg.eig(A) print("Eigenvalues: {}".format(evalues)) print("Eigenvectors: {}".format(evectors)) ###Output Eigenvalues: [ 4.49901636 -1.34808792 -0.66778843 0.21610602 0.94977508] Eigenvectors: [[-0.40163425 -0.19049617 0.13563534 -0.88537464 -0.01076773] [-0.45887678 0.38587861 0.76218267 0.24145961 0.03611968] [-0.35255653 -0.62923828 0.03786448 0.30864498 -0.61892459] [-0.42177956 0.60360849 -0.53501774 -0.01546805 -0.41385451] [-0.57090098 -0.23350566 -0.33615908 0.24961576 0.66651049]] ###Markdown The $i$th column of `evectors` is the $i$th eigenvector. Symmetric matricesNote that the above eigenvalues and the eigenvectors are real valued. This is always the case for symmetric matrices. Another features of symmetric matrices is that the eigenvectors are orthogonal. We can verify this for the above matrix: We can also check that the second eigenpair is indeed an eigenpair (Python/NumPy use base 0, so the second eiegenpair has index 1): ###Code import itertools # Build pairs (0,0), (0,1), . . . (0, n-1), (1, 2), (1, 3), . . . 
pairs = itertools.combinations_with_replacement(range(len(evectors)), 2) # Compute dot product of eigenvectors x_{i} \cdot x_{j} for p in pairs: e0, e1 = p[0], p[1] print ("Dot product of eigenvectors {}, {}: {}".format(e0, e1, evectors[:, e0].dot(evectors[:, e1]))) print("Testing Ax and (lambda)x: \n {}, \n {}".format(A.dot(evectors[:,1]), evalues[1]*evectors[:,1])) ###Output Testing Ax and (lambda)x: [ 0.25680558 -0.5201983 0.84826852 -0.81371731 0.31478616], [ 0.25680558 -0.5201983 0.84826852 -0.81371731 0.31478616] ###Markdown Non-symmetric matricesIn general, the eigenvalues and eigenvectors of a non-symmetric, real-valued matrix are complex. We can see this by example: ###Code B = np.random.rand(5, 5) evalues, evectors = np.linalg.eig(B) print("Eigenvalues: {}".format(evalues)) print("Eigenvectors: {}".format(evectors)) ###Output Eigenvalues: [ 2.43827549+0.j -0.7356488 +0.j 0.95516424+0.j 0.20592847+0.25009345j 0.20592847-0.25009345j] Eigenvectors: [[-0.31712315+0.j -0.3448319 +0.j -0.48360642+0.j 0.42721133-0.23453391j 0.42721133+0.23453391j] [-0.50795246+0.j -0.55615526+0.j 0.29477162+0.j -0.25335016+0.19638993j -0.25335016-0.19638993j] [-0.46645352+0.j 0.01776781+0.j 0.53639466+0.j 0.38172586-0.30291015j 0.38172586+0.30291015j] [-0.52413653+0.j 0.45219892+0.j -0.62472838+0.j -0.55565352+0.j -0.55565352-0.j ] [-0.38615957+0.j 0.605791 +0.j 0.03506771+0.j -0.15322242+0.30005314j -0.15322242-0.30005314j]] ###Markdown Unlike symmetric matrices, the eigenvectors are in general not orthogonal, which we can test: ###Code # Compute dot product of eigenvectors x_{i} \cdot x_{j} pairs = itertools.combinations_with_replacement(range(len(evectors)), 2) for p in pairs: e0, e1 = p[0], p[1] print ("Dot product of eigenvectors {}, {}: {}".format(e0, e1, evectors[:, e0].dot(evectors[:, e1]))) ###Output Dot product of eigenvectors 0, 0: (0.9999999999999998+0j) Dot product of eigenvectors 0, 1: (-0.08737921508729499+0j) Dot product of eigenvectors 0, 2: (0.06733087339766464+0j) Dot product of eigenvectors 0, 3: (0.16556046975206345+4.450334363795272e-05j) Dot product of eigenvectors 0, 4: (0.16556046975206345-4.450334363795272e-05j) Dot product of eigenvectors 1, 1: (1+0j) Dot product of eigenvectors 1, 2: (-0.24890309336077585+0j) Dot product of eigenvectors 1, 3: (-0.3437183199465273+0.14803892356676399j) Dot product of eigenvectors 1, 4: (-0.3437183199465273-0.14803892356676399j) Dot product of eigenvectors 2, 2: (0.9999999999999998+0j) Dot product of eigenvectors 2, 3: (0.2652324961688288+0.019355072804240914j) Dot product of eigenvectors 2, 4: (0.2652324961688288-0.019355072804240914j) Dot product of eigenvectors 3, 3: (0.44927678773939966-0.6231089392651208j) Dot product of eigenvectors 3, 4: (1.0000000000000004+0j) Dot product of eigenvectors 4, 4: (0.44927678773939966+0.6231089392651208j) ###Markdown 第10回目の授業中練習問題の解答例 イテレータ1. `range`を使って、1から10までを表示されたループを作りなさい。 ###Code for i in range (1,11): print(i) ###Output 1 2 3 4 5 6 7 8 9 10 ###Markdown 2. 下記のリストから一個ずつの数字が表示できるように、ループを作りなさい。`[1,2,3,4,5]` ###Code for number in [1, 2, 3, 4, 5]: print (number) ###Output 1 2 3 4 5 ###Markdown 3. `iter`と`next`関数を使って、2番目のリストの各数字を表示せよ。 ###Code I = iter([1,2,3,4,5]) print(next(I)) print(next(I)) print(next(I)) print(next(I)) print(next(I)) ###Output 1 2 3 4 5 ###Markdown 4. 下記のコマンドを試してみなさい。`range(10)` `iter(range(10))` ###Code range(10) iter(range(10)) ###Output _____no_output_____ ###Markdown 5. 
`enumerate` 関数を使って、下記のリストをループし、各インデクスと値を表示せよ。`L = [1,2,3,4,5]` ###Code L = [1,2,3,4,5] for index, value in enumerate(L): print(index, value) for i in range(len(L)): print(i, L[i]) ###Output 0 1 1 2 2 3 3 4 4 5 ###Markdown 6. 二つのリストがあり、リスト2の各値をリスト1の右側に表示しなさい。`L1 = [1,2,3,4,5]` `L2 = [6,7,8,9,10]` ヒント:`zip`の関数を使う。 ###Code L1 = [1,2,3,4,5] L2 = [6,7,8,9,10] for val_1, val_2 in zip(L1,L2): print(val_1,val_2) ###Output 1 6 2 7 3 8 4 9 5 10 ###Markdown 7.`*`を使っても、繰り返せる。例:`*range()` ###Code print(*range(10)) ###Output 0 1 2 3 4 5 6 7 8 9 ###Markdown 8. `itertools`関数は数学の関数にも使える。例:```from itertools import permutationsp = permutations (3)print(*p)```3の順列を計算し、表示させる。 ###Code from itertools import permutations p = permutations(range(3)) print(*p) ###Output (0, 1, 2) (0, 2, 1) (1, 0, 2) (1, 2, 0) (2, 0, 1) (2, 1, 0) ###Markdown 9. `zip`関数を辞書で使いましょう。下記のリストから辞書を作りなさい。`names = ["Barney", "Robin", "Ted", "Lily", "Marshall"]` `age = [16, 20, 24, 18, 30]` ###Code names = ["Barney", "Robin", "Ted", "Lily", "Marshall"] age = [16, 20, 24, 18, 30] people = dict(zip(names, age)) print(people) ###Output {'Barney': 16, 'Robin': 20, 'Ted': 24, 'Lily': 18, 'Marshall': 30} ###Markdown 10. 9番目で作った辞書をタプルを作って、解凍しなさい。 ###Code cast = (("Barney", 16), ("Robin", 20), ("Ted", 24), ("Lily", 18), ("Marshall", 30)) # define names and heights here names, age = zip(*cast) print(names) print(age) ###Output ('Barney', 'Robin', 'Ted', 'Lily', 'Marshall') (16, 20, 24, 18, 30) ###Markdown リスト内包表記1. ループを使って1から200までの偶数リストを作りなさい。 ###Code numbers = [] for number in range(1,200): if number % 2 == 0: numbers.append(number) print(numbers) ###Output [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100, 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138, 140, 142, 144, 146, 148, 150, 152, 154, 156, 158, 160, 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, 188, 190, 192, 194, 196, 198] ###Markdown 2. リスト内包表記を使って、1番目と同じような結果が出るようにコードを書きなさい。 ###Code numbers = [number for number in range(1,200) if number % 2 == 0] print(numbers) ###Output [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100, 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138, 140, 142, 144, 146, 148, 150, 152, 154, 156, 158, 160, 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, 188, 190, 192, 194, 196, 198] ###Markdown 3. 以下の出力が出るように、リスト内包表記を使用して、コードを書きなさい。```[(0,0), (0,1), (1,0), (1,1)]``` ###Code L = [(i,j) for i in range(2) for j in range(2)] print(L) ###Output [(0, 0), (0, 1), (1, 0), (1, 1)] ###Markdown 4. リスト内包表記を使って、$n^2$の結果を表示しなさい。$n=1,...,10$とし、$n = n^2$ ###Code squares = [n ** 2 for n in range(11)] print(squares) ###Output [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100] ###Markdown 5. 下記の辞書に基づいて、リスト内包表記を使って、60以上のスコアを持っている人のリストを作りなさい。```scores = { "Tanaka": 20, "Kimura": 60, "Li": 89, "Kim": 78, "Albert": 73 }``` ###Code scores = { "Tanaka": 20, "Kimura": 60, "Li": 89, "Kim": 78, "Beth Smith": 98 } passed = [name for name, score in scores.items() if score >= 60] print(passed) ###Output ['Kimura', 'Li', 'Kim', 'Beth Smith'] ###Markdown ジェネレータ1. 
リスト内包表記とジェネレータを使用して、以下のリストを作りなさい。以下のリストは、1から40までの4倍数となる。```[4, 8, 12, 16, 20, 24, 28, 32, 36, 40]``` ###Code numbers = [n * 4 for n in range(1,11)] print(numbers) numbers = (n * 4 for n in range(1,11)) print(numbers) print (list(numbers)) ###Output <generator object <genexpr> at 0x7f6b30ff0fc0> [4, 8, 12, 16, 20, 24, 28, 32, 36, 40] ###Markdown 2. ジェネレータを使って、1から20のループを10の後にいったん止めて、またループを続ける。出力結果は下記のようである。```1 2 3 4 5 6 7 8 9 10いったん止める11 12 13 14 15 16 17 18 19 20``` ###Code G = (n for n in range(1,21)) for n in G: print (n, end=' ') if n >= 10: break print("\nいったん止める") for n in G: print(n, end=' ') ###Output 1 2 3 4 5 6 7 8 9 10 いったん止める 11 12 13 14 15 16 17 18 19 20 ###Markdown 3. `yield`を使って、最初に書いたジェネレータ表記の二つ目のジェネレータ関数を書きなさい。出力結果は以下となります。```[4, 8, 12, 16, 20, 24, 28, 32, 36, 40][4, 8, 12, 16, 20, 24, 28, 32, 36, 40]```リスト内包表記を使って、コードを書き、上のジェネレータ関数と比較しなさい。 ###Code numbers_1 = (n * 4 for n in range(1,11)) def gen(): for n in range(1,11): yield n * 4 numbers_2 = gen() a = list(numbers_1) b = list(numbers_2) print(a) print(b) L1 = [n * 4 for n in range(1,11)] L2 = [] for n in range(1,11): L2.append(n * 4) print(L1) print(L2) ###Output [4, 8, 12, 16, 20, 24, 28, 32, 36, 40] [4, 8, 12, 16, 20, 24, 28, 32, 36, 40] ###Markdown 4. 100までの素数リストをジェネレータ関数を作りなさい。素数とは、1 より大きい自然数で、正の約数が 1 と自分自身のみであるもののことである。そして、`if`と`for`ループを使って100までの素数を表示しなさい。 ###Code def gen_primes(N): primes = set() for n in range(2, N): if all(n % p > 0 for p in primes): primes.add(n) yield n print(list(gen_primes(100))) primes = [] for possiblePrime in range(2, 101): # Assume number is prime until shown it is not. isPrime = True for num in range(2, possiblePrime): if possiblePrime % num == 0: isPrime = False if isPrime: primes.append(possiblePrime) print(primes) ###Output [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]
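One possible refinement of the second solution above (a sketch, not part of the original exercise): trial division only needs to test prime divisors up to the square root of the candidate, which produces the same list with far fewer checks:

```python
def primes_up_to(n):
    """Primes <= n by trial division against smaller primes up to sqrt(candidate)."""
    primes = []
    for candidate in range(2, n + 1):
        if all(candidate % p != 0 for p in primes if p * p <= candidate):
            primes.append(candidate)
    return primes

print(primes_up_to(100))   # [2, 3, 5, 7, ..., 97], matching the output above
```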
temp/Wikipathways+Access.ipynb
###Markdown This WAS an example to fetch KEGG pathways. Here we convert it to Wikipathways access.RequirementsIn addition to basic cyREST setup, you need to install the following Cytoscape App to run this workflow:Wikipathways appInput and OutputInput - Disease nameOutput - Cytoscape session file containing all KEGG pathways known to be related to the disease. ###Code import requests import json import pandas as pd import io from IPython.display import Image # Basic Setup PORT_NUMBER = 1234 BASE = 'http://localhost:' + str(PORT_NUMBER) + '/v1/' WP_API_URL = 'http://webservice.wikipathways.org/' # Header for posting data to the server as JSON HEADERS = {'Content-Type': 'application/json'} # Delete all networks in current session requests.delete(BASE + 'session') import xmltodict #import untangle # Find information about cancer from Wikipathways. query = 'cancer' res = requests.get(WP_API_URL + 'findPathwaysByText?query=' + query) pathway_list = res.content.decode('utf8') doc = xmltodict.parse(pathway_list) response = doc['ns1:findPathwaysByTextResponse'] print ('RESPONSE: ' + str(response.keys())) result = response['ns1:result'] print ('RESULT: ' + str(result['ns2:score'])) #for line in result.keys(): # print (line) #print (doc[ns1::result][ns2:score]) #disease_df = pd.read_csv(io.StringIO(pathway_list), delimiter='\t', header=None, names=['id', 'name']) #disease_df ###Output RESPONSE: odict_keys(['@xmlns:ns1', '@xmlns:ns2', 'ns1:result']) ###Markdown Piping the result to KEGG Pathway database to get list of related pathways ###Code disease_ids = disease_df['id'] disease_urls = disease_ids.apply(lambda x: KEGG_API_URL + 'get/' + x) def disease_parser(entry): lines = entry.split('\n') data = {} last_key = None for line in lines: if '///' in line: return data parts = line.split(' ') if parts[0] is not None and len(parts[0]) != 0: last_key = parts[0] data[parts[0]] = line.replace(parts[0], '').strip() else: last_val = data[last_key] data[last_key] = last_val + '|' + line.strip() return data result = [] for url in disease_urls: res = requests.get(url) rows = disease_parser(res.content.decode('utf8')) result.append(rows) disease_df = pd.DataFrame(result) pathways = disease_df['PATHWAY'].dropna().unique() p_urls = [] for pathway in pathways: entries = pathway.split('|') for en in entries: url = KEGG_API_URL + 'get/' + en.split(' ')[0].split('(')[0] + '/kgml' p_urls.append(url) def create_from_list(network_list): server_res = requests.post(BASE + 'networks?source=url&collection=' + query, data=json.dumps(network_list), headers=HEADERS) return server_res.json() url_list = list(set(p_urls)) pathway_suids = create_from_list(url_list) ###Output _____no_output_____
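The cells above parse the `findPathwaysByText` response with `xmltodict` but leave the DataFrame construction commented out, while the later cells still expect a `disease_df` built from KEGG. As a hedged sketch only (the element keys `ns2:id`, `ns2:name` and `ns2:score` are assumptions about how `xmltodict` labels the Wikipathways response, and `doc` is the parsed document from the cell above), the result could be tabulated like this:
```
# Sketch: tabulate the parsed findPathwaysByText response.
# The keys 'ns2:id', 'ns2:name', 'ns2:score' are assumed field names.
results = doc['ns1:findPathwaysByTextResponse']['ns1:result']
if not isinstance(results, list):
    results = [results]  # a single hit comes back as a dict, not a list

pathway_df = pd.DataFrame(
    [{'id': r.get('ns2:id'), 'name': r.get('ns2:name'), 'score': r.get('ns2:score')}
     for r in results])
pathway_df.head()
```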
docs/dialect/graphblas_dialect_tutorials/graphblas_lower/graphblas_matrix_apply.ipynb
###Markdown graphblas.matrix_applyThis example will go over how to use the `--graphblas-lower` pass from `graphblas-opt` to lower the `graphblas.matrix_apply` op.Let’s first import some necessary modules and generate an instance of our JIT engine. ###Code import mlir_graphblas import mlir_graphblas.sparse_utils import numpy as np engine = mlir_graphblas.MlirJitEngine() ###Output _____no_output_____ ###Markdown Here are the passes we'll use. ###Code passes = [ "--graphblas-lower", "--sparsification", "--sparse-tensor-conversion", "--linalg-bufferize", "--func-bufferize", "--tensor-bufferize", "--tensor-constant-bufferize", "--finalizing-bufferize", "--convert-linalg-to-loops", "--convert-scf-to-std", "--convert-std-to-llvm", ] ###Output _____no_output_____ ###Markdown Similar to our examples using the GraphBLAS dialect, we'll need some helper functions to convert sparse tensors to dense tensors. ###Code mlir_text = """ #trait_densify_csr = { indexing_maps = [ affine_map<(i,j) -> (i,j)>, affine_map<(i,j) -> (i,j)> ], iterator_types = ["parallel", "parallel"] } #CSR64 = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ], dimOrdering = affine_map<(i,j) -> (i,j)>, pointerBitWidth = 64, indexBitWidth = 64 }> func @csr_densify4x4(%argA: tensor<4x4xf64, #CSR64>) -> tensor<4x4xf64> { %output_storage = constant dense<0.0> : tensor<4x4xf64> %0 = linalg.generic #trait_densify_csr ins(%argA: tensor<4x4xf64, #CSR64>) outs(%output_storage: tensor<4x4xf64>) { ^bb(%A: f64, %x: f64): linalg.yield %A : f64 } -> tensor<4x4xf64> return %0 : tensor<4x4xf64> } """ ###Output _____no_output_____ ###Markdown Let's compile our MLIR code. ###Code engine.add(mlir_text, passes) ###Output _____no_output_____ ###Markdown Overview of graphblas.matrix_applyHere, we'll show how to use the `graphblas.matrix_apply` op. `graphblas.matrix_apply` takes 1 sparse matrix operand in CSR format, a [thunk](https://en.wikipedia.org/wiki/Thunk) operand, and an `apply_operator` attribute. `graphblas.matrix_apply` applies element-wise the function indicated by the `apply_operator` attribute to each element and the thunk. The result will be a CSR matrix.Here's an example use of the `graphblas.matrix_apply` op:```%answer = graphblas.matrix_apply %sparse_tensor, %thunk { apply_operator = "min" } : (tensor, f64) to tensor```The only currently supported option for the `apply_operator` attribute is "min".Note that `graphblas.matrix_apply` will fail if the given sparse matrix is not in CSR format.Let's create an example input CSR matrix. ###Code indices = np.array( [ [0, 3], [1, 3], [2, 0], [3, 0], ], dtype=np.uint64, ) values = np.array([111, 222, 333, 444], dtype=np.float64) sizes = np.array([4, 4], dtype=np.uint64) sparsity = np.array([False, True], dtype=np.bool8) csr_matrix = mlir_graphblas.sparse_utils.MLIRSparseTensor(indices, values, sizes, sparsity) dense_matrix = engine.csr_densify4x4(csr_matrix) dense_matrix ###Output _____no_output_____ ###Markdown graphblas.matrix_apply (Min)Here, we'll clip the values of a sparse matrix to be no higher than a given limit. 
###Code mlir_text = """ #CSR64 = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ], dimOrdering = affine_map<(i,j) -> (i,j)>, pointerBitWidth = 64, indexBitWidth = 64 }> module { func @clip(%sparse_tensor: tensor<?x?xf64, #CSR64>, %limit: f64) -> tensor<?x?xf64, #CSR64> { %answer = graphblas.matrix_apply %sparse_tensor, %limit { apply_operator = "min" } : (tensor<?x?xf64, #CSR64>, f64) to tensor<?x?xf64, #CSR64> return %answer : tensor<?x?xf64, #CSR64> } } """ engine.add(mlir_text, passes) sparse_result = engine.clip(csr_matrix, 200) engine.csr_densify4x4(sparse_result) ###Output _____no_output_____ ###Markdown The result looks sane. Let's verify that it has the same behavior as NumPy. ###Code expected_result = dense_matrix.copy() expected_result[expected_result>200] = 200 np.all(expected_result == engine.csr_densify4x4(sparse_result)) ###Output _____no_output_____
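A closing aside, outside the GraphBLAS dialect itself: if `graphblas.matrix_apply` with `min` acts only on the stored entries of the CSR matrix (an assumption about the sparse semantics; implicit zeros would stay implicit), then a SciPy sketch of the same clipping looks like the following. For the all-positive data and positive limit used above, it matches the dense NumPy check.
```
import numpy as np
import scipy.sparse as ss

# Rebuild the same 4x4 CSR matrix used above.
rows = np.array([0, 1, 2, 3])
cols = np.array([3, 3, 0, 0])
vals = np.array([111., 222., 333., 444.])
mat = ss.csr_matrix((vals, (rows, cols)), shape=(4, 4))

# Clip only the stored values at 200; unstored (implicit zero) entries are untouched.
clipped = mat.copy()
clipped.data = np.minimum(clipped.data, 200.0)
print(clipped.toarray())
```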
research/notebooks/BPC-GPC-LQR Comparison on Sine Noise.ipynb
###Markdown Defining the LDS ###Code import jax import jax.numpy as np import pandas as pd import numpy as onp import numpy.random as random import seaborn as sns import matplotlib.pyplot as plt from scipy.linalg import solve_discrete_are as dare from jax import jit, grad from tqdm import tqdm # LDS specification n, m, A, B = 2, 1, np.array([[1., 1.], [0., 1.]]), np.array([[0.], [1.]]) Q, R = np.eye(N = n), np.eye(N = m) x0, T = np.zeros((n, 1)), 1000 alg_name = ['No Control', 'LQR/H2Control', 'HinfControl', 'GPC', 'BPC', 'OGRWControl'] color_code = {'No Control': 'orange', 'LQR/H2Control': 'blue', 'HinfControl': 'green', 'GPC': 'red', 'BPC': 'purple', 'OGRWControl': 'black'} quad_cost = lambda x, u: np.sum(x.T @ Q @ x + u.T @ R @ u) # Func: Evaluate a given policy def evaluate(controller, W, cost_fn): x, loss = x0, [0. for _ in range(T)] for t in range(T): u = controller.act(x) loss[t] = cost_fn(x, u) x = A @ x + B @ u + W[t] return np.array(loss, dtype=np.float32) ###Output /Users/johnhallman/mlcourse/mlenv/lib/python3.6/site-packages/jax/lib/xla_bridge.py:120: UserWarning: No GPU/TPU found, falling back to CPU. warnings.warn('No GPU/TPU found, falling back to CPU.') ###Markdown No Control, LQR, H-inf, GPC ###Code # Run zero control class ZeroControl: def __init__(self): pass def act(self,x): return np.zeros((m, 1)) # Solve H2 Control class H2Control: def __init__(self, A, B, Q, R): P = dare(A, B, Q, R) self.K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A) def act(self, x): return -self.K @ x # Solve the non-stationary/finite-horizon version for H2 Control class H2ControlNonStat: def __init__(self, A, B, Q, R, T): n, m = B.shape P, self.K, self.t = [np.zeros((n,n)) for _ in range(T+1)], [np.zeros((m, n)) for _ in range(T)], 0 P[T] = Q for t in range(T-1, -1, -1): P[t] = Q + A.T @ P[t+1] @ A - A.T @ P[t+1] @ B @ np.linalg.inv(R + B.T @ P[t+1] @ B) @ B.T @ P[t+1] @ A self.K[t] = np.linalg.inv(R + B.T @ P[t] @ B) @ B.T @ P[t] @ A def act(self, x): u = -self.K[self.t] @ x self.t += 1 return u # Solve H2 Control for Random Walk class ExtendedH2Control: def __init__(self, A, B, Q, R, T): Aprime = onp.block([[A, np.eye(n)], [np.zeros((n,n)), np.eye(n)]]) Bprime = onp.block([[B], [np.zeros((n,m))]]) Qprime = onp.block([[Q, np.zeros((n,n))], [np.zeros((n,n)), np.zeros((n,n))]]) Rprime = R self.A, self.B = A, B self.H2 = H2ControlNonStat(Aprime, Bprime, Qprime, Rprime, T) self.x, self.u = np.zeros((n,1)), np.zeros((m,1)) def act(self, x): W = x - self.A @ self.x - self.B @ self.u self.x = x self.u = self.H2.act(onp.block([[x],[W]])) return self.u # Solve Hinf Control class HinfControl: def __init__(self, A, B, Q, R, T, gamma): P, self.K, self.W, self.t = [np.zeros((n, n)) for _ in range(T+1)], [np.zeros((m, n)) for _ in range(T)], [np.zeros((n,n)) for _ in range(T)], 0 P[T] = Q for t in range(T-1, -1, -1): P[t] = Q + A.T @ np.linalg.inv(np.linalg.inv(P[t+1]) + B @ np.linalg.inv(R) @ B.T - gamma**2 * np.eye(n)) @ A Lambda = np.eye(n) + (B @ np.linalg.inv(R) @ B.T - gamma**2 * np.eye(n)) @ P[t+1] self.K[t] = np.linalg.inv(R) @ B.T @ P[t+1] @ np.linalg.inv(Lambda) @ A self.W[t] = (gamma**2)*P[t+1] @ np.linalg.inv(Lambda) @ A def act(self, x): u = self.K[self.t] @ x self.t += 1 return u # GPC definition class GPC: def __init__(self, A, B, Q, R, x0, M, H, lr, cost_fn): n, m = B.shape self.lr, self.A, self.B, self.M = lr, A, B, M self.x, self.u, self.off, self.t = x0, np.zeros((m, 1)), np.zeros((m, 1)), 1 self.K, self.E, self.W = H2Control(A, B, Q, R).K, np.zeros((M, m, n)), np.zeros((H+M, n, 1)) def 
counterfact_loss(E, W): y = np.zeros((n, 1)) for h in range(H-1): v = -self.K @ y + np.tensordot(E, W[h : h + M], axes = ([0, 2], [0, 1])) y = A @ y + B @ v + W[h + M] v = -self.K @ y + np.tensordot(E, W[h : h + M], axes = ([0, 2], [0, 1])) cost = cost_fn(y, v) return cost self.grad = jit(grad(counterfact_loss)) def act(self, x): # 1. Get new noise self.W = jax.ops.index_update(self.W, 0, x - self.A @ self.x - self.B @ self. u) self.W = np.roll(self.W, -1, axis = 0) # 2. Get gradients delta_E = self.grad(self.E, self.W) # 3. Execute updates self.E -= self.lr * delta_E #self.off -= self.lr * delta_off # 4. Update x & t and get action self.x, self.t = x, self.t + 1 self.u = -self.K @ x + np.tensordot(self.E, self.W[-self.M:], axes = ([0, 2], [0, 1])) #+ self.off return self.u # BPC definition class BPC: def __init__(self, A, B, Q, R, x0, M, H, lr, delta, cost_fn): n, m = B.shape self.n, self.m = n, m self.lr, self.A, self.B, self.M = lr, A, B, M self.x, self.u, self.delta, self.t = x0, np.zeros((m, 1)), delta, 0 self.K, self.E, self.W = H2Control(A, B, Q, R).K, np.zeros((M, m, n)), np.zeros((M, n, 1)) self.cost_fn = cost_fn self.off = np.zeros((m, 1)) def _generate_uniform(shape, norm=1.00): v = random.normal(size=shape) v = norm * v / np.linalg.norm(v) return v self._generate_uniform = _generate_uniform self.eps = self._generate_uniform((M, M, m, n)) self.eps_off = self._generate_uniform((M, m, 1)) def act(self, x): # 1. Get new noise self.W = jax.ops.index_update(self.W, 0, x - self.A @ self.x - self.B @ self. u) self.W = np.roll(self.W, -1, axis = 0) # 2. Get gradient estimates delta_E = self.cost_fn(self.x, self.u) * np.sum(self.eps, axis = 0) # 3. Execute updates self.E -= self.lr * delta_E # 3. Ensure norm is good norm = np.linalg.norm(self.E) if norm > (1-self.delta): self.E *= (1-self.delta) / norm # 4. Get new eps (after parameter update (4) or ...?) self.eps = jax.ops.index_update(self.eps, 0, self._generate_uniform( shape = (self.M, self.m, self.n), norm = np.sqrt(1 - np.linalg.norm(self.eps[1:])**2))) self.eps = np.roll(self.eps, -1, axis = 0) # 5. 
Update x & t and get action self.x, self.t = x, self.t + 1 self.u = -self.K @ x + np.tensordot(self.E + self.delta * self.eps[-1], \ self.W[-self.M:], axes = ([0, 2], [0, 1])) return self.u ###Output _____no_output_____ ###Markdown Plot & repeat utils ###Code def benchmark(M, W, cost_fn = quad_cost, lr = 0.001, delta = 0.001, no_control = False, gamma = None, grw = False): global A, B, Q, R, T loss_zero = evaluate(ZeroControl(), W, cost_fn) if no_control else onp.full(T, np.nan, dtype=float) loss_h2 = evaluate(H2Control(A, B, Q, R), W, cost_fn) loss_hinf = evaluate(HinfControl(A, B, Q, R, T, gamma), W, cost_fn) if gamma else onp.full(T, np.nan, dtype=np.float32) loss_ogrw = evaluate(ExtendedH2Control(A, B, Q, R, T), W, cost_fn) if grw else onp.full(T, np.nan, dtype=np.float32) H, M = 3, M loss_gpc = evaluate(GPC(A, B, Q, R, x0, M, H, lr, cost_fn), W, cost_fn) loss_bpc = evaluate(BPC(A, B, Q, R, x0, M, H, lr, delta, cost_fn), W, cost_fn) return loss_zero, loss_h2, loss_hinf, loss_gpc, loss_bpc, loss_ogrw cummean = lambda x: np.cumsum(x)/(np.arange(T)+1) def to_dataframe(alg, loss, avg_loss): global T return pd.DataFrame(data = {'Algorithm': alg, 'Time': np.arange(T, dtype=np.float32), 'Instantaneous Cost': loss, 'Average Cost': avg_loss}) def repeat_benchmark(M, Wgen, rep, cost_fn = quad_cost, lr = 0.001, delta = 0.001, no_control = False, gamma = None, grw = False): all_data = [] for r in range(rep): loss = benchmark(M, Wgen(), cost_fn, lr, delta, no_control, gamma, grw) avg_loss = list(map(cummean, loss)) data = pd.concat(list(map(lambda x: to_dataframe(*x), list(zip(alg_name, loss, avg_loss))))) all_data.append(data) all_data = pd.concat(all_data) return all_data[all_data['Instantaneous Cost'].notnull()] def plot(title, data, scale = 'linear'): fig, axs = plt.subplots(ncols=2, figsize=(15,4)) axs[0].set_yscale(scale) sns.lineplot(x = 'Time', y = 'Instantaneous Cost', hue = 'Algorithm', data = data, ax = axs[0], ci = 'sd', palette = color_code).set_title(title) axs[1].set_yscale(scale) sns.lineplot(x = 'Time', y = 'Average Cost', hue = 'Algorithm', data = data, ax = axs[1], ci = 'sd', palette = color_code).set_title(title) ###Output _____no_output_____ ###Markdown Experiments ###Code # Sine perturbations Wgen = lambda: (np.sin(np.arange(T*m)/(2*np.pi)).reshape(T,m) @ np.ones((m, n))).reshape(T, n, 1) quad_cost = lambda x, u: np.sum(x.T @ Q @ x + u.T @ R @ u) # Time steps & Number of seeds/repetitions to test each method on! T = 1000 rep = 25 for M in [3, 6]: for lr in [0.007, 0.003, 0.001]: for delta in [0.5, 0.3, 0.1, 0.05, 0.01]: print("running M = {}, lr = {}, delta = {}".format(M, lr, delta)) data = repeat_benchmark(M, Wgen, rep=rep, cost_fn=quad_cost, lr = lr, delta = delta) plot('Sinusoidal Perturbations', data) specs = str(T) + "_" + str(M) + "_" + str(lr) + "_" + str(delta) plt.savefig("sin_quad_" + specs + ".pdf") """ # DONE! 
# gaussian random walk def Wgen(): W = random.normal(size = (T, n, 1), scale = 1/T**(0.5)) for i in range(1, T): W[i] = W[i] + W[i-1] return W T = 1000 for M in [3, 6, 10]: for lr in [0.007, 0.003, 0.001]: for delta in [0.05, 0.03, 0.01, 0.005, 0.001]: # gaussian random walk requires smaller deltas data = repeat_benchmark(M, Wgen, lr = lr, delta = delta) plot('Gaussian Random Walk Perturbations', data) specs = str(T) + "_" + str(M) + "_" + str(lr) + "_" + str(delta) plt.savefig("random_walk_quad_" + specs + ".pdf") """ # Defining non-quadratic hinge loss with sine noise Wgen = lambda: (np.sin(np.arange(T*m)/(2*np.pi)).reshape(T,m) @ np.ones((m, n))).reshape(T, n, 1) hinge_loss = lambda x, u: np.sum(np.abs(x)) + np.sum(np.abs(u)) T = 1000 rep = 25 for M in [3, 6, 10]: for lr in [0.007, 0.003, 0.001]: for delta in [0.5, 0.3, 0.1, 0.05, 0.01]: data = repeat_benchmark(M, Wgen, rep=rep, cost_fn=hinge_loss, lr = lr, delta = delta) plot('Sinusoidal Perturbations - Hinge Loss', data) specs = str(T) + "_" + str(M) + "_" + str(lr) + "_" + str(delta) plt.savefig("sin_hinge_" + specs + ".pdf") ###Output _____no_output_____
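As a usage sketch only (the perturbation generator and the hyperparameters below are illustrative and not part of the original experiments), the same `repeat_benchmark` and `plot` helpers could be pointed at a different noise process:
```
# Sketch: benchmark the controllers on i.i.d. uniform perturbations.
# Wgen_uniform and the lr/delta values here are illustrative choices.
def Wgen_uniform():
    # random here is numpy.random, as imported at the top of the notebook
    return random.uniform(low=-0.5, high=0.5, size=(T, n, 1))

data = repeat_benchmark(3, Wgen_uniform, rep=5, cost_fn=quad_cost, lr=0.003, delta=0.1)
plot('Uniform Perturbations', data)
```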
models/Credit_Card_analysis.ipynb
###Markdown Credit card default analysis: comparing multiple models ###Code import pandas as pd from sklearn.model_selection import learning_curve, train_test_split,GridSearchCV from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline from sklearn.metrics import accuracy_score from sklearn.svm import SVC from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.ensemble import AdaBoostClassifier from matplotlib import pyplot as plt import seaborn as sns # Load the data data = pd.read_csv('../data/Credit_Card/UCI_Credit_Card.csv') # Explore the data print(data.shape) # check the size of the dataset print(data.describe()) # dataset overview # Look at next month's default counts next_month = data['default.payment.next.month'].value_counts() print(next_month) df = pd.DataFrame({'default.payment.next.month': next_month.index,'values': next_month.values}) plt.rcParams['font.sans-serif']=['SimHei'] # use SimHei so the Chinese plot labels render correctly plt.figure(figsize = (6,6)) plt.title('信用卡违约率客户\n (违约:1,守约:0)') sns.set_color_codes("pastel") sns.barplot(x = 'default.payment.next.month', y="values", data=df) locs, labels = plt.xticks() plt.show() # Feature selection: drop the ID column and the label column data.drop(['ID'], inplace=True, axis =1) # the ID column carries no information target = data['default.payment.next.month'].values columns = data.columns.tolist() columns.remove('default.payment.next.month') features = data[columns].values # 30% as the test set, the rest as the training set train_x, test_x, train_y, test_y = train_test_split(features, target, test_size=0.30, stratify = target, random_state = 1) ###Output _____no_output_____ ###Markdown Tuning model parameters with GridSearchCV ###Code # Run GridSearchCV parameter tuning for a given classifier pipeline def GridSearchCV_work(pipeline, train_x, train_y, test_x, test_y, param_grid, score = 'accuracy'): response = {} gridsearch = GridSearchCV(estimator = pipeline, param_grid = param_grid, scoring = score) # find the best parameters and the best accuracy score search = gridsearch.fit(train_x, train_y) print("GridSearch最优参数:", search.best_params_) print("GridSearch最优分数: %0.4lf" %search.best_score_) predict_y = gridsearch.predict(test_x) print("准确率 %0.4lf" %accuracy_score(test_y, predict_y)) response['predict_y'] = predict_y response['accuracy_score'] = accuracy_score(test_y,predict_y) return response # Build the classifiers classifiers = [ SVC(random_state = 1, kernel = 'rbf'), DecisionTreeClassifier(random_state = 1, criterion = 'gini'), RandomForestClassifier(random_state = 1, criterion = 'gini'), KNeighborsClassifier(metric = 'minkowski'), AdaBoostClassifier(random_state = 1), ] # Classifier names classifier_names = [ 'svc', 'decisiontreeclassifier', 'randomforestclassifier', 'kneighborsclassifier', 'AdaBoostClassifier' ] # Classifier parameter grids classifier_param_grid = [ {'svc__C':[1], 'svc__gamma':[0.01]}, {'decisiontreeclassifier__max_depth':[6,9,11]}, {'randomforestclassifier__n_estimators':[3,5,6]} , {'kneighborsclassifier__n_neighbors':[4,6,8]}, {'AdaBoostClassifier__n_estimators':[40,50,60]}, ] for model, model_name, model_param_grid in zip(classifiers, classifier_names, classifier_param_grid): pipeline = Pipeline([ ('scaler', StandardScaler()), (model_name, model) ]) result = GridSearchCV_work(pipeline, train_x, train_y, test_x, test_y, model_param_grid , score = 'accuracy') ###Output E:\Anaconda3\envs\py36\lib\site-packages\sklearn\model_selection\_split.py:1978: FutureWarning: The default value of cv will change from 3 to 5 in version 0.22. Specify it explicitly to silence this warning. warnings.warn(CV_WARNING, FutureWarning)
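Accuracy alone can be a weak summary if defaults turn out to be the minority class. A possible follow-up, sketched here using the split and the same SVC pipeline and parameter grid already defined above, is to look at the confusion matrix and per-class metrics of one tuned model:
```
# Sketch: inspect one tuned pipeline with more than accuracy.
from sklearn.metrics import classification_report, confusion_matrix

pipeline = Pipeline([('scaler', StandardScaler()),
                     ('svc', SVC(random_state=1, kernel='rbf'))])
grid = GridSearchCV(pipeline, {'svc__C': [1], 'svc__gamma': [0.01]}, scoring='accuracy')
grid.fit(train_x, train_y)
pred_y = grid.predict(test_x)
print(confusion_matrix(test_y, pred_y))
print(classification_report(test_y, pred_y))
```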
module4-real-world-experiment-design/4.Lecture_RealWorldExperimentDesign.ipynb
###Markdown Lambda School Data Science*Unit 1, Sprint 3, Module 4*--- Lambda School Data Science Module 144 Real-world Experiment Design![Induction experiment](https://upload.wikimedia.org/wikipedia/commons/1/1c/Induction_experiment.png)[Induction experiment, Wikipedia](https://commons.wikimedia.org/wiki/File:Induction_experiment.png) Prepare - Learn about JavaScript and Google Analytics Python is great - but with web applications, it's impossible to avoid JavaScript. The lingua franca of the web, JavaScript runs in all browsers, and thus all front-end code must either be JS or transpiled to it. As a data scientist you don't have to learn JavaScript - but you do have to be aware of it, and being able to figure out snippets of it is an invaluable skill to connect your skills with real-world applications.So, we leave the warm comfort of Python, and venture to a bigger world - check out the [LambdaSchool/AB-Demo repo](https://github.com/LambdaSchool/AB-Demo) and [live experiment](https://lambdaschool.github.io/AB-Demo/) before class.Additionally, sign up for [Google Analytics](https://www.google.com/analytics) - if you're not sure on the steps or what "property" to give it, you can put a placeholder or wait until the live lecture. Google also has [Analytics documentation](https://support.google.com/analytics/) that is worth a look.Note - if you use any of the various tracker blocking techniques, it's quite likely you won't show up in Google Analytics. You'll have to disable them to be able to fully test your experiment. Live Lecture - Using Google Analytics with a live A/B test Again we won't do much Python here, but we'll put a few notes and results in the notebook as we go. Assignment - Set up your own A/B test! For a baseline, a straight fork of the Lambda School repo is OK. Getting that working with your own Analytics profile is already a task. But if you get through that, stretch goals:1. Explore Google Analytics - it's big and changes frequently, but powerful (can track conversions and events, flows, etc.)2. Customize the experiment to be more interesting/different (try colors!)3. Check out the various tools for setting up A/B experiments (e.g. [Optimizely](https://www.optimizely.com/) and [alternatives](https://alternativeto.net/software/optimizely/))4. Try to get enough traffic to actually have more real data (don't spam people, but do share with friends)5. If you do get more traffic, don't just apply a t-test - dig into the results and use both math and writing to describe your findingsAdditionally, today it is a good idea to go back and review the frequentist hypothesis testing material from the first two modules. And if you feel on top of things - you can use your newfound GitHub Pages and Google Analytics skills to build/iterate a portfolio page, and maybe even instrument it with Analytics! Lecture Notes: ###Code # import pandas library. import pandas as pd # read in the data set. df = pd.read_csv('Kevin_Hillstrom_MineThatData_E-MailAnalytics_DataMiningChallenge_2008.03.20.csv') # show the data frame shape. print(df.shape) # show the data set with headers. df.head() # check the data for NaN's. df.isna().sum() # were the tests 'segment' evenly split? df.segment.value_counts() # check the overall visit 'mean'. df.visit.mean() # use 'groupby' with 'segment' & 'visit.mean' to see the each 'test' and the visit rate. df.groupby('segment').visit.mean() # use 'groupby' with 'segment' & 'conversion.mean' to see the each 'test' and the conversion rate, use *100 to show %. 
df.groupby('segment').conversion.mean()*100 # use 'groupby' with 'segment' & 'spend.mean' to see each 'test' and the spend rate. df.groupby('segment').spend.mean() # use crosstab to put 'segment', 'visits', 'conversion' and 'spend' together. pd.crosstab(df['segment'], [df['visit'], df['conversion'], df['spend']]) # import numpy for numbers work. import numpy as np # create a pivot table for 'segment', 'visit', 'conversion', 'spend', using the 'mean' for each. pd.pivot_table(df,index=["segment"],values=["visit","conversion","spend"], aggfunc=[np.mean]) # create a line plot for the pivot table. pd.pivot_table(df,index=["segment"],values=["visit","conversion","spend"], aggfunc=[np.mean]).plot() ###Output _____no_output_____
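Tying the lecture notes back to the assignment's point about t-tests, here is a minimal sketch of a two-sample test on visit rates between two segments. The segment labels `'Womens E-Mail'` and `'No E-Mail'` are assumptions about how this dataset names its groups; substitute whatever `df.segment.value_counts()` actually shows.
```
# Sketch: compare visit rates between two segments with a two-sample t-test.
# Segment labels below are assumed; check df.segment.value_counts() first.
from scipy import stats

treated = df[df.segment == 'Womens E-Mail'].visit
control = df[df.segment == 'No E-Mail'].visit

# visit is 0/1, so each group's mean is its visit rate.
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(t_stat, p_value)
```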
深度学习/d2l-zh-1.1/1_chapter_prerequisite/.ipynb_checkpoints/autograd-checkpoint.ipynb
###Markdown Automatic differentiation In deep learning, we often need to compute the gradient of a function. This section introduces how to use the `autograd` module provided by MXNet to compute gradients automatically. If the mathematical concepts in this section (such as gradients) are unfamiliar, see the ["Mathematical Basics"](../chapter_appendix/math.ipynb) section in the appendix. ###Code from mxnet import autograd, nd ###Output _____no_output_____ ###Markdown A simple example Let's start with a simple example: computing the gradient of the function $y = 2\boldsymbol{x}^{\top}\boldsymbol{x}$ with respect to the column vector $\boldsymbol{x}$. First we create the variable `x` and give it an initial value. ###Code x = nd.arange(4).reshape((4, 1)) x ###Output _____no_output_____ ###Markdown To compute the gradient with respect to `x`, we first call the `attach_grad` function to allocate the memory needed to store the gradient. ###Code x.attach_grad() ###Output _____no_output_____ ###Markdown Next we define the function of `x`. To reduce computation and memory overhead, MXNet does not record the computations needed for computing gradients by default. We need to call the `record` function to ask MXNet to record the computations related to the gradient. ###Code with autograd.record(): y = 2 * nd.dot(x.T, x) ###Output _____no_output_____ ###Markdown Since the shape of `x` is (4, 1), `y` is a scalar. Next, we can compute the gradient automatically by calling the `backward` function. Note that if `y` is not a scalar, MXNet will by default first sum the elements of `y` to obtain a new variable, and then compute the gradient of that variable with respect to `x`. ###Code y.backward() ###Output _____no_output_____ ###Markdown The gradient of the function $y = 2\boldsymbol{x}^{\top}\boldsymbol{x}$ with respect to $\boldsymbol{x}$ should be $4\boldsymbol{x}$. Let's verify that the computed gradient is correct. ###Code assert (x.grad - 4 * x).norm().asscalar() == 0 x.grad ###Output _____no_output_____ ###Markdown Training mode and prediction mode As shown above, MXNet records and computes gradients after `record` is called. In addition, by default `autograd` also switches the running mode from prediction mode to training mode. This can be checked by calling the `is_training` function. ###Code print(autograd.is_training()) with autograd.record(): print(autograd.is_training()) ###Output False True ###Markdown In some cases the same model behaves differently in training mode and prediction mode. We will describe these differences in detail in later chapters (for example, the ["Dropout"](../chapter_deep-learning-basics/dropout.ipynb) section). Computing gradients through Python control flow One convenience of using MXNet is that even if the computation graph of a function contains Python control flow (such as conditionals and loops), we may still be able to compute the gradient with respect to a variable. Consider the program below, which contains Python conditionals and loops. Note that the number of iterations of the loop (the `while` loop) and the outcome of the conditional (the `if` statement) both depend on the value of the input `a`. ###Code def f(a): b = a * 2 while b.norm().asscalar() < 1000: b = b * 2 if b.sum().asscalar() > 0: c = b else: c = 100 * b return c ###Output _____no_output_____ ###Markdown As before, we use the `record` function to record the computation and call the `backward` function to compute the gradient. ###Code a = nd.random.normal(shape=1) a.attach_grad() with autograd.record(): c = f(a) c.backward() ###Output _____no_output_____ ###Markdown Let's analyze the function `f` defined above. In fact, for any given input `a`, its output must have the form `f(a) = x * a`, where the value of the scalar coefficient `x` depends on the input `a`. Since the gradient of `c = f(a)` with respect to `a` is `x`, whose value is `c / a`, we can verify the correctness of the control-flow gradient in this example as follows. ###Code a.grad == c / a ###Output _____no_output_____
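Returning to the note above that `backward` on a non-scalar `y` behaves as if the elements of `y` were first summed, a small check of that statement (a sketch added here, not part of the original section) could be:
```
# Sketch: backward() on a non-scalar y acts like y.sum().backward().
x = nd.arange(4).reshape((4, 1))
x.attach_grad()
with autograd.record():
    y = x * x          # shape (4, 1), not a scalar
y.backward()           # gradient of sum(x * x) with respect to x
print(x.grad)          # expect 2 * x, i.e. [[0.], [2.], [4.], [6.]]
```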
site/en/r2/tutorials/distribute/multi_worker.ipynb
###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Multi-worker Training in TensorFlow View on TensorFlow.org Run in Google Colab View source on GitHub OverviewThis tutorial demonstrates how `tf.distribute.Strategy` can be used for distributed multi-worker training with `tf.estimator`. If you write your code using `tf.estimator`, and you're interested in scaling beyond a single machine with high performance, this tutorial is for you.Before getting started, please read the [`tf.distribute.Strategy` guide](../../guide/distribute_strategy.ipynb). The [multi-GPU training tutorial](./keras.ipynb) is also relevant, because this tutorial uses the same model. SetupFirst, setup TensorFlow and the necessary imports. ###Code from __future__ import absolute_import, division, print_function !pip install tensorflow-gpu==2.0.0-alpha0 import tensorflow_datasets as tfds import tensorflow as tf import os, json ###Output _____no_output_____ ###Markdown Input FunctionThis tutorial uses the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The code here is similar to the [multi-GPU training tutorial](./keras.ipynb) with one key difference: when using Estimator for multi-worker training, it is necessary to shard the dataset by the number of workers to ensure model convergence. The input data is sharded by worker index, so that each worker processes 1/`num_workers` distinct portions of the dataset. ###Code BUFFER_SIZE = 10000 BATCH_SIZE = 64 def input_fn(mode, input_context=None): datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True) mnist_dataset = (datasets['train'] if mode == tf.estimator.ModeKeys.TRAIN else datasets['test']) def scale(image, label): image = tf.cast(image, tf.float32) image /= 255 return image, label if input_context: mnist_dataset = mnist_dataset.apply(tf.data.experimental.filter_for_shard( input_context.num_input_pipelines, input_context.input_pipeline_id)) return mnist_dataset.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) ###Output _____no_output_____ ###Markdown Another reasonable approach to achieve convergence would be to shuffle the dataset with distinct seeds at each worker. Multi-worker ConfigurationOne of the key differences in this tutorial, compared to multi-GPU training, is the multi-worker setup. The `TF_CONFIG` environment variable is the standard way to specify the cluster configuration to each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the entire cluster, namely the workers and parameter servers in the cluster. `task` provides information of the current task. In this example, the task `type` is `worker` and the task `index` is `0`. For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with a single worker on `localhost`. 
In practice, users would create multiple workers on an external IP address and port, and set `TF_CONFIG` on each worker appropriately, i.e. modify the task `index`.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```NUM_WORKERS = 1IP_ADDRS = ['localhost']PORTS = [12345]os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ['%s:%d' % (IP_ADDRS[w], PORTS[w]) for w in range(NUM_WORKERS)] }, 'task': {'type': 'worker', 'index': 0}})``` Define the modelWrite the layers, the optimizer, and the loss function for training. This tutorial defines the model with Keras layers, similar to the [multi-GPU training tutorial](./keras.ipynb). ###Code LEARNING_RATE = 1e-4 def model_fn(features, labels, mode): model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) logits = model(features, training=False) if mode == tf.estimator.ModeKeys.PREDICT: predictions = {'logits': logits} return tf.estimator.EstimatorSpec(labels=labels, predictions=predictions) optimizer = tf.compat.v1.train.GradientDescentOptimizer( learning_rate=LEARNING_RATE) loss = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True)(labels, logits) if mode == tf.estimator.ModeKeys.EVAL: return tf.estimator.EstimatorSpec(mode, loss=loss) return tf.estimator.EstimatorSpec( mode=mode, loss=loss, train_op=optimizer.minimize( loss, tf.compat.v1.train.get_or_create_global_step())) ###Output _____no_output_____ ###Markdown Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. MultiWorkerMirroredStrategyTo train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../guide/distribute_strategy.ipynb) has more details about this strategy. ###Code strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() ###Output _____no_output_____ ###Markdown `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train and evaluate the modelNext, specify the distribution strategy in the `RunConfig` for the estimator, and train and evaluate by invoking `tf.estimator.train_and_evaluate`. This tutorial distributes only the training by specifying the strategy via `train_distribute`. It is also possible to distribute the evaluation via `eval_distribute`. 
###Code config = tf.estimator.RunConfig(train_distribute=strategy) classifier = tf.estimator.Estimator( model_fn=model_fn, model_dir='/tmp/multiworker', config=config) tf.estimator.train_and_evaluate( classifier, train_spec=tf.estimator.TrainSpec(input_fn=input_fn), eval_spec=tf.estimator.EvalSpec(input_fn=input_fn) ) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Multi-worker Training in TensorFlow View on TensorFlow.org Run in Google Colab View source on GitHub OverviewThis tutorial demonstrates how `tf.distribute.Strategy` can be used for distributed multi-worker training with `tf.estimator`. If you write your code using `tf.estimator`, and you're interested in scaling beyond a single machine with high performance, this tutorial is for you.Before getting started, please read the [`tf.distribute.Strategy` guide](../../guide/distribute_strategy.ipynb). The [multi-GPU training tutorial](./keras.ipynb) is also relevant, because this tutorial uses the same model. SetupFirst, setup TensorFlow and the necessary imports. ###Code from __future__ import absolute_import, division, print_function, unicode_literals !pip install tensorflow-gpu==2.0.0-alpha0 import tensorflow_datasets as tfds import tensorflow as tf import os, json ###Output _____no_output_____ ###Markdown Input FunctionThis tutorial uses the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The code here is similar to the [multi-GPU training tutorial](./keras.ipynb) with one key difference: when using Estimator for multi-worker training, it is necessary to shard the dataset by the number of workers to ensure model convergence. The input data is sharded by worker index, so that each worker processes 1/`num_workers` distinct portions of the dataset. ###Code BUFFER_SIZE = 10000 BATCH_SIZE = 64 def input_fn(mode, input_context=None): datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True) mnist_dataset = (datasets['train'] if mode == tf.estimator.ModeKeys.TRAIN else datasets['test']) def scale(image, label): image = tf.cast(image, tf.float32) image /= 255 return image, label if input_context: mnist_dataset = mnist_dataset.shard(input_context.num_input_pipelines, input_context.input_pipeline_id) return mnist_dataset.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) ###Output _____no_output_____ ###Markdown Another reasonable approach to achieve convergence would be to shuffle the dataset with distinct seeds at each worker. Multi-worker ConfigurationOne of the key differences in this tutorial, compared to multi-GPU training, is the multi-worker setup. The `TF_CONFIG` environment variable is the standard way to specify the cluster configuration to each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the entire cluster, namely the workers and parameter servers in the cluster. 
`task` provides information of the current task. In this example, the task `type` is `worker` and the task `index` is `0`. For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with a single worker on `localhost`. In practice, users would create multiple workers on an external IP address and port, and set `TF_CONFIG` on each worker appropriately, i.e. modify the task `index`.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```NUM_WORKERS = 1IP_ADDRS = ['localhost']PORTS = [12345]os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ['%s:%d' % (IP_ADDRS[w], PORTS[w]) for w in range(NUM_WORKERS)] }, 'task': {'type': 'worker', 'index': 0}})``` Define the modelWrite the layers, the optimizer, and the loss function for training. This tutorial defines the model with Keras layers, similar to the [multi-GPU training tutorial](./keras.ipynb). ###Code LEARNING_RATE = 1e-4 def model_fn(features, labels, mode): model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) logits = model(features, training=False) if mode == tf.estimator.ModeKeys.PREDICT: predictions = {'logits': logits} return tf.estimator.EstimatorSpec(labels=labels, predictions=predictions) optimizer = tf.compat.v1.train.GradientDescentOptimizer( learning_rate=LEARNING_RATE) loss = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True)(labels, logits) if mode == tf.estimator.ModeKeys.EVAL: return tf.estimator.EstimatorSpec(mode, loss=loss) return tf.estimator.EstimatorSpec( mode=mode, loss=loss, train_op=optimizer.minimize( loss, tf.compat.v1.train.get_or_create_global_step())) ###Output _____no_output_____ ###Markdown Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. MultiWorkerMirroredStrategyTo train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distribute_strategy.ipynb) has more details about this strategy. ###Code strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() ###Output _____no_output_____ ###Markdown `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train and evaluate the modelNext, specify the distribution strategy in the `RunConfig` for the estimator, and train and evaluate by invoking `tf.estimator.train_and_evaluate`. 
This tutorial distributes only the training by specifying the strategy via `train_distribute`. It is also possible to distribute the evaluation via `eval_distribute`. ###Code config = tf.estimator.RunConfig(train_distribute=strategy) classifier = tf.estimator.Estimator( model_fn=model_fn, model_dir='/tmp/multiworker', config=config) tf.estimator.train_and_evaluate( classifier, train_spec=tf.estimator.TrainSpec(input_fn=input_fn), eval_spec=tf.estimator.EvalSpec(input_fn=input_fn) ) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Multi-worker Training in TensorFlow View on TensorFlow.org Run in Google Colab View source on GitHub OverviewThis tutorial demonstrates how `tf.distribute.Strategy` can be used for distributed multi-worker training with `tf.estimator`. If you write your code using `tf.estimator`, and you're interested in scaling beyond a single machine with high performance, this tutorial is for you.Before getting started, please read the [`tf.distribute.Strategy` guide](../../guide/distribute_strategy.ipynb). The [multi-GPU training tutorial](./keras.ipynb) is also relevant, because this tutorial uses the same model. SetupFirst, setup TensorFlow and the necessary imports. ###Code from __future__ import absolute_import, division, print_function !pip install tensorflow-gpu==2.0.0-alpha0 import tensorflow_datasets as tfds import tensorflow as tf import os, json ###Output _____no_output_____ ###Markdown Input FunctionThis tutorial uses the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The code here is similar to the [multi-GPU training tutorial](./keras.ipynb) with one key difference: when using Estimator for multi-worker training, it is necessary to shard the dataset by the number of workers to ensure model convergence. The input data is sharded by worker index, so that each worker processes 1/`num_workers` distinct portions of the dataset. ###Code BUFFER_SIZE = 10000 BATCH_SIZE = 64 def input_fn(mode, input_context=None): datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True) mnist_dataset = (datasets['train'] if mode == tf.estimator.ModeKeys.TRAIN else datasets['test']) def scale(image, label): image = tf.cast(image, tf.float32) image /= 255 return image, label if input_context: mnist_dataset = mnist_dataset.shard(input_context.num_input_pipelines, input_context.input_pipeline_id) return mnist_dataset.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) ###Output _____no_output_____ ###Markdown Another reasonable approach to achieve convergence would be to shuffle the dataset with distinct seeds at each worker. Multi-worker ConfigurationOne of the key differences in this tutorial, compared to multi-GPU training, is the multi-worker setup. 
The `TF_CONFIG` environment variable is the standard way to specify the cluster configuration to each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the entire cluster, namely the workers and parameter servers in the cluster. `task` provides information of the current task. In this example, the task `type` is `worker` and the task `index` is `0`. For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with a single worker on `localhost`. In practice, users would create multiple workers on an external IP address and port, and set `TF_CONFIG` on each worker appropriately, i.e. modify the task `index`.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```NUM_WORKERS = 1IP_ADDRS = ['localhost']PORTS = [12345]os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ['%s:%d' % (IP_ADDRS[w], PORTS[w]) for w in range(NUM_WORKERS)] }, 'task': {'type': 'worker', 'index': 0}})``` Define the modelWrite the layers, the optimizer, and the loss function for training. This tutorial defines the model with Keras layers, similar to the [multi-GPU training tutorial](./keras.ipynb). ###Code LEARNING_RATE = 1e-4 def model_fn(features, labels, mode): model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) logits = model(features, training=False) if mode == tf.estimator.ModeKeys.PREDICT: predictions = {'logits': logits} return tf.estimator.EstimatorSpec(labels=labels, predictions=predictions) optimizer = tf.compat.v1.train.GradientDescentOptimizer( learning_rate=LEARNING_RATE) loss = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True)(labels, logits) if mode == tf.estimator.ModeKeys.EVAL: return tf.estimator.EstimatorSpec(mode, loss=loss) return tf.estimator.EstimatorSpec( mode=mode, loss=loss, train_op=optimizer.minimize( loss, tf.compat.v1.train.get_or_create_global_step())) ###Output _____no_output_____ ###Markdown Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. MultiWorkerMirroredStrategyTo train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distribute_strategy.ipynb) has more details about this strategy. ###Code strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() ###Output _____no_output_____ ###Markdown `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. 
The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train and evaluate the modelNext, specify the distribution strategy in the `RunConfig` for the estimator, and train and evaluate by invoking `tf.estimator.train_and_evaluate`. This tutorial distributes only the training by specifying the strategy via `train_distribute`. It is also possible to distribute the evaluation via `eval_distribute`. ###Code config = tf.estimator.RunConfig(train_distribute=strategy) classifier = tf.estimator.Estimator( model_fn=model_fn, model_dir='/tmp/multiworker', config=config) tf.estimator.train_and_evaluate( classifier, train_spec=tf.estimator.TrainSpec(input_fn=input_fn), eval_spec=tf.estimator.EvalSpec(input_fn=input_fn) ) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Multi-worker Training in TensorFlow View on TensorFlow.org Run in Google Colab View source on GitHub OverviewThis tutorial demonstrates how `tf.distribute.Strategy` can be used for distributed multi-worker training with `tf.estimator`. If you write your code using `tf.estimator`, and you're interested in scaling beyond a single machine with high performance, this tutorial is for you.Before getting started, please read the [`tf.distribute.Strategy` guide](../../guide/distribute_strategy.ipynb). The [multi-GPU training tutorial](./keras.ipynb) is also relevant, because this tutorial uses the same model. SetupFirst, setup TensorFlow and the necessary imports. ###Code from __future__ import absolute_import, division, print_function, unicode_literals !pip install tensorflow-gpu==2.0.0-alpha0 import tensorflow_datasets as tfds import tensorflow as tf import os, json ###Output _____no_output_____ ###Markdown Input FunctionThis tutorial uses the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The code here is similar to the [multi-GPU training tutorial](./keras.ipynb) with one key difference: when using Estimator for multi-worker training, it is necessary to shard the dataset by the number of workers to ensure model convergence. The input data is sharded by worker index, so that each worker processes 1/`num_workers` distinct portions of the dataset. 
###Code BUFFER_SIZE = 10000 BATCH_SIZE = 64 def input_fn(mode, input_context=None): datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True) mnist_dataset = (datasets['train'] if mode == tf.estimator.ModeKeys.TRAIN else datasets['test']) def scale(image, label): image = tf.cast(image, tf.float32) image /= 255 return image, label if input_context: mnist_dataset = mnist_dataset.shard(input_context.num_input_pipelines, input_context.input_pipeline_id) return mnist_dataset.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) ###Output _____no_output_____ ###Markdown Another reasonable approach to achieve convergence would be to shuffle the dataset with distinct seeds at each worker. Multi-worker ConfigurationOne of the key differences in this tutorial, compared to multi-GPU training, is the multi-worker setup. The `TF_CONFIG` environment variable is the standard way to specify the cluster configuration to each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the entire cluster, namely the workers and parameter servers in the cluster. `task` provides information of the current task. In this example, the task `type` is `worker` and the task `index` is `0`.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with a single worker on `localhost`. In practice, users would create multiple workers on an external IP address and port, and set `TF_CONFIG` on each worker appropriately, i.e. modify the task `index`.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```NUM_WORKERS = 1IP_ADDRS = ['localhost']PORTS = [12345]os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ['%s:%d' % (IP_ADDRS[w], PORTS[w]) for w in range(NUM_WORKERS)] }, 'task': {'type': 'worker', 'index': 0}})``` Define the modelWrite the layers, the optimizer, and the loss function for training. This tutorial defines the model with Keras layers, similar to the [multi-GPU training tutorial](./keras.ipynb). ###Code LEARNING_RATE = 1e-4 def model_fn(features, labels, mode): model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) logits = model(features, training=False) if mode == tf.estimator.ModeKeys.PREDICT: predictions = {'logits': logits} return tf.estimator.EstimatorSpec(labels=labels, predictions=predictions) optimizer = tf.compat.v1.train.GradientDescentOptimizer( learning_rate=LEARNING_RATE) loss = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True)(labels, logits) if mode == tf.estimator.ModeKeys.EVAL: return tf.estimator.EstimatorSpec(mode, loss=loss) return tf.estimator.EstimatorSpec( mode=mode, loss=loss, train_op=optimizer.minimize( loss, tf.compat.v1.train.get_or_create_global_step())) ###Output _____no_output_____ ###Markdown Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. MultiWorkerMirroredStrategyTo train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. 
It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distribute_strategy.ipynb) has more details about this strategy. ###Code strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() ###Output _____no_output_____ ###Markdown `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train and evaluate the modelNext, specify the distribution strategy in the `RunConfig` for the estimator, and train and evaluate by invoking `tf.estimator.train_and_evaluate`. This tutorial distributes only the training by specifying the strategy via `train_distribute`. It is also possible to distribute the evaluation via `eval_distribute`. ###Code config = tf.estimator.RunConfig(train_distribute=strategy) classifier = tf.estimator.Estimator( model_fn=model_fn, model_dir='/tmp/multiworker', config=config) tf.estimator.train_and_evaluate( classifier, train_spec=tf.estimator.TrainSpec(input_fn=input_fn), eval_spec=tf.estimator.EvalSpec(input_fn=input_fn) ) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Multi-worker Training in TensorFlow View on TensorFlow.org Run in Google Colab View source on GitHub OverviewThis tutorial demonstrates how `tf.distribute.Strategy` can be used for distributed multi-worker training with `tf.estimator`. If you write your code using `tf.estimator`, and you're interested in scaling beyond a single machine with high performance, this tutorial is for you.Before getting started, please read the [`tf.distribute.Strategy` guide](../../guide/distribute_strategy.ipynb). The [multi-GPU training tutorial](./keras.ipynb) is also relevant, because this tutorial uses the same model. SetupFirst, setup TensorFlow and the necessary imports. ###Code from __future__ import absolute_import, division, print_function, unicode_literals !pip install tensorflow-gpu==2.0.0-alpha0 import tensorflow_datasets as tfds import tensorflow as tf import os, json ###Output _____no_output_____ ###Markdown Input FunctionThis tutorial uses the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). 
The code here is similar to the [multi-GPU training tutorial](./keras.ipynb) with one key difference: when using Estimator for multi-worker training, it is necessary to shard the dataset by the number of workers to ensure model convergence. The input data is sharded by worker index, so that each worker processes 1/`num_workers` distinct portions of the dataset. ###Code BUFFER_SIZE = 10000 BATCH_SIZE = 64 def input_fn(mode, input_context=None): datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True) mnist_dataset = (datasets['train'] if mode == tf.estimator.ModeKeys.TRAIN else datasets['test']) def scale(image, label): image = tf.cast(image, tf.float32) image /= 255 return image, label if input_context: mnist_dataset = mnist_dataset.shard(input_context.num_input_pipelines, input_context.input_pipeline_id) return mnist_dataset.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) ###Output _____no_output_____ ###Markdown Another reasonable approach to achieve convergence would be to shuffle the dataset with distinct seeds at each worker. Multi-worker ConfigurationOne of the key differences in this tutorial, compared to multi-GPU training, is the multi-worker setup. The `TF_CONFIG` environment variable is the standard way to specify the cluster configuration to each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the entire cluster, namely the workers and parameter servers in the cluster. `task` provides information of the current task. In this example, the task `type` is `worker` and the task `index` is `0`. For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with a single worker on `localhost`. In practice, users would create multiple workers on an external IP address and port, and set `TF_CONFIG` on each worker appropriately, i.e. modify the task `index`.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```NUM_WORKERS = 1IP_ADDRS = ['localhost']PORTS = [12345]os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ['%s:%d' % (IP_ADDRS[w], PORTS[w]) for w in range(NUM_WORKERS)] }, 'task': {'type': 'worker', 'index': 0}})``` Define the modelWrite the layers, the optimizer, and the loss function for training. This tutorial defines the model with Keras layers, similar to the [multi-GPU training tutorial](./keras.ipynb). 
###Code LEARNING_RATE = 1e-4 def model_fn(features, labels, mode): model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) logits = model(features, training=False) if mode == tf.estimator.ModeKeys.PREDICT: predictions = {'logits': logits} return tf.estimator.EstimatorSpec(labels=labels, predictions=predictions) optimizer = tf.compat.v1.train.GradientDescentOptimizer( learning_rate=LEARNING_RATE) loss = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True)(labels, logits) if mode == tf.estimator.ModeKeys.EVAL: return tf.estimator.EstimatorSpec(mode, loss=loss) return tf.estimator.EstimatorSpec( mode=mode, loss=loss, train_op=optimizer.minimize( loss, tf.compat.v1.train.get_or_create_global_step())) ###Output _____no_output_____ ###Markdown Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. MultiWorkerMirroredStrategyTo train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distribute_strategy.ipynb) has more details about this strategy. ###Code strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() ###Output _____no_output_____ ###Markdown `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train and evaluate the modelNext, specify the distribution strategy in the `RunConfig` for the estimator, and train and evaluate by invoking `tf.estimator.train_and_evaluate`. This tutorial distributes only the training by specifying the strategy via `train_distribute`. It is also possible to distribute the evaluation via `eval_distribute`. ###Code config = tf.estimator.RunConfig(train_distribute=strategy) classifier = tf.estimator.Estimator( model_fn=model_fn, model_dir='/tmp/multiworker', config=config) tf.estimator.train_and_evaluate( classifier, train_spec=tf.estimator.TrainSpec(input_fn=input_fn), eval_spec=tf.estimator.EvalSpec(input_fn=input_fn) ) ###Output _____no_output_____
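To make the earlier `TF_CONFIG` discussion concrete, here is a hedged sketch of what the variable might look like on each machine of a hypothetical two-worker cluster; the host addresses and ports are placeholders, not values from the tutorial.

```python
import json

def tf_config_for(worker_index):
    """Build TF_CONFIG for one machine of a hypothetical two-worker cluster.

    The host addresses and ports are placeholders, not values from the
    tutorial; replace them with the real interfaces of your machines.
    """
    return json.dumps({
        'cluster': {'worker': ['10.0.0.1:12345', '10.0.0.2:12345']},
        'task': {'type': 'worker', 'index': worker_index},
    })

# Each machine would export its own copy before building the strategy and
# estimator (kept as comments here, per the Colab warning above):
#   worker 0:  os.environ['TF_CONFIG'] = tf_config_for(0)
#   worker 1:  os.environ['TF_CONFIG'] = tf_config_for(1)
```

Every machine shares the same `cluster` description; only the task `index` differs, which is what "modify the task `index`" refers to above.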
13-Modules-and-Packages.ipynb
###Markdown *This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* Modules and Packages One feature of Python that makes it useful for a wide range of tasks is the fact that it comes "batteries included" – that is, the Python standard library contains useful tools for a wide range of tasks.On top of this, there is a broad ecosystem of third-party tools and packages that offer more specialized functionality.Here we'll take a look at importing standard library modules, tools for installing third-party modules, and a description of how you can make your own modules. Loading Modules: the ``import`` StatementFor loading built-in and third-party modules, Python provides the ``import`` statement.There are a few ways to use the statement, which we will mention briefly here, from most recommended to least recommended. Explicit module importExplicit import of a module preserves the module's content in a namespace.The namespace is then used to refer to its contents with a "``.``" between them.For example, here we'll import the built-in ``math`` module and compute the cosine of pi: ###Code import math math.cos(math.pi) ###Output _____no_output_____ ###Markdown Explicit module import by aliasFor longer module names, it's not convenient to use the full module name each time you access some element.For this reason, we'll commonly use the "``import ... as ...``" pattern to create a shorter alias for the namespace.For example, the NumPy (Numerical Python) package, a popular third-party package useful for data science, is by convention imported under the alias ``np``: ###Code import numpy as np np.cos(np.pi) ###Output _____no_output_____ ###Markdown Explicit import of module contentsSometimes rather than importing the module namespace, you would just like to import a few particular items from the module.This can be done with the "``from ... import ...``" pattern.For example, we can import just the ``cos`` function and the ``pi`` constant from the ``math`` module: ###Code from math import cos, pi cos(pi) ###Output _____no_output_____ ###Markdown Implicit import of module contentsFinally, it is sometimes useful to import the entirety of the module contents into the local namespace.This can be done with the "``from ... 
import *``" pattern: ###Code from math import * sin(pi) ** 2 + cos(pi) ** 2 ###Output _____no_output_____ ###Markdown This pattern should be used sparingly, if at all.The problem is that such imports can sometimes overwrite function names that you do not intend to overwrite, and the implicitness of the statement makes it difficult to determine what has changed.For example, Python has a built-in ``sum`` function that can be used for various operations: ###Code help(sum) ###Output _____no_output_____ ###Markdown We can use this to compute the sum of a sequence, starting with a certain value (here, we'll start with ``-1``): ###Code sum(range(5), -1) ###Output _____no_output_____ ###Markdown Now observe what happens if we make the *exact same function call* after importing ``*`` from ``numpy``: ###Code from numpy import * sum(range(5), -1) ###Output _____no_output_____ ###Markdown Modules and Packages One feature of Python that makes it useful for a wide range of tasks is the fact that it comes "batteries included" – that is, the Python standard library contains useful tools for a wide range of tasks.On top of this, there is a broad ecosystem of third-party tools and packages which offer more specialized functionality.Here we'll take a look at importing standard library modules, tools for installing third-party modules, and a description of how you can make your own modules. Loading Modules: the ``import`` StatementFor loading built-in and third-party modules, Python provides the ``import`` statement.There are a few ways to use the statement, which we will mention briefly here, from most recommended to least recommended. 1. Explicit module importExplicit import of a module preserves the modules content in a namespace.The namespace is then used to refer to its contents with a "``.``" between them.For example, here we'll import the built-in ``math`` module and compute the sine of pi: ###Code import math math.cos(math.pi) ###Output _____no_output_____ ###Markdown 2. Explicit module import by aliasFor longer module names, it's not convenient to use the full module name each time you access some element.For this reason, we'll commonly use the "``import ... as ...``" pattern to create a shorter alias for the namespace.For example, the ``numpy`` (Numerical Python) package, a popular third-party package useful for data science, is by convention imported under the alias ``np``: ###Code import numpy as np np.cos(np.pi) ###Output _____no_output_____ ###Markdown 3. Explicit import of module contentsSometimes rather than importing the module namespace, you would just like to import a few particular items from the module.This can be done with the "``from ... import ...``" pattern.For example, we can import just the ``cos`` function and the ``pi`` constant from the ``math`` module: ###Code from math import cos, pi cos(pi) ###Output _____no_output_____ ###Markdown 4. Implicit import of module contentsFinally, it is sometimes useful to import the entirety of the module contents into the local namespace.This can be done with the "``from ... 
import *``" pattern: ###Code from math import * sin(pi) ** 2 + cos(pi) ** 2 ###Output _____no_output_____ ###Markdown This pattern should be used sparingly, if at all.The problem is that such imports can sometimes overwrite function names that you do not intend to overwrite, and the implicitness of the statement makes it difficult to determine what has changed.For example, Python has a built-in ``sum`` function which can be used for various operations: ###Code help(sum) ###Output Help on built-in function sum in module builtins: sum(...) sum(iterable[, start]) -> value Return the sum of an iterable of numbers (NOT strings) plus the value of parameter 'start' (which defaults to 0). When the iterable is empty, return start. ###Markdown We can use this to compute the sum of a sequence, starting with a certain value (here, we'll start with ``-1``): ###Code sum(range(5), -1) ###Output _____no_output_____ ###Markdown Now observe what happens if we make the *exact same function call* after importing ``*`` from ``numpy``: ###Code from numpy import * sum(range(5), -1) ###Output _____no_output_____ ###Markdown Modules and Packages 模块和包 > One feature of Python that makes it useful for a wide range of tasks is the fact that it comes "batteries included" – that is, the Python standard library contains useful tools for a wide range of tasks.On top of this, there is a broad ecosystem of third-party tools and packages that offer more specialized functionality.Here we'll take a look at importing standard library modules, tools for installing third-party modules, and a description of how you can make your own modules.Python能够胜任大范围的任务的一大原因是它”自带电池“ - 也就是说,Python的标准库包含着很多有用的工具,能够适用与多种范围任务场合。在这之上,Python还有一个很广泛的生态,很多第三方的工具和包提供着许多特殊的功能。本章我们来讨论一下引入标准库,安装第三方模块的工具和如何创建你自己的模块。 Loading Modules: the ``import`` Statement 载入模块:`import` 语句> For loading built-in and third-party modules, Python provides the ``import`` statement.There are a few ways to use the statement, which we will mention briefly here, from most recommended to least recommended.要载入标准库或第三方的模块,Python提供了`import`语句。使用`import`有多种方法,我们会在这里简要介绍,从最推荐的方式到最不推荐的方式。 Explicit module import 明确的载入模块> Explicit import of a module preserves the module's content in a namespace.The namespace is then used to refer to its contents with a "``.``" between them.For example, here we'll import the built-in ``math`` module and compute the cosine of pi:明确的模块载入会将模块的内容保留在模块的命名空间中。我们可以使用`.`符号在模块命名空间与模块内容之间进行引用。例如,下面我们会载入标准库的`math`模块然后计算pi的余弦值: ###Code import math math.cos(math.pi) ###Output _____no_output_____ ###Markdown Explicit module import by alias 明确的载入模块并使用别名> For longer module names, it's not convenient to use the full module name each time you access some element.For this reason, we'll commonly use the "``import ... as ...``" pattern to create a shorter alias for the namespace.For example, the NumPy (Numerical Python) package, a popular third-party package useful for data science, is by convention imported under the alias ``np``:对于模块名称较长的情况,每次使用全名不是特别方便,因此,我们可以使用`import ... as ...`语法导入模块并给模块起一个短的别名。例如,NumPy(Numerical Python)包是一个数据科学领域特别受欢迎的包,我们惯例会载入它并起一个别名`np`: ###Code import numpy as np np.cos(np.pi) ###Output _____no_output_____ ###Markdown Explicit import of module contents 明确的载入模块的内容> Sometimes rather than importing the module namespace, you would just like to import a few particular items from the module.This can be done with the "``from ... 
import ...``" pattern.For example, we can import just the ``cos`` function and the ``pi`` constant from the ``math`` module:有时我们希望载入模块当中一些特定的内容,而不是载入模块的命名空间。使用`from ... import ...`可以满足这个要求。例如,我们可以从`math`模块中载入`cos`函数和`pi`常量: ###Code from math import cos, pi cos(pi) ###Output _____no_output_____ ###Markdown Implicit import of module contents 隐性的载入模块的内容> Finally, it is sometimes useful to import the entirety of the module contents into the local namespace.This can be done with the "``from ... import *``" pattern:最后,我们有时需要将整个模块的内容载入到本地命名空间。使用`from ... import *`可以做到这一点: ###Code from math import * sin(pi) ** 2 + cos(pi) ** 2 ###Output _____no_output_____ ###Markdown > This pattern should be used sparingly, if at all.The problem is that such imports can sometimes overwrite function names that you do not intend to overwrite, and the implicitness of the statement makes it difficult to determine what has changed.这种方法应该谨慎使用。因为这种载入方式可能会意外地覆盖了本地命名空间的同名内容,而且这样隐性的载入方式也会导致查错很困难。> For example, Python has a built-in ``sum`` function that can be used for various operations:例如,Python有一个内建函数`sum`可以用于计算一个迭代的求和: ###Code help(sum) ###Output Help on built-in function sum in module builtins: sum(iterable, start=0, /) Return the sum of a 'start' value (default: 0) plus an iterable of numbers When the iterable is empty, return the start value. This function is intended specifically for use with numeric values and may reject non-numeric types. ###Markdown > We can use this to compute the sum of a sequence, starting with a certain value (here, we'll start with ``-1``):我们可以用`sum`来计算一个序列的和,还可以指定一个开始值(这里我们使用-1作为开始值): ###Code sum(range(5), -1) ###Output _____no_output_____ ###Markdown > Now observe what happens if we make the *exact same function call* after importing ``*`` from ``numpy``:现在我们试一下将`numpy`的所有内容载入到本地命名空间: ###Code from numpy import * sum(range(5), -1) ###Output _____no_output_____ ###Markdown *This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* モジュールとパッケージ Modules and Packages One feature of Python that makes it useful for a wide range of tasks is the fact that it comes "batteries included" – that is, the Python standard library contains useful tools for a wide range of tasks.On top of this, there is a broad ecosystem of third-party tools and packages that offer more specialized functionality.Here we'll take a look at importing standard library modules, tools for installing third-party modules, and a description of how you can make your own modules.幅広いタスクに役立つPythonの1つの機能は、「バッテリーが含まれる」ということです。つまり、Python標準ライブラリには、幅広いタスクに役立つツールが含まれています。さらに、より専門的な機能を提供するサードパーティのツールとパッケージの幅広いエコシステムがあります。ここでは、標準ライブラリモジュールのインポート、サードパーティのモジュールをインストールするためのツール、独自のモジュールを作成する方法について説明します。 モジュールの読み込み: `` import``ステートメントビルトインおよびサードパーティのモジュールをロードするために、Pythonは `` import``ステートメントを提供します。ステートメントを使用する方法はいくつかありますが、ここでは簡単に説明しますが、推奨される方法から推奨されない方法があります。 Loading Modules: the ``import`` StatementFor loading built-in and third-party modules, Python provides the ``import`` statement.There are a few ways to use the statement, which we will mention briefly here, from most recommended to least recommended. 
明示的なモジュールのインポートモジュールを明示的にインポートすると、モジュールのコンテンツが名前空間に保持されます。次に、名前空間を使用して、"``.``" を間に挟んでコンテンツを参照します。たとえば、ここでは組み込みの``math``モジュールをインポートして、piの余弦を計算します。 Explicit module importExplicit import of a module preserves the module's content in a namespace.The namespace is then used to refer to its contents with a "``.``" between them.For example, here we'll import the built-in ``math`` module and compute the cosine of pi: ###Code import math math.cos(math.pi) ###Output _____no_output_____ ###Markdown エイリアスによる明示的なモジュールのインポートより長いモジュール名の場合、要素にアクセスするたびに完全なモジュール名を使用するのは便利ではありません。このため、一般的には、「 `` import ... as ... ``」パターンを使用して、名前空間の短いエイリアスを作成します。たとえば、データサイエンスに役立つ人気のあるサードパーティのパッケージであるNumPy(Numerical Python)パッケージは、慣習的にエイリアス `` np``としてインポートされます。 Explicit module import by aliasFor longer module names, it's not convenient to use the full module name each time you access some element.For this reason, we'll commonly use the "``import ... as ...``" pattern to create a shorter alias for the namespace.For example, the NumPy (Numerical Python) package, a popular third-party package useful for data science, is by convention imported under the alias ``np``: ###Code import numpy as np np.cos(np.pi) ###Output _____no_output_____ ###Markdown モジュールコンテンツの明示的なインポートモジュールの名前空間をインポートするのではなく、いくつかの特定のアイテムをモジュールからインポートしたい場合があります。これは、「from ... import ...」パターンで実行できます。たとえば、「math」モジュールから「cos」関数と「pi」定数だけをインポートできます: Explicit import of module contentsSometimes rather than importing the module namespace, you would just like to import a few particular items from the module.This can be done with the "``from ... import ...``" pattern.For example, we can import just the ``cos`` function and the ``pi`` constant from the ``math`` module: ###Code from math import cos, pi cos(pi) ###Output _____no_output_____ ###Markdown モジュールコンテンツの暗黙的なインポート最後に、モジュールのコンテンツ全体をローカルの名前空間にインポートすると便利な場合があります。これは、「from ... import *」パターンで実行できます。 Implicit import of module contentsFinally, it is sometimes useful to import the entirety of the module contents into the local namespace.This can be done with the "``from ... import *``" pattern: ###Code from math import * sin(pi) ** 2 + cos(pi) ** 2 ###Output _____no_output_____ ###Markdown このパターンは、使用する場合は控えめに使用する必要があります。問題は、そのようなインポートにより、上書きするつもりのない関数名が上書きされることがあり、ステートメントの暗黙性により、何が変更されたかを判別することが困難になることです。たとえば、Pythonにはさまざまな操作に使用できる組み込みの `` sum``関数があります:This pattern should be used sparingly, if at all.The problem is that such imports can sometimes overwrite function names that you do not intend to overwrite, and the implicitness of the statement makes it difficult to determine what has changed.For example, Python has a built-in ``sum`` function that can be used for various operations: ###Code help(sum) ###Output Help on built-in function sum in module builtins: sum(...) sum(iterable[, start]) -> value Return the sum of an iterable of numbers (NOT strings) plus the value of parameter 'start' (which defaults to 0). When the iterable is empty, return start. 
###Markdown これを使用して、特定の値で始まるシーケンスの合計を計算できます(ここでは、「-1」で始まります):We can use this to compute the sum of a sequence, starting with a certain value (here, we'll start with ``-1``): ###Code a = range(5) print(*a) b = range(0,5) print(*b) b = range(5) j = 0 for i in b: j = j + i j c = range(-1,5) print(*c) c = range(-1,5) j = 0 for i in c: j = j + i j sum(range(5), -1) ###Output _____no_output_____ ###Markdown *まったく同じ関数呼び出し*を実行するとどうなるかを見てみましょう---`` numpy``から `` * ``をインポートした後:Now observe what happens if we make the *exact same function call* ---after importing ``*`` from ``numpy``: ###Code from numpy import * a = range(5) print(*a) sum(range(5), -1)  ###Output _____no_output_____ ###Markdown 結果は1つずれています!この理由は、 `` import * ``ステートメントが組み込みの `` sum``関数を `` numpy.sum``関数に置き換える*ということです。これは、異なる呼び出しシグネチャを持っています:前者では、 '「-1」から始まる「範囲(5)」を合計しています; 後者では、最後の軸(「-1」で示される)に沿って `` range(5) ``を合計しています。これは、「インポート*」を使用するときに注意が払われない場合に発生する可能性のあるタイプの状況です。このため、何をしているのか正確に理解していない限り、これを回避するのが最善です。The result is off by one!The reason for this is that the ``import *`` statement *replaces* the built-in ``sum`` function with the ``numpy.sum`` function, which has a different call signature: in the former, we're summing ``range(5)`` starting at ``-1``; in the latter, we're summing ``range(5)`` along the last axis (indicated by ``-1``).This is the type of situation that may arise if care is not taken when using "``import *``" – for this reason, it is best to avoid this unless you know exactly what you are doing. なぜ、Numpy.sum関数は、10になるのかわかった。最後の軸(-1)という意味がわからなかったが、これは、方向と考える。合計する値ではなく。https://deepage.net/features/numpy-sum.htmlnumpysum ###Code import numpy as np help(np.sum) ###Output Help on function sum in module numpy: sum(a, axis=None, dtype=None, out=None, keepdims=<no value>, initial=<no value>, where=<no value>) Sum of array elements over a given axis. Parameters ---------- a : array_like Elements to sum. axis : None or int or tuple of ints, optional Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis. .. versionadded:: 1.7.0 If axis is a tuple of ints, a sum is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before. dtype : dtype, optional The type of the returned array and of the accumulator in which the elements are summed. The dtype of `a` is used by default unless `a` has an integer dtype of less precision than the default platform integer. In that case, if `a` is signed then the platform integer is used while if `a` is unsigned then an unsigned integer of the same precision as the platform integer is used. out : ndarray, optional Alternative output array in which to place the result. It must have the same shape as the expected output, but the type of the output values will be cast if necessary. keepdims : bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the `sum` method of sub-classes of `ndarray`, however any non-default value will be. If the sub-class' method does not implement `keepdims` any exceptions will be raised. initial : scalar, optional Starting value for the sum. See `~numpy.ufunc.reduce` for details. .. versionadded:: 1.15.0 where : array_like of bool, optional Elements to include in the sum. 
See `~numpy.ufunc.reduce` for details. .. versionadded:: 1.17.0 Returns ------- sum_along_axis : ndarray An array with the same shape as `a`, with the specified axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar is returned. If an output array is specified, a reference to `out` is returned. See Also -------- ndarray.sum : Equivalent method. add.reduce : Equivalent functionality of `add`. cumsum : Cumulative sum of array elements. trapz : Integration of array values using the composite trapezoidal rule. mean, average Notes ----- Arithmetic is modular when using integer types, and no error is raised on overflow. The sum of an empty array is the neutral element 0: >>> np.sum([]) 0.0 For floating point numbers the numerical precision of sum (and ``np.add.reduce``) is in general limited by directly adding each number individually to the result causing rounding errors in every step. However, often numpy will use a numerically better approach (partial pairwise summation) leading to improved precision in many use-cases. This improved precision is always provided when no ``axis`` is given. When ``axis`` is given, it will depend on which axis is summed. Technically, to provide the best speed possible, the improved precision is only used when the summation is along the fast axis in memory. Note that the exact precision may vary depending on other parameters. In contrast to NumPy, Python's ``math.fsum`` function uses a slower but more precise approach to summation. Especially when summing a large number of lower precision floating point numbers, such as ``float32``, numerical errors can become significant. In such cases it can be advisable to use `dtype="float64"` to use a higher precision for the output. Examples -------- >>> np.sum([0.5, 1.5]) 2.0 >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32) 1 >>> np.sum([[0, 1], [0, 5]]) 6 >>> np.sum([[0, 1], [0, 5]], axis=0) array([0, 6]) >>> np.sum([[0, 1], [0, 5]], axis=1) array([1, 5]) >>> np.sum([[0, 1], [np.nan, 5]], where=[False, True], axis=1) array([1., 5.]) If the accumulator is too small, overflow occurs: >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8) -128 You can also start the sum with a value other than zero: >>> np.sum([10], initial=5) 15 ###Markdown *This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* Modules and Packages One feature of Python that makes it useful for a wide range of tasks is the fact that it comes "batteries included" – that is, the Python standard library contains useful tools for a wide range of tasks.On top of this, there is a broad ecosystem of third-party tools and packages that offer more specialized functionality.Here we'll take a look at importing standard library modules, tools for installing third-party modules, and a description of how you can make your own modules. Loading Modules: the ``import`` StatementFor loading built-in and third-party modules, Python provides the ``import`` statement.There are a few ways to use the statement, which we will mention briefly here, from most recommended to least recommended. 
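Before walking through each form below, a small sketch (not from the original notebook) may help make the namespace idea concrete: the different import styles only change which names are bound locally, while all of them refer to the same underlying module object.

```python
import math            # binds the name "math" to the module object
import math as m       # binds the alias "m" to the very same module object
from math import pi    # binds only the name "pi" in the local namespace

print(m is math)       # True: one cached module, two names
print(pi == math.pi)   # True: the same constant reached two ways
```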
Explicit module importExplicit import of a module preserves the module's content in a namespace.The namespace is then used to refer to its contents with a "``.``" between them.For example, here we'll import the built-in ``math`` module and compute the cosine of pi: ###Code import math math.cos(math.pi) ###Output _____no_output_____ ###Markdown Explicit module import by aliasFor longer module names, it's not convenient to use the full module name each time you access some element.For this reason, we'll commonly use the "``import ... as ...``" pattern to create a shorter alias for the namespace.For example, the NumPy (Numerical Python) package, a popular third-party package useful for data science, is by convention imported under the alias ``np``: ###Code import numpy as np np.cos(np.pi) ###Output _____no_output_____ ###Markdown Explicit import of module contentsSometimes rather than importing the module namespace, you would just like to import a few particular items from the module.This can be done with the "``from ... import ...``" pattern.For example, we can import just the ``cos`` function and the ``pi`` constant from the ``math`` module: ###Code from math import cos, pi cos(pi) ###Output _____no_output_____ ###Markdown Implicit import of module contentsFinally, it is sometimes useful to import the entirety of the module contents into the local namespace.This can be done with the "``from ... import *``" pattern: ###Code from math import * sin(pi) ** 2 + cos(pi) ** 2 ###Output _____no_output_____ ###Markdown This pattern should be used sparingly, if at all.The problem is that such imports can sometimes overwrite function names that you do not intend to overwrite, and the implicitness of the statement makes it difficult to determine what has changed.For example, Python has a built-in ``sum`` function that can be used for various operations: ###Code help(sum) ###Output Help on function sum in module numpy: sum(a, axis=None, dtype=None, out=None, keepdims=<no value>, initial=<no value>, where=<no value>) Sum of array elements over a given axis. Parameters ---------- a : array_like Elements to sum. axis : None or int or tuple of ints, optional Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis. .. versionadded:: 1.7.0 If axis is a tuple of ints, a sum is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before. dtype : dtype, optional The type of the returned array and of the accumulator in which the elements are summed. The dtype of `a` is used by default unless `a` has an integer dtype of less precision than the default platform integer. In that case, if `a` is signed then the platform integer is used while if `a` is unsigned then an unsigned integer of the same precision as the platform integer is used. out : ndarray, optional Alternative output array in which to place the result. It must have the same shape as the expected output, but the type of the output values will be cast if necessary. keepdims : bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the `sum` method of sub-classes of `ndarray`, however any non-default value will be. 
If the sub-class' method does not implement `keepdims` any exceptions will be raised. initial : scalar, optional Starting value for the sum. See `~numpy.ufunc.reduce` for details. .. versionadded:: 1.15.0 where : array_like of bool, optional Elements to include in the sum. See `~numpy.ufunc.reduce` for details. .. versionadded:: 1.17.0 Returns ------- sum_along_axis : ndarray An array with the same shape as `a`, with the specified axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar is returned. If an output array is specified, a reference to `out` is returned. See Also -------- ndarray.sum : Equivalent method. add.reduce : Equivalent functionality of `add`. cumsum : Cumulative sum of array elements. trapz : Integration of array values using the composite trapezoidal rule. mean, average Notes ----- Arithmetic is modular when using integer types, and no error is raised on overflow. The sum of an empty array is the neutral element 0: >>> np.sum([]) 0.0 For floating point numbers the numerical precision of sum (and ``np.add.reduce``) is in general limited by directly adding each number individually to the result causing rounding errors in every step. However, often numpy will use a numerically better approach (partial pairwise summation) leading to improved precision in many use-cases. This improved precision is always provided when no ``axis`` is given. When ``axis`` is given, it will depend on which axis is summed. Technically, to provide the best speed possible, the improved precision is only used when the summation is along the fast axis in memory. Note that the exact precision may vary depending on other parameters. In contrast to NumPy, Python's ``math.fsum`` function uses a slower but more precise approach to summation. Especially when summing a large number of lower precision floating point numbers, such as ``float32``, numerical errors can become significant. In such cases it can be advisable to use `dtype="float64"` to use a higher precision for the output. 
Examples -------- >>> np.sum([0.5, 1.5]) 2.0 >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32) 1 >>> np.sum([[0, 1], [0, 5]]) 6 >>> np.sum([[0, 1], [0, 5]], axis=0) array([0, 6]) >>> np.sum([[0, 1], [0, 5]], axis=1) array([1, 5]) >>> np.sum([[0, 1], [np.nan, 5]], where=[False, True], axis=1) array([1., 5.]) If the accumulator is too small, overflow occurs: >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8) -128 You can also start the sum with a value other than zero: >>> np.sum([10], initial=5) 15 ###Markdown We can use this to compute the sum of a sequence, starting with a certain value (here, we'll start with ``-1``): ###Code sum(range(5), -1) ###Output _____no_output_____ ###Markdown Now observe what happens if we make the *exact same function call* after importing ``*`` from ``numpy``: ###Code from numpy import * sum(range(5), -1) ###Output _____no_output_____ ###Markdown *This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* Modules and Packages One feature of Python that makes it useful for a wide range of tasks is the fact that it comes "batteries included" – that is, the Python standard library contains useful tools for a wide range of tasks.On top of this, there is a broad ecosystem of third-party tools and packages that offer more specialized functionality.Here we'll take a look at importing standard library modules, tools for installing third-party modules, and a description of how you can make your own modules. Loading Modules: the ``import`` StatementFor loading built-in and third-party modules, Python provides the ``import`` statement.There are a few ways to use the statement, which we will mention briefly here, from most recommended to least recommended. Explicit module importExplicit import of a module preserves the module's content in a namespace.The namespace is then used to refer to its contents with a "``.``" between them.For example, here we'll import the built-in ``math`` module and compute the cosine of pi: ###Code import math math.cos(math.pi) ###Output _____no_output_____ ###Markdown Explicit module import by aliasFor longer module names, it's not convenient to use the full module name each time you access some element.For this reason, we'll commonly use the "``import ... as ...``" pattern to create a shorter alias for the namespace.For example, the NumPy (Numerical Python) package, a popular third-party package useful for data science, is by convention imported under the alias ``np``: ###Code import numpy as np np.cos(np.pi) ###Output _____no_output_____ ###Markdown Explicit import of module contentsSometimes rather than importing the module namespace, you would just like to import a few particular items from the module.This can be done with the "``from ... 
import ...``" pattern.For example, we can import just the ``cos`` function and the ``pi`` constant from the ``math`` module: ###Code from math import cos, pi cos(pi) ###Output _____no_output_____ ###Markdown Implicit import of module contentsFinally, it is sometimes useful to import the entirety of the module contents into the local namespace.This can be done with the "``from ... import *``" pattern: ###Code from math import * sin(pi) ** 2 + cos(pi) ** 2 ###Output _____no_output_____ ###Markdown This pattern should be used sparingly, if at all.The problem is that such imports can sometimes overwrite function names that you do not intend to overwrite, and the implicitness of the statement makes it difficult to determine what has changed.For example, Python has a built-in ``sum`` function that can be used for various operations: ###Code help(sum) ###Output Help on built-in function sum in module builtins: sum(...) sum(iterable[, start]) -> value Return the sum of an iterable of numbers (NOT strings) plus the value of parameter 'start' (which defaults to 0). When the iterable is empty, return start. ###Markdown We can use this to compute the sum of a sequence, starting with a certain value (here, we'll start with ``-1``): ###Code sum(range(5), -1) ###Output _____no_output_____ ###Markdown Now observe what happens if we make the *exact same function call* after importing ``*`` from ``numpy``: ###Code from numpy import * sum(range(5), -1) ###Output _____no_output_____ ###Markdown *This notebook comes from [A Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas (OReilly Media, 2016). This content is licensed [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE). The full notebook listing is available at https://github.com/jakevdp/WhirlwindTourOfPython.* Modules and Packages One feature of Python that makes it useful for a wide range of tasks is the fact that it comes "batteries included" – that is, the Python standard library contains useful tools for a wide range of tasks.On top of this, there is a broad ecosystem of third-party tools and packages that offer more specialized functionality.Here we'll take a look at importing standard library modules, tools for installing third-party modules, and a description of how you can make your own modules. Loading Modules: the ``import`` StatementFor loading built-in and third-party modules, Python provides the ``import`` statement.There are a few ways to use the statement, which we will mention briefly here, from most recommended to least recommended. Explicit module importExplicit import of a module preserves the module's content in a namespace.The namespace is then used to refer to its contents with a "``.``" between them.For example, here we'll import the built-in ``math`` module and compute the cosine of pi: ###Code import math math.cos(math.pi) ###Output _____no_output_____ ###Markdown Explicit module import by aliasFor longer module names, it's not convenient to use the full module name each time you access some element.For this reason, we'll commonly use the "``import ... 
as ...``" pattern to create a shorter alias for the namespace.For example, the NumPy (Numerical Python) package, a popular third-party package useful for data science, is by convention imported under the alias ``np``: ###Code import numpy as np np.cos(np.pi) ###Output _____no_output_____ ###Markdown Explicit import of module contentsSometimes rather than importing the module namespace, you would just like to import a few particular items from the module.This can be done with the "``from ... import ...``" pattern.For example, we can import just the ``cos`` function and the ``pi`` constant from the ``math`` module: ###Code from math import cos, pi cos(pi) ###Output _____no_output_____ ###Markdown Implicit import of module contentsFinally, it is sometimes useful to import the entirety of the module contents into the local namespace.This can be done with the "``from ... import *``" pattern: ###Code from math import * sin(pi) ** 2 + cos(pi) ** 2 ###Output _____no_output_____ ###Markdown This pattern should be used sparingly, if at all.The problem is that such imports can sometimes overwrite function names that you do not intend to overwrite, and the implicitness of the statement makes it difficult to determine what has changed.For example, Python has a built-in ``sum`` function that can be used for various operations: ###Code help(sum) ###Output Help on built-in function sum in module builtins: sum(...) sum(iterable[, start]) -> value Return the sum of an iterable of numbers (NOT strings) plus the value of parameter 'start' (which defaults to 0). When the iterable is empty, return start. ###Markdown We can use this to compute the sum of a sequence, starting with a certain value (here, we'll start with ``-1``): ###Code sum(range(5), -1) ###Output _____no_output_____ ###Markdown Now observe what happens if we make the *exact same function call* after importing ``*`` from ``numpy``: ###Code from numpy import * sum(range(5), -1) ###Output _____no_output_____ ###Markdown Explicit import of module contentsSometimes rather than importing the module namespace, you would just like to import a few particular items from the module.This can be done with the "``from ... import ...``" pattern.For example, we can import just the ``cos`` function and the ``pi`` constant from the ``math`` module: ###Code from math import cos, pi cos(pi) ###Output _____no_output_____ ###Markdown Implicit import of module contentsFinally, it is sometimes useful to import the entirety of the module contents into the local namespace.This can be done with the "``from ... import *``" pattern: ###Code from math import * sin(pi) ** 2 + cos(pi) ** 2 ###Output _____no_output_____ ###Markdown This pattern should be used sparingly, if at all.The problem is that such imports can sometimes overwrite function names that you do not intend to overwrite, and the implicitness of the statement makes it difficult to determine what has changed.For example, Python has a built-in ``sum`` function that can be used for various operations: ###Code help(sum) ###Output Help on built-in function sum in module builtins: sum(...) sum(iterable[, start]) -> value Return the sum of an iterable of numbers (NOT strings) plus the value of parameter 'start' (which defaults to 0). When the iterable is empty, return start. 
###Markdown We can use this to compute the sum of a sequence, starting with a certain value (here, we'll start with ``-1``): ###Code sum(range(5), -1) ###Output _____no_output_____ ###Markdown Now observe what happens if we make the *exact same function call* after importing ``*`` from ``numpy``: ###Code from numpy import * sum(range(5), -1) ###Output _____no_output_____ ###Markdown *Este notebook es una adaptación realizada por J. Rafael Rodríguez Galván del material "[Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp)" de Jake VanderPlas; tanto el [contenido original](https://github.com/jakevdp/WhirlwindTourOfPython) como la [adpatación actual](https://github.com/rrgalvan/PythonIntroMasterMatemat)] están disponibles en Github.**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* Modules and Packages One feature of Python that makes it useful for a wide range of tasks is the fact that it comes "batteries included" – that is, the Python standard library contains useful tools for a wide range of tasks.On top of this, there is a broad ecosystem of third-party tools and packages that offer more specialized functionality.Here we'll take a look at importing standard library modules, tools for installing third-party modules, and a description of how you can make your own modules. Loading Modules: the ``import`` StatementFor loading built-in and third-party modules, Python provides the ``import`` statement.There are a few ways to use the statement, which we will mention briefly here, from most recommended to least recommended. Explicit module importExplicit import of a module preserves the module's content in a namespace.The namespace is then used to refer to its contents with a "``.``" between them.For example, here we'll import the built-in ``math`` module and compute the cosine of pi: ###Code import math math.cos(math.pi) ###Output _____no_output_____ ###Markdown Explicit module import by aliasFor longer module names, it's not convenient to use the full module name each time you access some element.For this reason, we'll commonly use the "``import ... as ...``" pattern to create a shorter alias for the namespace.For example, the NumPy (Numerical Python) package, a popular third-party package useful for data science, is by convention imported under the alias ``np``: ###Code import numpy as np np.cos(np.pi) ###Output _____no_output_____ ###Markdown Explicit import of module contentsSometimes rather than importing the module namespace, you would just like to import a few particular items from the module.This can be done with the "``from ... import ...``" pattern.For example, we can import just the ``cos`` function and the ``pi`` constant from the ``math`` module: ###Code from math import cos, pi cos(pi) ###Output _____no_output_____ ###Markdown Implicit import of module contentsFinally, it is sometimes useful to import the entirety of the module contents into the local namespace.This can be done with the "``from ... 
import *``" pattern: ###Code from math import * sin(pi) ** 2 + cos(pi) ** 2 ###Output _____no_output_____ ###Markdown This pattern should be used sparingly, if at all.The problem is that such imports can sometimes overwrite function names that you do not intend to overwrite, and the implicitness of the statement makes it difficult to determine what has changed.For example, Python has a built-in ``sum`` function that can be used for various operations: ###Code help(sum) ###Output Help on built-in function sum in module builtins: sum(...) sum(iterable[, start]) -> value Return the sum of an iterable of numbers (NOT strings) plus the value of parameter 'start' (which defaults to 0). When the iterable is empty, return start. ###Markdown We can use this to compute the sum of a sequence, starting with a certain value (here, we'll start with ``-1``): ###Code sum(range(5), -1) ###Output _____no_output_____ ###Markdown Now observe what happens if we make the *exact same function call* after importing ``*`` from ``numpy``: ###Code from numpy import * sum(range(5), -1) ###Output _____no_output_____ ###Markdown Modules and Packages One feature of Python that makes it useful for a wide range of tasks is the fact that it comes "batteries included" – that is, the Python standard library contains useful tools for a wide range of tasks.On top of this, there is a broad ecosystem of third-party tools and packages that offer more specialized functionality.Here we'll take a look at importing standard library modules, tools for installing third-party modules, and a description of how you can make your own modules. Loading Modules: the ``import`` StatementFor loading built-in and third-party modules, Python provides the ``import`` statement.There are a few ways to use the statement, which we will mention briefly here, from most recommended to least recommended. Explicit module importExplicit import of a module preserves the module's content in a namespace.The namespace is then used to refer to its contents with a "``.``" between them.For example, here we'll import the built-in ``math`` module and compute the cosine of pi: ###Code import math math.cos(math.pi) ###Output _____no_output_____ ###Markdown Explicit module import by aliasFor longer module names, it's not convenient to use the full module name each time you access some element.For this reason, we'll commonly use the "``import ... as ...``" pattern to create a shorter alias for the namespace.For example, the NumPy (Numerical Python) package, a popular third-party package useful for data science, is by convention imported under the alias ``np``: ###Code import numpy as np np.cos(np.pi) ###Output _____no_output_____ ###Markdown Explicit import of module contentsSometimes rather than importing the module namespace, you would just like to import a few particular items from the module.This can be done with the "``from ... import ...``" pattern.For example, we can import just the ``cos`` function and the ``pi`` constant from the ``math`` module: ###Code from math import cos, pi cos(pi) ###Output _____no_output_____ ###Markdown Implicit import of module contentsFinally, it is sometimes useful to import the entirety of the module contents into the local namespace.This can be done with the "``from ... 
import *``" pattern: ###Code from math import * sin(pi) ** 2 + cos(pi) ** 2 ###Output _____no_output_____ ###Markdown This pattern should be used sparingly, if at all.The problem is that such imports can sometimes overwrite function names that you do not intend to overwrite, and the implicitness of the statement makes it difficult to determine what has changed.For example, Python has a built-in ``sum`` function that can be used for various operations: ###Code help(sum) ###Output Help on built-in function sum in module builtins: sum(...) sum(iterable[, start]) -> value Return the sum of an iterable of numbers (NOT strings) plus the value of parameter 'start' (which defaults to 0). When the iterable is empty, return start. ###Markdown We can use this to compute the sum of a sequence, starting with a certain value (here, we'll start with ``-1``): ###Code sum(range(5), -1) ###Output _____no_output_____ ###Markdown Now observe what happens if we make the *exact same function call* after importing ``*`` from ``numpy``: ###Code from numpy import * sum(range(5), -1) ###Output _____no_output_____ ###Markdown *This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* Modules and Packages One feature of Python that makes it useful for a wide range of tasks is the fact that it comes "batteries included" – that is, the Python standard library contains useful tools for a wide range of tasks.On top of this, there is a broad ecosystem of third-party tools and packages that offer more specialized functionality.Here we'll take a look at importing standard library modules, tools for installing third-party modules, and a description of how you can make your own modules. Loading Modules: the ``import`` StatementFor loading built-in and third-party modules, Python provides the ``import`` statement.There are a few ways to use the statement, which we will mention briefly here, from most recommended to least recommended. Explicit module importExplicit import of a module preserves the module's content in a namespace.The namespace is then used to refer to its contents with a "``.``" between them.For example, here we'll import the built-in ``math`` module and compute the cosine of pi: ###Code import math math.cos(math.pi) ###Output _____no_output_____ ###Markdown Explicit module import by aliasFor longer module names, it's not convenient to use the full module name each time you access some element.For this reason, we'll commonly use the "``import ... as ...``" pattern to create a shorter alias for the namespace.For example, the NumPy (Numerical Python) package, a popular third-party package useful for data science, is by convention imported under the alias ``np``: ###Code import numpy as np np.cos(np.pi) ###Output _____no_output_____
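The cells above show the pitfall in action: after `from numpy import *`, the name `sum` refers to `numpy.sum`, so `sum(range(5), -1)` returns 10 (with `-1` read as an axis) instead of the built-in's 9 (with `-1` as the starting value). As a hedged sketch, not part of the original notebook, here is one way to detect and undo that shadowing:

```python
import builtins

# Assuming the "from numpy import *" cell above has run, the local name
# "sum" now shadows the built-in:
print(sum is builtins.sum)   # False after the star import

# Rebinding the name restores the built-in behaviour:
sum = builtins.sum
print(sum(range(5), -1))     # 9: the built-in starts the total at -1
```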
Notion/Notion_Get_page.ipynb
###Markdown Notion - Get page Input Import library ###Code import requests import pandas as pd import json from pprint import pprint ###Output _____no_output_____ ###Markdown Variables ###Code # Enter Notion token API TOKEN_API = 'YOUR_TOKEN_API' # Enter page url PAGE_URL = 'YOUR_PAGE_URL' # Notion version _VERSION = '2021-08-16' ###Output _____no_output_____ ###Markdown Model ###Code def create_headers(token_api, version): return { 'Authorization': f'Bearer {token_api}', 'Notion-Version': f'{version}', } create_headers(TOKEN_API, _VERSION) def get_id_from_url(database_url): return database_url.split('-')[-1] get_id_from_url(PAGE_URL) ###Output _____no_output_____ ###Markdown Get properties ###Code # make a request to Notion API and receive a Python dictionary def fetch_raw_properties(token_api, page_url): page_id = get_id_from_url(page_url) url = f'https://api.notion.com/v1/pages/{page_id}' headers = create_headers(token_api, _VERSION) res = requests.get(url, headers=headers) try: res.raise_for_status() except requests.HTTPError as e: return e return res.json() page = fetch_raw_properties(TOKEN_API, PAGE_URL) pprint(page) def extract_text(dictionnary): if 'name' in dictionnary: return dictionnary['name'] elif 'plain_text' in dictionnary: return dictionnary['plain_text'] else: return '' def extract_date(dictionnary): ''' For the moment we extract only the starting date of a date field Example {'id': 'prop_1', 'type': 'date', 'date': {'start': '2018-03-21', 'end': None}} ''' return dictionnary['start'] def extract_data(element): ''' input: a dictionnary of a notion property Exemple: {'id': 'W#4k', 'type': 'select', 'select': {'id': 'b305bd26-****-****-****-c78e2034db8f', 'name': 'Client', 'color': 'green'}} output: the string containing the information of the dict. 
(Client in the exemple) ''' if type(element) is dict: dict_type = element['type'] informations = element[dict_type] if type(informations) is dict: if dict_type == 'date': return extract_date(informations) else: return extract_text(informations) elif type(informations) is list: informations = [extract_text(elm) for elm in informations] return ','.join(informations) else: return informations else: return '' def extract_properties(dictionary): return {key: extract_data(elm) for key,elm in dictionary['properties'].items()} extract_properties(page) def clean_meta_data(dictionary): meta_data = dictionary.copy() meta_data['PARENT_TYPE'] = meta_data['parent']['type'] meta_data['PARENT_ID'] = meta_data['parent'][meta_data['PARENT_TYPE']] useless_meta = ['url', 'object', 'parent', 'properties'] [meta_data.pop(useless) for useless in useless_meta] return meta_data clean_meta_data(page) def convert_keys_to_upper(dictionary): return {key.upper(): value for key,value in dictionary.items()} def get_page_properties(token_api, page_url): raw_data = fetch_raw_properties(token_api, page_url) properties = extract_properties(raw_data) meta_data = clean_meta_data(raw_data) properties.update(meta_data) properties = convert_keys_to_upper(properties) return pd.DataFrame([properties]) get_page_properties(TOKEN_API, PAGE_URL) ###Output _____no_output_____ ###Markdown Get content👉 The content of a page is return as a array of blocks by the Notion API ```json{ "object": "block", "id": "9bc30ad4-9373-46a5-84ab-0a7845ee52e6", "created_time": "2021-03-16T16:31:00.000Z", "last_edited_time": "2021-03-16T16:32:00.000Z", "has_children": false, "type": "to_do", "to_do": { "text": [ { "type": "text", "text": { "content": "Lacinato kale", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "Lacinato kale", "href": null } ], "checked": false }}```Each block is a dictionary with different keys:- id *(str)*- has_children *(bool)*- created_time *(str)*- last_edited_time *(str)*- type *(str)*- {type} *(dict)*{type} is an object with type-specific block informationList of block type:- paragraph- heading(1,2,3)- bullet list item- numbered list item- to_do_blocks- toggle block- child page blockMore info here: https://developers.notion.com/reference/block 🚨 BEAWARE OF:- I can't retreive the children element of a block: it's not the same behaviour than the one in Block Object look like it's a bug from the API - Some data information are lost. 
Exemple: the color of the text and the link - blank blocks are count as a paragraph we maybe need to create a new category for them or delete them from the result ###Code def fetch_raw_blocks(token_api, page_url): page_id = get_id_from_url(page_url) url = f'https://api.notion.com/v1/blocks/{page_id}/children' headers = create_headers(token_api, _VERSION) response = requests.get(url, headers=headers) res = requests.get(url, headers=headers) try: res.raise_for_status() except requests.HTTPError as e: return e return res.json()['results'] blocks = fetch_raw_blocks(TOKEN_API, PAGE_URL) pprint(blocks[0]) def extract_text_from_rich_text(rich_text): return rich_text['plain_text'] def extract_text_from_array_of_rich_text(array): content = [extract_text_from_rich_text(rich_text) for rich_text in array] return ' '.join(content) def extract_block_content(block): block_type = block['type'] if block_type.startswith('heading'): array_of_rich_text = block[block_type]['text'] return extract_text_from_array_of_rich_text(array_of_rich_text) elif block_type == 'paragraph': array_of_rich_text = block[block_type]['text'] return extract_text_from_array_of_rich_text(array_of_rich_text) elif block_type.endswith('list_item'): array_of_rich_text = block[block_type]['text'] return extract_text_from_array_of_rich_text(array_of_rich_text) elif block_type == 'to_do': array_of_rich_text = block[block_type]['text'] return extract_text_from_array_of_rich_text(array_of_rich_text) elif block_type == 'toggle': array_of_rich_text = block[block_type]['text'] return extract_text_from_array_of_rich_text(array_of_rich_text) elif block_type == 'child_page': return block[block_type]['title'] first_block = blocks[0] extract_block_content(first_block) def get_page_content(TOKEN_API, PAGE_URL): blocks = fetch_raw_blocks(TOKEN_API, PAGE_URL) page_content = [] for block in blocks: block['content'] = extract_block_content(block) block.pop( block['type']) block.pop('object') block = convert_keys_to_upper(block) page_content.append(block) return pd.DataFrame(page_content) get_page_content(TOKEN_API, PAGE_URL) ###Output _____no_output_____ ###Markdown --- Output 1. Get properties : Table format- PROPERTIES (Majuscule + unstacked)- ID - PARENT_TYPE- PARENT_ID- CREATED_TIME- LAST_EDITED_TIME- ARCHIVED ###Code get_page_properties(TOKEN_API, PAGE_URL) ###Output _____no_output_____ ###Markdown 2. 
Get content : Table format- TYPE- TEXT ("plain_text") (if "paragraph" then concat "plain_text" in list "text")- ID- HAS_CHILDREN- CREATED_TIME- LAST_EDITED_TIME ###Code # get pages content get_page_content(TOKEN_API, PAGE_URL) ###Output _____no_output_____ ###Markdown Notion - Get page **Tags:** notion productivity Input Import library ###Code from notion import Notion import naas ###Output _____no_output_____ ###Markdown Variables ###Code # Enter Notion Token API token = "secret_R1CrUGn8bx9itbJW0Fc9Cc0R9Lmhbnz2ayqEe0GhRPq" # Enter database url url = "https://www.notion.so/Axel-2ccdafe28955478b8c9d70bda0044c86" notion = Notion().connect(token) ###Output _____no_output_____ ###Markdown Model Get properties ###Code page = notion.page() ###Output _____no_output_____ ###Markdown Get content👉 The content of a page is return as a array of blocks by the Notion API ```json{ "object": "block", "id": "9bc30ad4-9373-46a5-84ab-0a7845ee52e6", "created_time": "2021-03-16T16:31:00.000Z", "last_edited_time": "2021-03-16T16:32:00.000Z", "has_children": false, "type": "to_do", "to_do": { "text": [ { "type": "text", "text": { "content": "Lacinato kale", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "Lacinato kale", "href": null } ], "checked": false }}```Each block is a dictionary with different keys:- id *(str)*- has_children *(bool)*- created_time *(str)*- last_edited_time *(str)*- type *(str)*- {type} *(dict)*{type} is an object with type-specific block informationList of block type:- paragraph- heading(1,2,3)- bullet list item- numbered list item- to_do_blocks- toggle block- child page blockMore info here: https://developers.notion.com/reference/block 🚨 BEAWARE OF:- I can't retreive the children element of a block: it's not the same behaviour than the one in Block Object look like it's a bug from the API - Some data information are lost. 
Exemple: the color of the text and the link - blank blocks are count as a paragraph we maybe need to create a new category for them or delete them from the result ###Code def fetch_raw_blocks(token_api, page_url): page_id = get_id_from_url(page_url) url = f'https://api.notion.com/v1/blocks/{page_id}/children' headers = create_headers(token_api, _VERSION) response = requests.get(url, headers=headers) res = requests.get(url, headers=headers) try: res.raise_for_status() except requests.HTTPError as e: return e return res.json()['results'] blocks = fetch_raw_blocks(TOKEN_API, PAGE_URL) pprint(blocks[0]) ###Output _____no_output_____ ###Markdown Functions ###Code _VERSION = '2021-08-16' headers = {'Authorization': f'Bearer {TOKEN_API}', 'Notion-Version': f'{_VERSION}'} def get_id_from_url(url): return url.split('-')[-1] def get(url): res = requests.get(url, headers=headers) try: res.raise_for_status() except requests.HTTPError as e: return e return res.json() def get_content(page_url): # Get id from page page_id = get_id_from_url(page_url) url = f'https://api.notion.com/v1/blocks/{page_id}/children' # Get json res_json = get(url) blocks = res_json.get("results") # Parse json contents = [] for block in blocks: content = { "object": block.get("object"), "id": block.get("id"), "type": block.get("type"), "has_children": block.get("has_children"), "created_time": datetime.strptime(block.get("created_time"), '%Y-%m-%dT%H:%M:%S.000Z'), "last_edited_time": datetime.strptime(block.get("last_edited_time"), '%Y-%m-%dT%H:%M:%S.000Z'), } type_contents = block.get(block.get("type")).get("text") if type_contents is not None: for t in type_contents: annotations = t.get("annotations") content.update(annotations) text = t.get("text") content.update(text) contents.append(content) df = pd.DataFrame(contents) return df ###Output _____no_output_____ ###Markdown Output Get properties : Table format ###Code df_properties = get_properties(PAGE_URL) df_properties PAGE_URL = "https://www.notion.so/naas-official/Weekly-Sync-20-05-2021-f59c898f5ad34587a544f9e06b45be5f" # Get id from page page_id = get_id_from_url(PAGE_URL) url = f'https://api.notion.com/v1/pages/{page_id}' # Get json res_json = get(url) # Parse json content = { "object": res_json.get("object"), "id": res_json.get("id"), "url": res_json.get("url"), "archived": res_json.get("archived"), "created_time": datetime.strptime(res_json.get("created_time"), '%Y-%m-%dT%H:%M:%S.000Z'), "last_edited_time": datetime.strptime(res_json.get("last_edited_time"), '%Y-%m-%dT%H:%M:%S.000Z'), } properties = res_json.get("properties") pprint(properties) for p in properties: print(p) ###Output _____no_output_____ ###Markdown Get content : Table format ###Code df_content = get_content(PAGE_URL) df_content ###Output _____no_output_____
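###Markdown A minimal sketch, assuming the standard Notion cursor fields (`has_more`, `next_cursor`, `start_cursor`) and reusing the names defined in the cells above (`get_id_from_url`, `TOKEN_API`, `PAGE_URL`, `_VERSION`): the block-children endpoint is paginated, so `fetch_raw_blocks` as written only returns the first page of `results`. Collecting every page could look roughly like this. ###Code
# Sketch: loop over pages of block children instead of keeping only the first page.
def fetch_all_blocks(token_api, page_url, page_size=100):
    page_id = get_id_from_url(page_url)
    url = f'https://api.notion.com/v1/blocks/{page_id}/children'
    headers = {'Authorization': f'Bearer {token_api}', 'Notion-Version': _VERSION}
    results, cursor = [], None
    while True:
        params = {'page_size': page_size}
        if cursor:
            params['start_cursor'] = cursor
        res = requests.get(url, headers=headers, params=params)
        res.raise_for_status()
        data = res.json()
        results.extend(data.get('results', []))
        if not data.get('has_more'):
            return results
        cursor = data.get('next_cursor')

# all_blocks = fetch_all_blocks(TOKEN_API, PAGE_URL)
###Output _____no_output_____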
notebooks/04-teachPCA.ipynb
###Markdown PCA This notebook was used to help me explain what PCA is, in a 1-1 teaching session I had with somebody. PCA is a coordinate transformation ###Code import numpy as np import pandas as pd from sklearn.decomposition import PCA m = pd.read_csv('male.csv', index_col=0) m pca = PCA() m_pca = pca.fit_transform(m) m_pca = pd.DataFrame(m_pca, columns = ['pca1', 'pca2', 'pca3', 'pca4', 'pca5', 'pca6'], index = m.index) m_pca pca.transform( np.array( [[2.4, 10, 1, 8, 1996, 2005]] ) ) pca.inverse_transform( np.array( [[-7.47693593, 8.75298503, 0.55257224, -1.3116692, -1.0573659, 0.52552363]] ) ) ###Output _____no_output_____ ###Markdown Predictions of original data using subset of components ###Code m_pca1 = m_pca.copy() m_pca1[['pca2', 'pca3', 'pca4', 'pca5', 'pca6']] = 0 m_pca1 pd.concat([m, pd.DataFrame(pca.inverse_transform(m_pca1), index=m.index)], axis=1) m_pca2 = m_pca.copy() m_pca2[['pca3', 'pca4', 'pca5', 'pca6']] = 0 m_pca2 pd.concat([m, pd.DataFrame(pca.inverse_transform(m_pca2), index=m.index)], axis=1) pca.explained_variance_ pca.explained_variance_ratio_ ###Output _____no_output_____ ###Markdown What is the PCA transformation explicitly? ###Code from numpy.linalg import inv pca_matrix = pca.components_ pca_inv_matrix = inv(pca_matrix) pca_matrix means = m.mean(axis = 0) means pca.inverse_transform( np.array( [[0,0,0,0,0,0]] ) ) means + np.dot(np.array([-7.47,0,0,0,0,0]), pca_matrix) pca.inverse_transform( np.array( [[-7.47,0,0,0,0,0]] ) ) ###Output _____no_output_____
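###Markdown A short numerical check of the question above, under the assumption `whiten=False`: scikit-learn's `transform` is just centering followed by projection onto `components_`, and `inverse_transform` is the reverse. Synthetic data stands in for `male.csv`, which is not included here. ###Code
# Verify: transform(X) == (X - mean_) @ components_.T
#         inverse_transform(Z) == Z @ components_ + mean_   (whiten=False)
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 6))   # 30 rows, 6 columns, like the 6 pca columns above
pca_chk = PCA().fit(X)

Z = pca_chk.transform(X)
print(np.allclose(Z, (X - pca_chk.mean_) @ pca_chk.components_.T))                          # True
print(np.allclose(pca_chk.inverse_transform(Z), Z @ pca_chk.components_ + pca_chk.mean_))   # True
###Output _____no_output_____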
tutorials/2_Upload_new_footage.ipynb
###Markdown KSO Tutorials 2: Upload new footageWritten by @jannesgg and @vykantonLast updated: Apr 26th, 2022 Set up and requirements Import Python packages ###Code %load_ext autoreload %autoreload 2 # Set the directory of the libraries import sys sys.path.append('..') # Set to display dataframes as interactive tables from itables import init_notebook_mode init_notebook_mode(all_interactive=True) # Import required modules import kso_utils.tutorials_utils as t_utils import kso_utils.t2_utils as t2 import kso_utils.project_utils as p_utils #import utils.server_utils as serv_utils print("Packages loaded successfully") ###Output _____no_output_____ ###Markdown Choose your project ###Code project_name = t_utils.choose_project() project = p_utils.find_project(project_name=project_name.value) ###Output _____no_output_____ ###Markdown Initiate sql and get server or local storage details ###Code db_info_dict = t_utils.initiate_db(project) ###Output Enter the key id for the aws server········ Enter the secret access key for the aws server········ ###Markdown Select the survey linked to the deployment ###Code survey_i = t2.select_survey(db_info_dict) survey_name = t2.confirm_survey(survey_i, db_info_dict) ###Output The details of the new survey are: AccessRights --> BUVType --> Ben's circular base rig ContractNumber --> ContractorName --> EncoderName --> Victor Ant FishMultiSpecies --> True IsLongTermMonitoring --> True LinkReport01 --> LinkReport02 --> LinkReport03 --> LinkReport04 --> LinkToContract --> LinkToFieldSheets --> LinkToMarineReserve --> Tapuae Marine Reserve LinkToOriginalData --> LinkToPicture --> OfficeName --> Ngāmotu / New Plymouth Office RightsHolder --> Department of Conservation, New Zealand Government SiteSelectionDesign --> Random StratifiedBy --> None SurveyLeaderName --> Victor SurveyName --> Example survey SurveyStartDate --> 2022-05-11 SurveyVerbatim --> Some notes UnitSelectionDesign --> Haphazard Vessel --> vessel1 OfficeContact --> Cameron Hunt SurveyLocation --> SLI Region --> Kermadec DateEntry --> 2022-05-17 SurveyType --> BUV SurveyID --> SLI_20220511_BUV Are the survey details above correct? ###Markdown Select new deployments To save time you can select multiple deployments **recorded on the same day** ###Code deployment_selected, survey_row, survey_server_name = t2.select_deployment( project = project, db_info_dict = db_info_dict, survey_i = survey_i ) ###Output _____no_output_____ ###Markdown Specify the date of the deployments ###Code deployment_date = t2.select_eventdate(survey_row) ###Output _____no_output_____ ###Markdown Check the database to avoid deployment duplicates ###Code deployment_filenames = t2.check_deployment( deployment_selected, deployment_date, survey_server_name, db_info_dict, survey_i ) ###Output There is no existing information in the database for ['SLI_NEW_001_11_05_2022.MP4', 'SLI_NEW_003_11_05_2022.MP4'] You can continue uploading the information. 
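###Markdown The duplicate check itself is done by `t2.check_deployment` against the project database. Purely as an illustration of the idea (the `movies` table, `filename` column and database path below are hypothetical, not the actual KSO schema), the same lookup against a local SQLite file would look roughly like this. ###Code
# Illustration only: table, column and db path names are assumptions.
import sqlite3
import pandas as pd

def already_registered(db_path, filenames):
    with sqlite3.connect(db_path) as conn:
        existing = set(pd.read_sql_query("SELECT filename FROM movies", conn)["filename"])
    return [f for f in filenames if f in existing]

# already_registered("koster_lab.db", deployment_filenames)  # hypothetical db path
###Output _____no_output_____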
###Markdown Update new deployment files ###Code movie_files_server = t2.update_new_deployments( deployment_filenames, db_info_dict, survey_server_name, deployment_date ) ###Output SLI_NEW_001 The files ['tapuae-buv-2022/SLI_NEW_001/SLI-001-Part 1.MP4', 'tapuae-buv-2022/SLI_NEW_001/SLI-001-Part 2.MP4', 'tapuae-buv-2022/SLI_NEW_001/SLI-001-Part 3.MP4'] will be concatenated ###Markdown Specify deployment details ###Code deployment_info = t2.record_deployment_info( deployment_filenames, db_info_dict ) ###Output _____no_output_____ ###Markdown Review deployment details ###Code new_deployment_row = t2.confirm_deployment_details( deployment_filenames, survey_server_name, db_info_dict, survey_i, deployment_info, movie_files_server, deployment_date ) ###Output _____no_output_____ ###Markdown !!!Only pass this point if deployment details are correct!!! Update movies csv and upload video to S3 ###Code t2.upload_concat_movie( db_info_dict, new_deployment_row ) #END ###Output _____no_output_____
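###Markdown For reference only, and not the `kso_utils` implementation: `t2.upload_concat_movie` wraps the actual upload logic. A generic S3 upload with boto3, using the AWS credentials entered when the database was initiated, would look roughly like this (the bucket and key names are placeholders). ###Code
# Generic boto3 upload sketch; bucket/key are placeholders, not project values.
import boto3

def upload_movie(local_path, bucket, key, aws_access_key_id, aws_secret_access_key):
    s3 = boto3.client(
        "s3",
        aws_access_key_id=aws_access_key_id,
        aws_secret_access_key=aws_secret_access_key,
    )
    s3.upload_file(Filename=local_path, Bucket=bucket, Key=key)

# upload_movie("SLI_NEW_001_11_05_2022.MP4", "my-buv-bucket",
#              "tapuae-buv-2022/SLI_NEW_001_11_05_2022.MP4", key_id, secret_key)
###Output _____no_output_____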
agreg/Robots.ipynb
###Markdown Table of Contents 1&nbsp;&nbsp;Texte d'oral de modélisation - Agrégation Option Informatique1.1&nbsp;&nbsp;Préparation à l'agrégation - ENS de Rennes, 2017-181.2&nbsp;&nbsp;À propos de ce document1.3&nbsp;&nbsp;Question de programmation1.3.1&nbsp;&nbsp;Plus ou moins de liberté dans le choix de modélisation ?1.4&nbsp;&nbsp;Réponse à l'exercice requis, première approche1.4.1&nbsp;&nbsp;Choix des structures de données1.4.2&nbsp;&nbsp;Fonction transition1.4.3&nbsp;&nbsp;Exemple1.5&nbsp;&nbsp;Une autre approche avec des chaînes de Markov1.5.1&nbsp;&nbsp;Échantillonage pondéré1.5.2&nbsp;&nbsp;Simuler une étape d'une chaîne de Markov ?1.5.3&nbsp;&nbsp;Modéliser nos robots avec des chaînes de Markov1.5.4&nbsp;&nbsp;Exemple 11.5.5&nbsp;&nbsp;Exemple 21.5.6&nbsp;&nbsp;Conclusion de cette approche par des chaînes de Markov1.6&nbsp;&nbsp;Conclusion Texte d'oral de modélisation - Agrégation Option Informatique Préparation à l'agrégation - ENS de Rennes, 2017-18- *Date* : 12 janvier 2018, démonstration d'un oral d'agrégation.- *Auteur* : [Lilian Besson](https://GitHub.com/Naereen/notebooks/)- *Texte*: Annale 2008, ["Robots"](http://agreg.org/Textes/pub2008-D1.pdf) À propos de ce document- Ceci est une *proposition* de correction, partielle et probablement non-optimale, pour la partie implémentation d'un [texte d'annale de l'agrégation de mathématiques, option informatique](http://Agreg.org/Textes/).- Ce document est un [notebook Jupyter](https://www.Jupyter.org/), et [est open-source sous Licence MIT sur GitHub](https://github.com/Naereen/notebooks/tree/master/agreg/), comme les autres solutions de textes de modélisation que [j](https://GitHub.com/Naereen)'ai écrite cette année.- L'implémentation sera faite en OCaml, version 4+ : ###Code print_endline Sys.ocaml_version;; Sys.command "ocaml -version";; let print = Printf.printf;; ###Output _____no_output_____ ###Markdown ---- Question de programmationLa question de programmation pour ce texte était donnée au milieu, en page 3 :> « On suppose donnés les tableaux $T_i$. Un état du système est représenté par un vecteur $U$ de longueur $n$, dont la $i$-ème composante contient la position du robot $R_i$ (sous forme du numéro $j$ du lieu $L_j$ où il se trouve). »> « Écrire une fonction/procédure/méthode transition qui transforme $U$ en un état suivant du système (on admettra que les données sont telles qu’il existe un état suivant).Simuler le système de robots pendant $n$ unités de temps. On prendra comme état initial du robot $R_i$ le premier élément du tableau $T_i$ . » Plus ou moins de liberté dans le choix de modélisation ?Vous remarquez que même avec une question bien précise comme celle-là, on dispose d'une relative liberté : la question n'impose pas le choix de modélisation !Elle pourrait être traitée avec un automate ou graphe produit non simplifié, ou un graphe produit plus réduit (1ère solution), mais on aurait tout aussi pu utiliser une approche probabiliste, avec des chaînes de Markov par exemple (2ème solution). ---- Réponse à l'exercice requis, première approche> Merci à Romain Dubourg (2017) pour son code.> Cette deuxième solution est bien plus concise que l'approche avec un graphe produit non simplifié, en utilisant une approche plus directe. Choix des structures de donnéesEn utilisant des tableaux, `int array`, au lieu de listes, pour représenter les états $u$ on peut modifier l'état *en place* ! 
###Code type etat = int array;; type liste_rdv = (int array) array;; ###Output _____no_output_____ ###Markdown Avec l'exemple du texte.![Premier exemple de robots](images/robots_exemple1.png) ###Code let ex1_1 : etat = [| 1; 1; 2 |];; let ex1 : liste_rdv = [| [| 1; 3 |]; [| 1; 2 |]; [| 2; 3 |] |];; ###Output _____no_output_____ ###Markdown On peut facilement trouver la première position de `x` dans une liste et dans un tableau et `-1` sinon. ###Code let trouve (x:int) (a:int list) : int = let rec aux (x:int) (a:int list) (i : int) : int = match a with | [] -> -1 | t :: _ when (t = x) -> i | _ :: q -> aux x q (i+1) in aux x a 0 ;; let trouve_array (x : int) (a : int array) : int = trouve x (Array.to_list a) ;; let _ = trouve_array 2 ex1_1;; (* 2 *) ###Output _____no_output_____ ###Markdown On a besoin de pouvoir obtenir la liste des paires de robots pouvant réaliser un rendez-vous. ###Code let rdv (u : etat) : ((int * int) list) = let n = Array.length u in let ls = ref [] in (* plus simple que d'imbriquer les List.filter et List.map... *) for k = 0 to n - 1 do let i = trouve_array u.(k) (Array.sub u (k + 1) (n - (k + 1))) in if i >= 0 then ls := (k, i + k + 1) :: !ls; done; !ls ;; ###Output _____no_output_____ ###Markdown Un rapide, pour visualiser le fonctionnement de la fonction `rdv`. ###Code let _ = rdv ex1_1 ;; let ex3_1 = [| 1; 2; 1; 4; 4 |];; let _ = rdv ex3_1 ;; ###Output _____no_output_____ ###Markdown Étant donné un état et une paire de robots pouvant réaliser un rendez-vous, la fonction suivante le réalise, en modifiant *en place* l'état $u$.C'est bien plus simple que de traiter avec une approche fonctionnelle.Pour une fonction comme ça, il faut absolument :- utiliser des variables intermédiaires,- et des noms de variables un peu explicites (attention aux `1`, `i`, `I` et `l` qui se ressemblent beaucoup une fois projetés au tableau !). ###Code let realise_rdv (u : etat) (lr : liste_rdv) (xy : int * int) : etat = let x, y = xy in let ux = u.(x) and uy = u.(y) in let rx, ry = lr.(x), lr.(y) in u.(x) <- rx.(((trouve_array ux rx) + 1) mod (Array.length rx)); u.(y) <- ry.(((trouve_array uy ry) + 1) mod (Array.length ry)); u ;; ###Output _____no_output_____ ###Markdown Un rapide, pour visualiser le fonctionnement de la fonction `realise_rdv`. ###Code let u = [| 0; 0; 1 |];; let l = [| [|0; 2|]; [|1; 2|]; [|2; 0|] |];; let _ = realise_rdv u l (0, 1) ;; ###Output _____no_output_____ ###Markdown On vérifie que l'état `u` a bien été modifié en place : ###Code u;; ###Output _____no_output_____ ###Markdown Fonction `transition`Et enfin, on calcule l'état suivant $u_1$ à partir de l'état $u_0$, en appliquant la fonction `realise_rdv` à chaque état qui peut être modifié. ###Code let transition (u0 : etat) (l0 : liste_rdv) : etat = List.iter (fun u -> ignore (realise_rdv u0 l0 u)) (rdv u0); u0 ;; ###Output _____no_output_____ ###Markdown On effectue `n` transitions successives, non pas avec une approche récursive (qui ne serait pas récursive terminale, et donc avec une mémoire de pile d'appel linéaire en $\mathcal{O}(n)$), mais avec une simple boucle `for`. 
###Code let rec n_transitions_trop_couteux (u : etat) (l : liste_rdv) (n : int) : etat = if (n = 0) then u else n_transitions_trop_couteux (transition u l) l (n-1) ;; let n_transitions (u : etat) (l : liste_rdv) (n : int) : etat = for _ = 1 to n do ignore (transition u l) (* u est changé en place *) done; u ;; ###Output _____no_output_____ ###Markdown ExempleAvec l'exemple du texte : ###Code let _ = ex1;; let _ = ex1_1;; let _ = transition ex1_1 ex1;; let _ = transition ex1_1 ex1;; let _ = transition ex1_1 ex1;; let _ = transition ex1_1 ex1;; ###Output _____no_output_____ ###Markdown Avec l'autre exemple donné à l'oral, qui est un peu différent de celui du texte.Quatre robots, $R_0$, $R_1$, $R_2$, $R_3$, ont comme liste de rendez-vous successifs, $T_0 = [0, 1, 2]$, $T_1 = [0]$, $T_2 = [1, 3]$ et $T_3 = [2, 3]$. ###Code let ex2 = [| [|0; 1; 2|]; [|0|]; [|1; 3|]; [|2; 3|] |];; let ex2_1 = [| 0; 0; 1; 2 |];; let _ = n_transitions ex2_1 ex2 3;; ###Output _____no_output_____ ###Markdown C'est trivial, mais il peut être utile de vérifier que `n_transitions 3` fait pareil que trois appels à `transition` : ###Code let ex2_1 = [| 0; 0; 1; 2 |];; (* il faut l'écrire, il a été modifié *) let _ = transition ex2_1 ex2;; let _ = transition ex2_1 ex2;; let _ = transition ex2_1 ex2;; let _ = transition ex2_1 ex2;; ###Output _____no_output_____ ###Markdown Avec un autre état initial : ###Code let ex2_2 = [| 0; 0; 3; 2 |];; let _ = transition ex2_2 ex2;; let _ = transition ex2_2 ex2;; (* On bloque !*) let _ = transition ex2_2 ex2;; (* On bloque !*) ###Output _____no_output_____ ###Markdown Et avec encore un autre état initial : ###Code let ex2_3 = [| 0; 0; 3; 3 |];; let _ = transition ex2_3 ex2;; let _ = transition ex2_3 ex2;; let _ = transition ex2_3 ex2;; (* On a un cycle de taille 3 *) let _ = transition ex2_3 ex2;; ###Output _____no_output_____ ###Markdown ---- Une autre approche avec des chaînes de MarkovVoici une troisième modélisation, avec des matrices et des [chaînes de Markov](https://fr.wikipedia.org/wiki/Cha%C3%AEne_de_Markov).L'idée de base vient de l'observation suivante : **ajouter de l'aléa dans les déplacements des robots devraient permettre de s'assurer (avec une certaine probabilité) que tous les rendez-vous sont bien effectués.** Échantillonage pondéréLe module [`Random`](http://caml.inria.fr/pub/docs/manual-ocaml/libref/Random.html) va être utile.En Python, on a `numpy.random.choice` pour faire cet échantillonage pondéré, pas en Caml, donc on va l'écrire manuellement. ###Code Random.init 0;; ###Output _____no_output_____ ###Markdown Etant donné une distribution discrète $\pi = (\pi_1,\dots,\pi_N)$ sur $\{1,\dots,N\}$, la fonction suivante permet de générer un indice $i$ tel que$$ \mathbb{P}(i = k) = \pi_k, \forall k \in \{1,\dots,N\}.$$ ###Code let weight_sampling (pi : float array) () = let p = Random.float 1. in let i = ref (-1) in let acc = ref 0. in while !acc < p do incr i; acc := (!acc) +. 
pi.(!i); done; !i ;; ###Output _____no_output_____ ###Markdown Par exemple, tirer 100 échantillons suivant la distribution $\pi = [0.5, 0.1, 0.4]$ devrait donner environ $50$ fois $0$, $10$ fois $1$ et $40$ fois $2$ : ###Code let compte (a : 'a array) (x : 'a) : int = Array.fold_left (fun i y -> if y = x then i + 1 else i) 0 a ;; let echantillons = Array.init 100 (fun _ -> weight_sampling [| 0.5; 0.1; 0.4 |] ()) ;; compte echantillons 0;; compte echantillons 1;; compte echantillons 2;; ###Output _____no_output_____ ###Markdown $46/100$, $13/100$ et $41/100$, c'est pas trop loin de $0.5, 0.1, 0.4$. Simuler une étape d'une chaîne de Markov ?On peut utiliser cette fonction pour suivre une transition, aléatoire, sur une chaîne de Markov. ###Code let markov_1 (a : float array array) (i : int) : int = let pi = a.(i) in weight_sampling pi () ;; ###Output _____no_output_____ ###Markdown Avec un petit exemple définit, on peut voir le résultat de $100$ transitions différentes depuis l'état $0$ : ###Code let a = [| [| 0.4; 0.3; 0.3 |]; [| 0.3; 0.4; 0.3 |]; [| 0.3; 0.3; 0.4 |] |] ;; print "\n";; for _ = 0 to 100 do print "%i" (markov_1 a 0); done;; flush_all ();; ###Output _____no_output_____ ###Markdown On peut suivre plusieurs transitions : ###Code let markov_n (a : float array array) (etat : int) (n : int) : int = let u = ref etat in for _ = 0 to n-1 do u := markov_1 a !u; done; !u ;; markov_n a 0 10;; ###Output _____no_output_____ ###Markdown Et pour plusieurs robots, c'est pareil : chaque robots a un état (`robots.(i)`) et une matrice de transition (`a.(i)`) : ###Code let markovs_n (a : float array array array) (robots : int array) (n : int) : int array = Array.mapi (fun i u -> markov_n a.(i) u n) robots ;; ###Output _____no_output_____ ###Markdown Si par exemple chaque état a la même matrice de transition : ###Code markovs_n [|a; a; a|] [|0; 1; 2|] 10;; ###Output _____no_output_____ ###Markdown Modéliser nos robots avec des chaînes de MarkovPlutôt que d'imposer à chaque robot un ordre fixe de ses rendez-vous, on va leur donner une probabilité uniforme d'aller, après un rendez-vous, à n'importe lequel de leur rendez-vous.- Cela demande de transformer la liste $T_1,\dots,T_n$ de rendez-vous en $n$ matrices de transition de chaînes de Markov, une par robot.- Et ensuite de simuler chaque chaîne de Markov, parant d'un état initial $T_i[0]$. ###Code (* Fonctions utiles *) Array.init;; Array.make_matrix;; Array.iter;; let mat_proba_depuis_rdv (ts : int array array) : (float array array) array = let n = Array.length ts in let a = Array.init n (fun _ -> Array.make_matrix n n 0.) in for i = 0 to n-1 do (* Pour le robot R_i, ses rendez-vous T_i sont ts.(i) *) let r = ts.(i) in let m = Array.length r in let p_i = 1. /. (float_of_int m) in (* Pour chaque rendez-vous L_j dans T_i, remplir a.(j).(k) par 1/m, et aussi a.(k).(j) par 1/m pour chaque autre k dans L_j *) for j = 0 to m-1 do for k = 0 to m-1 do a.(i).(r.(j)).(r.(k)) <- p_i; a.(i).(r.(k)).(r.(j)) <- p_i; done; done; done; a ;; ###Output _____no_output_____ ###Markdown Avec les fonctions précédentes, on peut faire évoluer le système. 
###Code let simule_markov_robots (ts : int array array) (etats : int array) (n : int) : int array = let a = mat_proba_depuis_rdv ts in markovs_n a etats n ;; ###Output _____no_output_____ ###Markdown Exemple 1On commence avec le premier exemple du texte, avec trois robots $R_1, R_2, R_3$, qui ont comme tableaux de rendez-vous $T_1 = [1, 3]$, $T_2 = [2, 1]$ et $T_3 = [3, 2]$.Ce système n'est pas bloqué, mais aucun rendez-vous n'est réalisé avec l'approche statique.L'approche probabiliste permettra, espérons, de résoudre ce problème.![Premier exemple de robots](images/robots_exemple1.png) ###Code let ex3 = [| [|0; 2|]; [|1; 0|]; [|2; 1|]|] ;; ###Output _____no_output_____ ###Markdown On peut écrire une fonction qui récupère l'état initial dans lequel se trouve chaque robot (le texte donnait comme convention d'utiliser le premier de chaque liste). ###Code let premier_etat (rdvs : int array array) : int array = Array.init (Array.length rdvs) (fun i -> rdvs.(i).(0)) ;; let ex3_1 = premier_etat ex3;; ###Output _____no_output_____ ###Markdown On vérifie la matrice de transition produite par `mat_proba_depuis_rdv` : ###Code mat_proba_depuis_rdv ex3;; ###Output _____no_output_____ ###Markdown On peut vérifier que ces chaînes de Markov représentent bien le comportement des robots, par exemple le premier robot $R_1$ a $T_1 = [1, 3]$, donc il alterne entre l'état $0$ et $2$, avec la matrice de transition$$ \mathbf{A}_i := \begin{bmatrix} 1/2 & 0 & 1/2 \\ 0 & 0 & 0 \\ 1/2 & 0 & 1/2\end{bmatrix}. $$ Et enfin on peut simuler le système, par exemple pour juste une étape, plusieurs fois (pour bien visualiser). ###Code simule_markov_robots ex3 ex3_1 0;; (* rien à faire ! *) simule_markov_robots ex3 ex3_1 1;; ###Output _____no_output_____ ###Markdown Pour mieux comprendre le fonctionnement, on va afficher les états intermédiaires. ###Code let print = Printf.printf;; let affiche_etat (etat : int array) = Array.iter (fun u -> print "%i " u) etat; print "\n"; flush_all (); ;; affiche_etat ex3_1;; let u = ref ex3_1 in for _ = 0 to 10 do affiche_etat !u; u := simule_markov_robots ex3 !u 1; done;; ###Output 0 1 2 2 1 2 0 1 1 0 0 2 0 1 1 0 0 1 0 1 1 0 1 1 2 1 1 2 0 1 0 1 1 ###Markdown On constate que des rendez-vous ont bien été effectués !Pas à chaque fois, mais presque.En tout cas, ça fonctionne mieux que l'approche naïve, peu importe l'état initial. ###Code let u = ref [| 0; 1; 1 |] in for _ = 0 to 10 do affiche_etat !u; u := simule_markov_robots ex3 !u 1; done;; ###Output 0 1 1 0 1 2 0 0 1 0 0 2 2 1 2 0 0 1 2 1 2 2 0 1 2 1 1 0 1 1 0 0 1 ###Markdown Sur 11 états, le rendez-vous 0 a été fait 1 fois, le 1 a été fait 4 fois, et le 2 a été fait 4 fois aussi. Soit 9 sur 11 étapes utiles ! Pas mal !> (ce texte n'est pas valide à chaque exécution à cause de l'aléa...) Exemple 2Puis le second exemple :![Premier exemple de robots](images/robots_exemple2.png) ###Code let ex4 = [| [|0; 3|]; [|0; 1|]; [|1; 2|]; [|2; 3|] |];; let ex4_1 = premier_etat ex4;; ###Output _____no_output_____ ###Markdown On vérifie la matrice de transition produite par `mat_proba_depuis_rdv` : ###Code mat_proba_depuis_rdv ex4;; ###Output _____no_output_____ ###Markdown Et enfin on peut simuler le système, par exemple pour juste une étape, plusieurs fois (pour bien visualiser). ###Code simule_markov_robots ex4 ex4_1 0;; (* rien à faire ! 
*) simule_markov_robots ex4 ex4_1 1;; let u = ref ex4_1 in for _ = 0 to 10 do affiche_etat !u; u := simule_markov_robots ex4 !u 1; done;; ###Output 0 0 1 2 0 0 1 2 0 0 1 3 3 0 1 3 0 0 1 3 0 1 1 2 0 1 2 3 3 0 2 3 0 1 2 3 0 0 2 2 3 0 1 2
1. Clean HighScores.ipynb
###Markdown Cleaning the HighscoresThere is a lot of cleaning that needs to be done before the highscores can be read into a pandas dataframe, not all of which can be done programmatically. I started by taking a copy of the HighScores.vcd file (may appear as a .dat file), opening it with Notepad (PSPad will open it in the hex editor) and saving it as HighScores.txt. Manually I delete everything down as far as the first occurance of ===, this looks something like: Assembly-CSharpXRL.Core.ScoreboardScores�System.Collections.Generic.List`1[[XRL.Core.ScoreEntry, Assembly-CSharp....There are a number of symbols which will prevent the file being read into ipython fully or will cause trouble when writing the data to file and these need to be deleted manually as well. These symbols are included in notes.txt, which should be opened in Notepad, and these can be removed by using Edit -> Replace. There is also a circle shape that needs to be removed, which can be hard to find. This is usually the symbol for the bits that make up an artifact, so it may be best to delete everything between a . If going through this code using your own highscores you may have to manually delete more lines or symbols.After all that the file can finally be read in! ###Code #This is what the HighScores file now looks like qud = open("HighScores.txt", "r") print qud.read() qud.close() ###Output === Game summary for &WGoethe II&y === Game ended Thursday, August 13, 2015 at 6:04:58 PM Goethe II died on the 20th of Uru Ux. from &yWahmahcalcalit&y's lase beam! (x2) Scored &C48753&y points Survived for 35235 turns. Visited 260 zones. Generated 1 storied items. Most advanced artifact in possession: HE Missile&y <   � = �=== Game summary for &WKant XVIII&y === Game ended Sunday, August 30, 2015 at 7:34:00 PM Kant XVIII died on the 27th of Tuum Ut. The &ychute crab&y hits &w(x1)&y for 2 damage with his &Ycrab claw ->&y7 &r&y1d2&y!&y [7] Scored &C40178&y points Survived for 37145 turns. Visited 222 zones. Generated 1 storied items. Most advanced artifact in possession: Fix-It spray foam x2&y =   LP > �=== Game summary for &WO`Brien III&y === Game ended Wednesday, September 02, 2015 at 3:50:10 AM O`Brien III died on the 6th of Tebet Ux. The &rbloody&y &MKumukokumu the Stylish, &Mlegendary &Mogre ape&y&y hits &M(x8)&y for 51 damage with his &Yape fist &c ->&y20 &r&y3d3&y!&y [10] Scored &C20556&y points Survived for 21114 turns. Visited 130 zones. Most advanced artifact in possession: force bracelet &b&y0 &K &y0 &y<&b&c&R&B&y> &y[&Kno cell&y]&y >   �B ? �=== Game summary for &WKant XII&y === Game ended Friday, August 28, 2015 at 11:14:48 PM Kant XII died on the 7th of Iyur Ut. &YPutus Templar warden&y hits &w(x1)&y for 3 damage with his &Bfolded carbide&y long sword ->&y9 &r&y2d5&y!&y [20] Scored &C17061&y points Survived for 17066 turns. Visited 118 zones. Most advanced artifact in possession: &Welectro&cbow &y<&g&c&G&G&y> ->&y10 &r&y1d6 &y[&Kno cell&y]&y ?   �@ @ �=== Game summary for &WNietzsche III&y === Game ended Wednesday, August 05, 2015 at 8:00:46 PM Nietzsche III died on the 19th of Tishru ii Ux. The &Yeyeless king crab&y hits &M(x6)&y for 20 damage with his &Ymassive king crab claw ->&y20 &r&y1d6&y!&y [11] Scored &C16607&y points Survived for 16124 turns. Visited 115 zones. Most advanced artifact in possession: &cu&gb&Ge&Wr&wn&co&Cs&Gt&gr&wu&Wm&y injector &y<&K&C&y>&y @   �& A � === Game summary for &WKant XI&y === Game ended Thursday, August 27, 2015 at 11:14:09 PM Kant XI died on the 12th of Uru Ux. from the scalding steam! 
Scored &C9971&y points Survived for 11105 turns. Visited 60 zones. Most advanced artifact in possession: Fix-It spray foam&y A   � B �=== Game summary for &WKant VIII&y === Game ended Thursday, August 27, 2015 at 8:01:17 PM Kant VIII died on the 23rd of Tishru i Ux. from the scalding steam! Scored &C7848&y points Survived for 7719 turns. Visited 35 zones. Most advanced artifact in possession: Fix-It spray foam&y B   -> C �=== Game summary for &WKant V&y === Game ended Saturday, August 22, 2015 at 6:22:36 PM Kant V died on the 28th of Tishru i Ux. Scored &C6662&y points Survived for 8118 turns. Visited 47 zones. Most advanced artifact in possession: semi-automatic pistol x2 &y<&b&G&G&B&C&y> ->&y8 &r&y1d6 &y[&KEmpty&y]&y C   _ D �=== Game summary for &WO`Brien II&y === Game ended Wednesday, September 02, 2015 at 12:53:32 AM O`Brien II died on the 23rd of Nivvun Ut. from &yDuhmahcaluhcal&y's lase beam! (x3) Scored &C5471&y points Survived for 6356 turns. Visited 42 zones. Most advanced artifact in possession: &cstun &cgas grenade mk I&y D   a E �=== Game summary for &WKant XVII&y === Game ended Saturday, August 29, 2015 at 1:44:21 AM Kant XVII died on the 27th of Tishru ii Ux. Scored &C3169&y points Survived for 7239 turns. Visited 37 zones. Most advanced artifact in possession: compass bracelet &b&y0 &K &y0&y E   D F �=== Game summary for &WKant&y === Game ended Friday, August 21, 2015 at 11:08:22 PM Kant died on the 12th of Tuum Ut. The giant amoeba&y hits &W(x2)&y for 4 damage with his &Ggiant pseudopod ->&y10 &r&y1d3&y!&y [15] Scored &C2372&y points Survived for 3013 turns. Visited 21 zones. F   9 G �=== Game summary for &WKant XIII&y === Game ended Saturday, August 29, 2015 at 12:05:24 AM Kant XIII died on the 5th of Tebet Ux. The girshling&y hits &w(x1)&y for 6 damage with his claw ->&y2 &r&y1d6&y!&y [12] Scored &C1849&y points Survived for 2853 turns. Visited 18 zones. Most advanced artifact in possession: pump shotgun ->&y8 &r&y1d2 &y[shotgun shell&y]&y G   � H �=== Game summary for &WKhrushchev VIII&y === Game ended Sunday, August 02, 2015 at 6:19:47 PM Khrushchev VIII died on the 5th of Tuum Ut. from bleeding! Scored &C1760&y points Survived for 4198 turns. Visited 25 zones. H   � I �=== Game summary for &WNietzsche&y === Game ended Tuesday, August 04, 2015 at 8:18:00 PM Nietzsche died on the 17th of Tuum Ut. The &rbloody&y horned chameleon&y hits &w(x1)&y for 4 damage with his Tusks ->&y4 &r&y2d3 &b&y0 &K &y0&y!&y [7] Scored &C1503&y points Survived for 3719 turns. Visited 27 zones. I   q J �=== Game summary for &WKant IV&y === Game ended Saturday, August 22, 2015 at 12:00:38 AM Kant IV died on the 28th of Iyur Ut. The snapjaw scavenger&y hits &w(x1)&y for 6 damage with his &Ysteel&y battle axe ->&y3 &r&y1d6&y!&y [18] Scored &C1393&y points Survived for 3347 turns. Visited 16 zones. Most advanced artifact in possession: &Gacid &cgas grenade mk I&y J   p K �=== Game summary for &WKhrushchev XI &y === Game ended Monday, August 03, 2015 at 3:18:32 PM Khrushchev XI died on the 20th of Kisu Ux. Scored &C1136&y points Survived for 4121 turns. Visited 21 zones. Most advanced artifact in possession: &rbl&Ra&Wz&Ye&y injector&y K   � L �=== Game summary for &WLenin&y === Game ended Saturday, August 01, 2015 at 4:08:30 PM Lenin died on the 13th of Iyur Ut. The equimax&y hits &r(x3)&y for 9 damage with his bite ->&y8 &r&y2d2&y!&y [17] Scored &C902&y points Survived for 1019 turns. Visited 12 zones. 
L   � M �=== Game summary for &WKhrushchev&y === Game ended Sunday, August 02, 2015 at 3:52:52 PM Khrushchev died on the 10th of Tishru ii Ux. The &rbloody&y &MGroubuubu-wof-wofuz, the stalwart Snapjaw Tot-eater&y hits &W(x2)&y for 11 damage with his &bcarbide&y battle axe ->&y5 &r&y2d3&y!&y [22] Scored &C708&y points Survived for 2789 turns. Visited 21 zones. M    N �=== Game summary for &WKant X&y === Game ended Thursday, August 27, 2015 at 9:17:34 PM Kant X died on the 8th of Kisu Ux. The cave spider&y hits &w(x1)&y for 2 damage with his fangs ->&y2 &r&y1d2&y!&y [19] Scored &C265&y points Survived for 2287 turns. Visited 16 zones. N   � O �=== Game summary for &WO`Brien IV&y === Game ended Wednesday, September 02, 2015 at 12:47:21 PM O`Brien IV died on the Ides of Uru Ux. The salthopper&y hits &r(x3)&y for 7 damage with his &Grending mandibles ->&y11 &r&y1d4&y!&y [15] Scored &C199&y points Survived for 1810 turns. Visited 14 zones. Most advanced artifact in possession: semi-automatic pistol ->&y8 &r&y1d6 &y[&KEmpty&y]&y O   P �=== Game summary for &WKhrushchev II&y === Game ended Sunday, August 02, 2015 at 4:13:32 PM Khrushchev II died on the 17th of Uru Ux. from bleeding! Scored &C13&y points Survived for 1640 turns. Visited 9 zones. P ->  ����Q �=== Game summary for &WStalin&y === Game ended Saturday, August 01, 2015 at 3:28:05 PM Stalin died on the 27th of Uru Ux. The salthopper&y hits &r(x3)&y for 9 damage with his &Grending mandibles ->&y11 &r&y1d4&y!&y [16] Scored &C-71&y points Survived for 3412 turns. Visited 30 zones. Q   ����R �=== Game summary for &WKant IX&y === Game ended Thursday, August 27, 2015 at 8:51:15 PM Kant IX died on the 2nd of Iyur Ut. The &gjilted lover&y hits &w(x1)&y for 2 damage with his thorns ->&y5 &r&y1d4&y!&y [16] Scored &C-80&y points Survived for 1807 turns. Visited 13 zones. Most advanced artifact in possession: &Gacid &cgas grenade mk I x2&y R   ����S �=== Game summary for &WKhrushchev VI&y === Game ended Sunday, August 02, 2015 at 5:12:49 PM Khrushchev VI died on the 11th of Tuum Ut. from fire ant&y's flames! Scored &C-126&y points Survived for 1705 turns. Visited 21 zones. S   ���T �=== Game summary for &WStalin&y === Game ended Saturday, August 01, 2015 at 3:41:26 PM Stalin died on the 1st of Tishru i Ux. from the explosion! Scored &C-224&y points Survived for 1061 turns. Visited 8 zones. T   ���U �=== Game summary for &WKant VII&y === Game ended Thursday, August 27, 2015 at 4:50:50 PM Kant VII died on the 22nd of Uulu Ut. Scored &C-243&y points Survived for 770 turns. Visited 8 zones. U   ����V �=== Game summary for &WKant XIV&y === Game ended Saturday, August 29, 2015 at 12:23:09 AM Kant XIV died on the 29th of Tuum Ut. from bleeding! Scored &C-367&y points Survived for 2230 turns. Visited 14 zones. V   N���W �=== Game summary for &WKant XV&y === Game ended Saturday, August 29, 2015 at 12:42:44 AM Kant XV died on the 24th of Nivvun Ut. Scored &C-434&y points Survived for 1332 turns. Visited 10 zones. W !  ����X �=== Game summary for &WKhrushchev&y === Game ended Sunday, August 02, 2015 at 2:13:37 PM Khrushchev died on the 13th of Uru Ux. from bleeding! Scored &C-531&y points Survived for 1863 turns. Visited 17 zones. X "  ����Y �=== Game summary for &WNapolen III&y === Game ended Monday, August 03, 2015 at 4:20:16 PM Napolen III died on the 20th of Shwut Ux. from the fire started by dawnglider&y! Scored &C-543&y points Survived for 1773 turns. Visited 16 zones. 
Y #  ����Z �=== Game summary for &WNapoleon&y === Game ended Monday, August 03, 2015 at 3:41:37 PM Napoleon died on the 19th of Uulu Ut. from bleeding! Scored &C-547&y points Survived for 1536 turns. Visited 11 zones. Z $  ����[ �=== Game summary for &WNapoleon II&y === Game ended Monday, August 03, 2015 at 3:57:39 PM Napoleon II died on the 22nd of Tishru i Ux. from bleeding! Scored &C-590&y points Survived for 1298 turns. Visited 11 zones. [ %  ����\ �=== Game summary for &WO`Brien V&y === Game ended Wednesday, September 02, 2015 at 12:58:51 PM O`Brien V died on the 17th of Nivvun Ut. Scored &C-618&y points Survived for 1538 turns. Visited 14 zones. \ &  q���] �=== Game summary for &WKant VI&y === Game ended Thursday, August 27, 2015 at 4:36:27 PM Kant VI died on the 14th of Shwut Ux. from &Yyoung ivory&y's impalement. Scored &C-911&y points Survived for 613 turns. Visited 7 zones. ] '  j���^ �=== Game summary for &WKhrushchev V&y === Game ended Sunday, August 02, 2015 at 4:45:07 PM Khrushchev V died on the 19th of Uulu Ut. The &rbloody&y salthopper&y hits &R(x4)&y for 10 damage with his &Grending mandibles ->&y10 &r&y1d4&y!&y [8] Scored &C-918&y points Survived for 510 turns. Visited 6 zones. ^ (  X���_ �=== Game summary for &WKhrushchev IV&y === Game ended Sunday, August 02, 2015 at 4:35:26 PM Khrushchev IV died on the 21st of Nivvun Ut. The snapjaw scavenger&y hits &w(x1)&y for 1 damage with his iron dagger ->&y2 &r&y1d4&y!&y [8] Scored &C-936&y points Survived for 1134 turns. Visited 12 zones. _ )  &���` �=== Game summary for &WKant II&y === Game ended Friday, August 21, 2015 at 11:23:56 PM Kant II died on the 12th of Shwut Ux. The &cscrap shoveler&y hits &M(x5)&y for 8 damage with his &cscrap shovel ->&y15 &r&y1d2&y!&y [5] Scored &C-986&y points Survived for 720 turns. Visited 6 zones. ` *  ���a �=== Game summary for &WMalenkov&y === Game ended Saturday, August 01, 2015 at 4:35:17 PM Malenkov died on the 14th of Tuum Ut. from snapjaw scavenger&y's explosion! Scored &C-1004&y points Survived for 911 turns. Visited 12 zones. a +  ����b �=== Game summary for &WKant XVI&y === Game ended Saturday, August 29, 2015 at 12:46:03 AM Kant XVI died on the 3rd of Uulu Ut. The &rbloody&y equimax&y hits &W(x2)&y for 5 damage with his bite ->&y9 &r&y2d2&y!&y [23] Scored &C-1120&y points Survived for 401 turns. Visited 5 zones. b ,  ����c �=== Game summary for &WKhrushchev VII&y === Game ended Sunday, August 02, 2015 at 5:17:56 PM Khrushchev VII died on the 27th of Uru Ux. The &rbloody&y &Rsalamander&y hits &w(x1)&y for 3 damage with his bite ->&y3 &r&y1d3&y!&y [8] Scored &C-1127&y points Survived for 403 turns. Visited 5 zones. c -  ����d �=== Game summary for &WStalin&y === Game ended Saturday, August 01, 2015 at 2:04:38 PM Stalin died on the 20th of Nivvun Ut. &yUmchuum&y hits &W(x2)&y for 4 damage with his &yUmumerchacal&y!&y [9] Scored &C-1131&y points Survived for 362 turns. Visited 3 zones. Most advanced artifact in possession: &gpoison &cgas grenade mk I&y d .  ����e �=== Game summary for &WKhrushchev IX&y === Game ended Monday, August 03, 2015 at 1:56:40 PM Khrushchev IX died on the 18th of Tishru i Ux. from bleeding! Scored &C-1143&y points Survived for 336 turns. Visited 4 zones. e /  }���f �=== Game summary for &WMalenkov&y === Game ended Saturday, August 01, 2015 at 4:18:54 PM Malenkov died on the 13th of Tishru i Ux. 
&MRuf-ohoubub, the stalwart Snapjaw Bear-baiter&y hits &W(x2)&y for 10 damage with his &wbronze&y two-handed sword ->&y4 &r&y1d8&y!&y [10] Scored &C-1155&y points Survived for 687 turns. Visited 8 zones. f 0  m���g �=== Game summary for &WNapoleon IV&y === Game ended Monday, August 03, 2015 at 4:28:33 PM Napoleon IV died on the 25th of Uulu Ut. The salthopper&y hits &W(x2)&y for 5 damage with his &Grending mandibles ->&y10 &r&y1d4&y!&y [11] Scored &C-1171&y points Survived for 275 turns. Visited 4 zones. g 1  j���h �=== Game summary for &WO'Brien&y === Game ended Wednesday, September 02, 2015 at 12:03:29 AM O'Brien died on the 9th of Nivvun Ut. Scored &C-1174&y points Survived for 287 turns. Visited 5 zones. h 2  i���i �=== Game summary for &WNapoleon V&y === Game ended Monday, August 03, 2015 at 4:31:59 PM Napoleon V died on the 22nd of Ubu Ut. from bleeding! Scored &C-1175&y points Survived for 320 turns. Visited 3 zones. i 3  G���j �=== Game summary for &WKhrushchev X&y === Game ended Monday, August 03, 2015 at 1:59:50 PM Khrushchev X died on the 9th of Tishru ii Ux. Scored &C-1209&y points Survived for 287 turns. Visited 4 zones. j 4  B���k �=== Game summary for &WKant II&y === Game ended Friday, August 21, 2015 at 11:14:45 PM Kant II died on the 28th of Ubu Ut. The snapjaw hunter&y hits &r(x3)&y for 16 damage with his &wbronze&y two-handed sword ->&y4 &r&y1d8&y!&y [13] Scored &C-1214&y points Survived for 145 turns. Visited 4 zones. k 5  "���l �=== Game summary for &WNietzsche II&y === Game ended Tuesday, August 04, 2015 at 8:25:57 PM Nietzsche II died on the 9th of Simmun Ut. The &rbloody&y &gjilted lover&y hits &W(x2)&y for 3 damage with his thorns ->&y5 &r&y1d4&y!&y [19] Scored &C-1246&y points Survived for 136 turns. Visited 3 zones. l 6  ���m �=== Game summary for &WKant III&y === Game ended Friday, August 21, 2015 at 11:25:01 PM Kant III died on the 13th of Kisu Ux. from bleeding! Scored &C-1252&y points Survived for 105 turns. Visited 3 zones. m 7  ���n �=== Game summary for &WGoethe &y === Game ended Sunday, August 09, 2015 at 7:43:13 PM Goethe died on the 22nd of Tuum Ut. The &rbloody&y boar&y hits &W(x2)&y for 6 damage with his bite ->&y7 &r&y1d3&y!&y [12] Scored &C-1253&y points Survived for 121 turns. Visited 3 zones. n 8  ����o �=== Game summary for &WMalenkov&y === Game ended Sunday, August 02, 2015 at 1:34:01 PM Malenkov died on the 9th of Tishru ii Ux. from &ctraipsing mortar&y's explosion! Scored &C-1318&y points Survived for 130 turns. Visited 5 zones. o 9  ����p �=== Game summary for &WKhrushchev III&y === Game ended Sunday, August 02, 2015 at 4:19:46 PM Khrushchev III died on the 1st of Nivvun Ut. from the scalding steam! Scored &C-1351&y points Survived for 324 turns. Visited 4 zones. p :  U���q �=== Game summary for &W &y === Game ended Friday, August 21, 2015 at 11:25:56 PM died on the 8th of Nivvun Ut. from Warden Ualraig&y's Freezes! Scored &C-1451&y points Survived for 19 turns. Visited 1 zone. q ;  ����r �=== Game summary for &WNapoleon&y === Game ended Monday, August 03, 2015 at 3:23:01 PM Napoleon died on the 18th of Tuum Ut. Abandoned all hope. Scored &C-1588&y points Survived for 95 turns. Visited 1 zone. r ###Markdown There is still a number of symbols that can not be read. I have found that the following two blocks work in getting rid of these. If anyone has any better solution please suggest it. 
###Code import codecs qud = codecs.open("HighScores.txt", encoding='latin-1') #open and encode as latin-1 clean_qud = open("HighScores_clean.txt", "w") #open file to save to for line in qud: line = line.encode('utf8') line = line.decode('unicode_escape').encode('ascii','ignore') clean_qud.write(line) clean_qud.close() ###Output _____no_output_____ ###Markdown If you open both HighScores.txt and HighScores_clean.txt you will see that a number of the unreadable symbols have been removed and that the clean text is now much more readable.Next, the real cleaning begins. This will be done in two major steps. First I will completely remove all unreadable text and save this as a human readable file. Then I will use this cleaned text to create a file which will fill in missing values and add seperators between each "column" so it can be read into pandas.For the first step a list of tags will have to be removed such as &y, &r etc. These tags seem to determine the color of the next word or symbol on the highscores screen and all need to be removed. Also, sometimes a highscore does not contain a description of how the character died. We need to be able to determine between a blank line where this description should be and a blank line which occurs between highscores. ###Code import re #list of text to remove. remove_list = ["&W", "&y", "&w", "&r", "&M", "&Y", "&C", "&c", "&b", "&B", "&K", "&R", "&W", "&G", "&g", "\r", "\n", "\t"] cleaned_qud = open("Cleaned_Qud_HighScores.txt", "w") #file to write to clean_highscores = open("HighScores_clean.txt", "r") #flag which will be used to determine if there is a blank line in the data instead of a line describing how the character died. #this represents if we have reached a line that says "Visited x Zones" which is always present and always occurs after the character death description visited = False #flag which will be used to determine if this is the first line in the file first_line = True for line in clean_highscores.readlines(): line = line.replace("`", "'").replace("\n", " ").strip() #some lines have a different ' which was causing havoc! Remove all linebreaks, strip away all whitespace for remove_word in remove_list: line = line.replace(remove_word, "") #go down through all words in the remove list and replace them with "" if "===" in line: #check if this is the first line of a highscore (===Game summary for ) visited = False #set the visited variable to false qud_search = re.search("=== ((\w*\'*\w*\s*)*) ===", line) #pull out everything between === and === if first_line == True: line = str(qud_search.group(1)) first_line = False else: line = "\n" + str(qud_search.group(1)) #if this is the first line in the file write as is, otherwise put a \n at the start. Prevents a blank line at the start of the file if len(line) == 0 and visited == False: #if a line is blank and we haven't hit the end of the highscore this is where a death description should be line = "blank" if "Visited" in line: visited = True #set to true to indicate we have passed the death description. Blank lines after this will be striped out #Even after all the cleaning some unwanted symbols were still getting through. The following line works, but is messy. But works. Did I mention it works?...well, it works so far... #If we have passed the death description (visited == True) any file striped of ALL spaces, even those between words, that is less than 10 letters can be assumed to be trash that has made it through the cleaning process. Delete. 
if visited == True: if len(line.replace(" ", "")) < 10: #continue print line #print out the line cleaned_qud.write(line + "\n") #write the line cleaned_qud.close() ###Output Game summary for Goethe II Game ended Thursday, August 13, 2015 at 6:04:58 PM Goethe II died on the 20th of Uru Ux. from Wahmahcalcalit's lase beam! (x2) Scored 48753 points Survived for 35235 turns. Visited 260 zones. Generated 1 storied items. Most advanced artifact in possession: HE Missile Game summary for Kant XVIII Game ended Sunday, August 30, 2015 at 7:34:00 PM Kant XVIII died on the 27th of Tuum Ut. The chute crab hits (x1) for 2 damage with his crab claw ->7 1d2! [7] Scored 40178 points Survived for 37145 turns. Visited 222 zones. Generated 1 storied items. Most advanced artifact in possession: Fix-It spray foam x2 Game summary for O'Brien III Game ended Wednesday, September 02, 2015 at 3:50:10 AM O'Brien III died on the 6th of Tebet Ux. The bloody Kumukokumu the Stylish, legendary ogre ape hits (x8) for 51 damage with his ape fist ->20 3d3! [10] Scored 20556 points Survived for 21114 turns. Visited 130 zones. Most advanced artifact in possession: force bracelet 0 0 <> [no cell] Game summary for Kant XII Game ended Friday, August 28, 2015 at 11:14:48 PM Kant XII died on the 7th of Iyur Ut. Putus Templar warden hits (x1) for 3 damage with his folded carbide long sword ->9 2d5! [20] Scored 17061 points Survived for 17066 turns. Visited 118 zones. Most advanced artifact in possession: electrobow <> ->10 1d6 [no cell] Game summary for Nietzsche III Game ended Wednesday, August 05, 2015 at 8:00:46 PM Nietzsche III died on the 19th of Tishru ii Ux. The eyeless king crab hits (x6) for 20 damage with his massive king crab claw ->20 1d6! [11] Scored 16607 points Survived for 16124 turns. Visited 115 zones. Most advanced artifact in possession: ubernostrum injector <> Game summary for Kant XI Game ended Thursday, August 27, 2015 at 11:14:09 PM Kant XI died on the 12th of Uru Ux. from the scalding steam! Scored 9971 points Survived for 11105 turns. Visited 60 zones. Most advanced artifact in possession: Fix-It spray foam Game summary for Kant VIII Game ended Thursday, August 27, 2015 at 8:01:17 PM Kant VIII died on the 23rd of Tishru i Ux. from the scalding steam! Scored 7848 points Survived for 7719 turns. Visited 35 zones. Most advanced artifact in possession: Fix-It spray foam Game summary for Kant V Game ended Saturday, August 22, 2015 at 6:22:36 PM Kant V died on the 28th of Tishru i Ux. blank Scored 6662 points Survived for 8118 turns. Visited 47 zones. Most advanced artifact in possession: semi-automatic pistol x2 <> ->8 1d6 [Empty] Game summary for O'Brien II Game ended Wednesday, September 02, 2015 at 12:53:32 AM O'Brien II died on the 23rd of Nivvun Ut. from Duhmahcaluhcal's lase beam! (x3) Scored 5471 points Survived for 6356 turns. Visited 42 zones. Most advanced artifact in possession: stun gas grenade mk I Game summary for Kant XVII Game ended Saturday, August 29, 2015 at 1:44:21 AM Kant XVII died on the 27th of Tishru ii Ux. blank Scored 3169 points Survived for 7239 turns. Visited 37 zones. Most advanced artifact in possession: compass bracelet 0 0 Game summary for Kant Game ended Friday, August 21, 2015 at 11:08:22 PM Kant died on the 12th of Tuum Ut. The giant amoeba hits (x2) for 4 damage with his giant pseudopod ->10 1d3! [15] Scored 2372 points Survived for 3013 turns. Visited 21 zones. 
Game summary for Kant XIII Game ended Saturday, August 29, 2015 at 12:05:24 AM Kant XIII died on the 5th of Tebet Ux. The girshling hits (x1) for 6 damage with his claw ->2 1d6! [12] Scored 1849 points Survived for 2853 turns. Visited 18 zones. Most advanced artifact in possession: pump shotgun ->8 1d2 [shotgun shell] Game summary for Khrushchev VIII Game ended Sunday, August 02, 2015 at 6:19:47 PM Khrushchev VIII died on the 5th of Tuum Ut. from bleeding! Scored 1760 points Survived for 4198 turns. Visited 25 zones. Game summary for Nietzsche Game ended Tuesday, August 04, 2015 at 8:18:00 PM Nietzsche died on the 17th of Tuum Ut. The bloody horned chameleon hits (x1) for 4 damage with his Tusks ->4 2d3 0 0! [7] Scored 1503 points Survived for 3719 turns. Visited 27 zones. Game summary for Kant IV Game ended Saturday, August 22, 2015 at 12:00:38 AM Kant IV died on the 28th of Iyur Ut. The snapjaw scavenger hits (x1) for 6 damage with his steel battle axe ->3 1d6! [18] Scored 1393 points Survived for 3347 turns. Visited 16 zones. Most advanced artifact in possession: acid gas grenade mk I Game summary for Khrushchev XI Game ended Monday, August 03, 2015 at 3:18:32 PM Khrushchev XI died on the 20th of Kisu Ux. blank Scored 1136 points Survived for 4121 turns. Visited 21 zones. Most advanced artifact in possession: blaze injector Game summary for Lenin Game ended Saturday, August 01, 2015 at 4:08:30 PM Lenin died on the 13th of Iyur Ut. The equimax hits (x3) for 9 damage with his bite ->8 2d2! [17] Scored 902 points Survived for 1019 turns. Visited 12 zones. Game summary for Khrushchev Game ended Sunday, August 02, 2015 at 3:52:52 PM Khrushchev died on the 10th of Tishru ii Ux. The bloody Groubuubu-wof-wofuz, the stalwart Snapjaw Tot-eater hits (x2) for 11 damage with his carbide battle axe ->5 2d3! [22] Scored 708 points Survived for 2789 turns. Visited 21 zones. Game summary for Kant X Game ended Thursday, August 27, 2015 at 9:17:34 PM Kant X died on the 8th of Kisu Ux. The cave spider hits (x1) for 2 damage with his fangs ->2 1d2! [19] Scored 265 points Survived for 2287 turns. Visited 16 zones. Game summary for O'Brien IV Game ended Wednesday, September 02, 2015 at 12:47:21 PM O'Brien IV died on the Ides of Uru Ux. The salthopper hits (x3) for 7 damage with his rending mandibles ->11 1d4! [15] Scored 199 points Survived for 1810 turns. Visited 14 zones. Most advanced artifact in possession: semi-automatic pistol ->8 1d6 [Empty] Game summary for Khrushchev II Game ended Sunday, August 02, 2015 at 4:13:32 PM Khrushchev II died on the 17th of Uru Ux. from bleeding! Scored 13 points Survived for 1640 turns. Visited 9 zones. Game summary for Stalin Game ended Saturday, August 01, 2015 at 3:28:05 PM Stalin died on the 27th of Uru Ux. The salthopper hits (x3) for 9 damage with his rending mandibles ->11 1d4! [16] Scored -71 points Survived for 3412 turns. Visited 30 zones. Game summary for Kant IX Game ended Thursday, August 27, 2015 at 8:51:15 PM Kant IX died on the 2nd of Iyur Ut. The jilted lover hits (x1) for 2 damage with his thorns ->5 1d4! [16] Scored -80 points Survived for 1807 turns. Visited 13 zones. Most advanced artifact in possession: acid gas grenade mk I x2 Game summary for Khrushchev VI Game ended Sunday, August 02, 2015 at 5:12:49 PM Khrushchev VI died on the 11th of Tuum Ut. from fire ant's flames! Scored -126 points Survived for 1705 turns. Visited 21 zones. Game summary for Stalin Game ended Saturday, August 01, 2015 at 3:41:26 PM Stalin died on the 1st of Tishru i Ux. 
from the explosion! Scored -224 points Survived for 1061 turns. Visited 8 zones. Game summary for Kant VII Game ended Thursday, August 27, 2015 at 4:50:50 PM Kant VII died on the 22nd of Uulu Ut. blank Scored -243 points Survived for 770 turns. Visited 8 zones. Game summary for Kant XIV Game ended Saturday, August 29, 2015 at 12:23:09 AM Kant XIV died on the 29th of Tuum Ut. from bleeding! Scored -367 points Survived for 2230 turns. Visited 14 zones. Game summary for Kant XV Game ended Saturday, August 29, 2015 at 12:42:44 AM Kant XV died on the 24th of Nivvun Ut. blank Scored -434 points Survived for 1332 turns. Visited 10 zones. Game summary for Khrushchev Game ended Sunday, August 02, 2015 at 2:13:37 PM Khrushchev died on the 13th of Uru Ux. from bleeding! Scored -531 points Survived for 1863 turns. Visited 17 zones. Game summary for Napolen III Game ended Monday, August 03, 2015 at 4:20:16 PM Napolen III died on the 20th of Shwut Ux. from the fire started by dawnglider! Scored -543 points Survived for 1773 turns. Visited 16 zones. Game summary for Napoleon Game ended Monday, August 03, 2015 at 3:41:37 PM Napoleon died on the 19th of Uulu Ut. from bleeding! Scored -547 points Survived for 1536 turns. Visited 11 zones. Game summary for Napoleon II Game ended Monday, August 03, 2015 at 3:57:39 PM Napoleon II died on the 22nd of Tishru i Ux. from bleeding! Scored -590 points Survived for 1298 turns. Visited 11 zones. Game summary for O'Brien V Game ended Wednesday, September 02, 2015 at 12:58:51 PM O'Brien V died on the 17th of Nivvun Ut. blank Scored -618 points Survived for 1538 turns. Visited 14 zones. Game summary for Kant VI Game ended Thursday, August 27, 2015 at 4:36:27 PM Kant VI died on the 14th of Shwut Ux. from young ivory's impalement. Scored -911 points Survived for 613 turns. Visited 7 zones. Game summary for Khrushchev V Game ended Sunday, August 02, 2015 at 4:45:07 PM Khrushchev V died on the 19th of Uulu Ut. The bloody salthopper hits (x4) for 10 damage with his rending mandibles ->10 1d4! [8] Scored -918 points Survived for 510 turns. Visited 6 zones. Game summary for Khrushchev IV Game ended Sunday, August 02, 2015 at 4:35:26 PM Khrushchev IV died on the 21st of Nivvun Ut. The snapjaw scavenger hits (x1) for 1 damage with his iron dagger ->2 1d4! [8] Scored -936 points Survived for 1134 turns. Visited 12 zones. Game summary for Kant II Game ended Friday, August 21, 2015 at 11:23:56 PM Kant II died on the 12th of Shwut Ux. The scrap shoveler hits (x5) for 8 damage with his scrap shovel ->15 1d2! [5] Scored -986 points Survived for 720 turns. Visited 6 zones. Game summary for Malenkov Game ended Saturday, August 01, 2015 at 4:35:17 PM Malenkov died on the 14th of Tuum Ut. from snapjaw scavenger's explosion! Scored -1004 points Survived for 911 turns. Visited 12 zones. Game summary for Kant XVI Game ended Saturday, August 29, 2015 at 12:46:03 AM Kant XVI died on the 3rd of Uulu Ut. The bloody equimax hits (x2) for 5 damage with his bite ->9 2d2! [23] Scored -1120 points Survived for 401 turns. Visited 5 zones. Game summary for Khrushchev VII Game ended Sunday, August 02, 2015 at 5:17:56 PM Khrushchev VII died on the 27th of Uru Ux. The bloody salamander hits (x1) for 3 damage with his bite ->3 1d3! [8] Scored -1127 points Survived for 403 turns. Visited 5 zones. Game summary for Stalin Game ended Saturday, August 01, 2015 at 2:04:38 PM Stalin died on the 20th of Nivvun Ut. Umchuum hits (x2) for 4 damage with his Umumerchacal! [9] Scored -1131 points Survived for 362 turns. 
Visited 3 zones. Most advanced artifact in possession: poison gas grenade mk I Game summary for Khrushchev IX Game ended Monday, August 03, 2015 at 1:56:40 PM Khrushchev IX died on the 18th of Tishru i Ux. from bleeding! Scored -1143 points Survived for 336 turns. Visited 4 zones. Game summary for Malenkov Game ended Saturday, August 01, 2015 at 4:18:54 PM Malenkov died on the 13th of Tishru i Ux. Ruf-ohoubub, the stalwart Snapjaw Bear-baiter hits (x2) for 10 damage with his bronze two-handed sword ->4 1d8! [10] Scored -1155 points Survived for 687 turns. Visited 8 zones. Game summary for Napoleon IV Game ended Monday, August 03, 2015 at 4:28:33 PM Napoleon IV died on the 25th of Uulu Ut. The salthopper hits (x2) for 5 damage with his rending mandibles ->10 1d4! [11] Scored -1171 points Survived for 275 turns. Visited 4 zones. Game summary for O'Brien Game ended Wednesday, September 02, 2015 at 12:03:29 AM O'Brien died on the 9th of Nivvun Ut. blank Scored -1174 points Survived for 287 turns. Visited 5 zones. Game summary for Napoleon V Game ended Monday, August 03, 2015 at 4:31:59 PM Napoleon V died on the 22nd of Ubu Ut. from bleeding! Scored -1175 points Survived for 320 turns. Visited 3 zones. Game summary for Khrushchev X Game ended Monday, August 03, 2015 at 1:59:50 PM Khrushchev X died on the 9th of Tishru ii Ux. blank Scored -1209 points Survived for 287 turns. Visited 4 zones. Game summary for Kant II Game ended Friday, August 21, 2015 at 11:14:45 PM Kant II died on the 28th of Ubu Ut. The snapjaw hunter hits (x3) for 16 damage with his bronze two-handed sword ->4 1d8! [13] Scored -1214 points Survived for 145 turns. Visited 4 zones. Game summary for Nietzsche II Game ended Tuesday, August 04, 2015 at 8:25:57 PM Nietzsche II died on the 9th of Simmun Ut. The bloody jilted lover hits (x2) for 3 damage with his thorns ->5 1d4! [19] Scored -1246 points Survived for 136 turns. Visited 3 zones. Game summary for Kant III Game ended Friday, August 21, 2015 at 11:25:01 PM Kant III died on the 13th of Kisu Ux. from bleeding! Scored -1252 points Survived for 105 turns. Visited 3 zones. Game summary for Goethe Game ended Sunday, August 09, 2015 at 7:43:13 PM Goethe died on the 22nd of Tuum Ut. The bloody boar hits (x2) for 6 damage with his bite ->7 1d3! [12] Scored -1253 points Survived for 121 turns. Visited 3 zones. Game summary for Malenkov Game ended Sunday, August 02, 2015 at 1:34:01 PM Malenkov died on the 9th of Tishru ii Ux. from traipsing mortar's explosion! Scored -1318 points Survived for 130 turns. Visited 5 zones. Game summary for Khrushchev III Game ended Sunday, August 02, 2015 at 4:19:46 PM Khrushchev III died on the 1st of Nivvun Ut. from the scalding steam! Scored -1351 points Survived for 324 turns. Visited 4 zones. Game summary for Game ended Friday, August 21, 2015 at 11:25:56 PM died on the 8th of Nivvun Ut. from Warden Ualraig's Freezes! Scored -1451 points Survived for 19 turns. Visited 1 zone. Game summary for Napoleon Game ended Monday, August 03, 2015 at 3:23:01 PM Napoleon died on the 18th of Tuum Ut. Abandoned all hope. Scored -1588 points Survived for 95 turns. Visited 1 zone. ###Markdown Wow, that was tough and we're still not near Golgotha. This human readable file created above will now be used to create a pandas readable file. 
This could have all been done in one step but is done in two for my sanity, which I was in danger of losing during the above process and also in the event a user would rather change the below step to clean the file in a different way.Now we need to delete a lot of the filler text ("Game summary for ", "x died on the " etc) so that we are just left with catagorial (character name, artifact name) or integer values (score, zones).There is also the issue of uneven or unequal highscore descriptions. Some of them contain data that the others do not. If I found a "storied item" (I remember finding a shield called "Stopslavin") then a row "Generated 1 storied items." will be added. However, if I do not find a storied item then this line will not be there. Same with artifacts. So it is possible that some scores will have (at least) two lines more than other scores and a number of flags are used to check this.If going through this code using your own highscores data you will more than likely have to make adjustments/additions to the lines determining how the character died. ###Code import re #left behind as I often started the the notebook from this point, content with the cleaning in the above step from earlier cleaned_qud = open("Cleaned_Qud_HighScores_1.txt", "w") clean_highscores = open("Cleaned_Qud_HighScores.txt", "r") first_line = True name = " " #flags for checking if storied items or artifacts are present in the highscore visited = False generated = False artifact = False for line in clean_highscores.readlines(): line = line.replace("`", "'").replace("\n", " ").replace(".", "").strip() if "summary" in line: #If this is the first line of a highscore visited = False generated = False #set all flags to false artifact = False line = line.replace("Game summary for ", "") #remove everything but the characters name name = line #save the characters name to be used in a later deletion ("name died on ") line = line.strip() #strip blank space. This is from an attempt to parse a line where the character name was " " if first_line == True: first_line = False else: line = "\n"+str(line) #If this is the first line saved to the file add as is, other wise add a \n to the start if "Game ended" in line: line = line.replace("Game ended", "").replace("at", "").strip() #Remove "Game ended", leaving behind only the date if "died on" in line: line = line.replace("%s died on the" % name, "").strip() #remove "name died on the " leaving behind only the Game date #Code to figure out what caused the players death if " hits (" in line: #The chute crab hits (x1) for 2 damage with his crab claw ->7 1d2! [7] if "->" in line: death_search = re.search("((\w*\,?\-?\s*)+) hits \(x(\d*)\) for (\d*) damage with \w{3} ((\w*\,?\-?\s*)+) ->(\d+) (\d*d\d*)!?", line.replace("The", "").replace("bloody", "").strip()) #name, times hit, damage, weapon, PV, pos damage line = str(death_search.group(1)) + "\t" + str(death_search.group(3)) + "\t" + str(death_search.group(4)) + "\t" + str(death_search.group(5)) + "\t" + str(death_search.group(7)) + "\t" + str(death_search.group(8)) else: #Umchuum hits (x2) for 4 damage with his Umumerchacal! 
[9] death_search = re.search("((\w*\,?\-?\s*)+) hits \(x(\d*)\) for (\d*) damage with \w{3} ((\w*\,?\-?\s*)+)!?", line.replace("The", "").replace("bloody", "").strip()) #name, times hit, damage, weapon, PV, pos damage line = str(death_search.group(1)) + "\t" + str(death_search.group(3)) + "\t" + str(death_search.group(4)) + "\t" + str(death_search.group(5)) + "\t0" + "\t0" if "blank" in line: line = "unknown\t0\t0\tunknown\t0\t0" #lines that contain 'from' are generally short descriptions. A more effective regex could be written at a later time. if "from" in line: if line.strip() == "from bleeding!": line = "bleeding\t0\t0\tbleeding\t0\t0" elif line.strip() == "from the scalding steam!": line = "scalding steam\t0\t0\tscalding steam\t0\t0" elif line.strip() == "from the explosion!": line = "explosion\t0\t0\texplosion\t0\t0" elif "from the fire started by" in line: foe = line.replace("from the fire started by ", "").strip("!").strip() line = "%s\t0\t0\tfire\t0\t0" % foe elif "'s" in line: #from Wahmahcalcalit's lase beam! death_search = re.search("from ((\w*\,?\-?\s*)*(\w*\,?\-?(\'s){1}\s*)) ((\w*\,?\-?(\'s){0}\s*)+)", line) line = "%s\t0\t0\t%s\t0\t0\t" % (str(death_search.group(1).strip("'s'")), str(death_search.group(5))) if line.strip() == "Abandoned all hope": line = "quit\t0\t0\tquit\t0\t0" if "Scored" in line: line = line.replace("Scored", "").replace("points", "").strip() #remove all bar the points figure if "Survived " in line: line = line.replace("Survived for", "").replace("turns", "").strip() #remove all bar the turns figure if "Visited " in line: visited = True #set visited flag to true line = line.replace("Visited", "").replace("zones", "").replace("zone", "").strip() #remove all bar the zones figure if "Generated" in line: generated = True #set generated flag to true line = line.replace("Generated", "").replace("storied items", "").strip() #remove all bar the storied items figure if "Most advanced artifact" in line: artifact = True #set artifact flag to true gen_check = "" #create a string for checking if a storied items figure exists if generated == False: gen_check = "0\t" generated = True line = gen_check + str(line.replace("Most advanced artifact in possession:", "").strip()) #if there is an artifact but no storied item this will read "0\t" + artifactname. 
If there is a storied item this will be "" + artifactname if len(line) == 0 and visited == True: #if we are on a blank line and we have passed the visited line this will add "0 no artifact" to the end of the line if generated == False: line = str(line) + "0\t" if artifact == False: line = str(line) + "no artifact\t" print line cleaned_qud.write(line + "\t") cleaned_qud.write("0\tno artifact") #insert into final row cleaned_qud.close() ###Output Goethe II Thursday, August 13, 2015 6:04:58 PM 20th of Uru Ux Wahmahcalcalit 0 0 lase beam 0 0 48753 35235 260 1 HE Missile Kant XVIII Sunday, August 30, 2015 7:34:00 PM 27th of Tuum Ut chute crab 1 2 crab claw 7 1d2 40178 37145 222 1 Fix-It spray foam x2 O'Brien III Wednesday, September 02, 2015 3:50:10 AM 6th of Tebet Ux Kumukokumu the Stylish, legendary ogre ape 8 51 ape fist 20 3d3 20556 21114 130 0 force bracelet 0 0 <> [no cell] Kant XII Friday, August 28, 2015 11:14:48 PM 7th of Iyur Ut Putus Templar warden 1 3 folded carbide long sword 9 2d5 17061 17066 118 0 electrobow <> ->10 1d6 [no cell] Nietzsche III Wednesday, August 05, 2015 8:00:46 PM 19th of Tishru ii Ux eyeless king crab 6 20 massive king crab claw 20 1d6 16607 16124 115 0 ubernostrum injector <> Kant XI Thursday, August 27, 2015 11:14:09 PM 12th of Uru Ux scalding steam 0 0 scalding steam 0 0 9971 11105 60 0 Fix-It spray foam Kant VIII Thursday, August 27, 2015 8:01:17 PM 23rd of Tishru i Ux scalding steam 0 0 scalding steam 0 0 7848 7719 35 0 Fix-It spray foam Kant V Surday, August 22, 2015 6:22:36 PM 28th of Tishru i Ux unknown 0 0 unknown 0 0 6662 8118 47 0 semi-automatic pistol x2 <> ->8 1d6 [Empty] O'Brien II Wednesday, September 02, 2015 12:53:32 AM 23rd of Nivvun Ut Duhmahcaluhcal 0 0 lase beam 0 0 5471 6356 42 0 stun gas grenade mk I Kant XVII Surday, August 29, 2015 1:44:21 AM 27th of Tishru ii Ux unknown 0 0 unknown 0 0 3169 7239 37 0 compass bracelet 0 0 Kant Friday, August 21, 2015 11:08:22 PM 12th of Tuum Ut giant amoeba 2 4 giant pseudopod 10 1d3 2372 3013 21 0 no artifact Kant XIII Surday, August 29, 2015 12:05:24 AM 5th of Tebet Ux girshling 1 6 claw 2 1d6 1849 2853 18 0 pump shotgun ->8 1d2 [shotgun shell] Khrushchev VIII Sunday, August 02, 2015 6:19:47 PM 5th of Tuum Ut bleeding 0 0 bleeding 0 0 1760 4198 25 0 no artifact Nietzsche Tuesday, August 04, 2015 8:18:00 PM 17th of Tuum Ut horned chameleon 1 4 Tusks 4 2d3 1503 3719 27 0 no artifact Kant IV Surday, August 22, 2015 12:00:38 AM 28th of Iyur Ut snapjaw scavenger 1 6 steel battle axe 3 1d6 1393 3347 16 0 acid gas grenade mk I Khrushchev XI Monday, August 03, 2015 3:18:32 PM Khrushchev XI died on the 20th of Kisu Ux unknown 0 0 unknown 0 0 1136 4121 21 0 blaze injector Lenin Surday, August 01, 2015 4:08:30 PM 13th of Iyur Ut equimax 3 9 bite 8 2d2 902 1019 12 0 no artifact Khrushchev Sunday, August 02, 2015 3:52:52 PM 10th of Tishru ii Ux Groubuubu-wof-wofuz, the stalwart Snapjaw Tot-eater 2 11 carbide battle axe 5 2d3 708 2789 21 0 no artifact Kant X Thursday, August 27, 2015 9:17:34 PM 8th of Kisu Ux cave spider 1 2 fangs 2 1d2 265 2287 16 0 no artifact O'Brien IV Wednesday, September 02, 2015 12:47:21 PM Ides of Uru Ux salthopper 3 7 rending mandibles 11 1d4 199 1810 14 0 semi-automatic pistol ->8 1d6 [Empty] Khrushchev II Sunday, August 02, 2015 4:13:32 PM 17th of Uru Ux bleeding 0 0 bleeding 0 0 13 1640 9 0 no artifact Stalin Surday, August 01, 2015 3:28:05 PM 27th of Uru Ux salthopper 3 9 rending mandibles 11 1d4 -71 3412 30 0 no artifact Kant IX Thursday, August 27, 2015 8:51:15 PM 2nd of Iyur Ut 
jilted lover 1 2 thorns 5 1d4 -80 1807 13 0 acid gas grenade mk I x2 Khrushchev VI Sunday, August 02, 2015 5:12:49 PM 11th of Tuum Ut fire ant 0 0 flames 0 0 -126 1705 21 0 no artifact Stalin Surday, August 01, 2015 3:41:26 PM 1st of Tishru i Ux explosion 0 0 explosion 0 0 -224 1061 8 0 no artifact Kant VII Thursday, August 27, 2015 4:50:50 PM 22nd of Uulu Ut unknown 0 0 unknown 0 0 -243 770 8 0 no artifact Kant XIV Surday, August 29, 2015 12:23:09 AM 29th of Tuum Ut bleeding 0 0 bleeding 0 0 -367 2230 14 0 no artifact Kant XV Surday, August 29, 2015 12:42:44 AM 24th of Nivvun Ut unknown 0 0 unknown 0 0 -434 1332 10 0 no artifact Khrushchev Sunday, August 02, 2015 2:13:37 PM 13th of Uru Ux bleeding 0 0 bleeding 0 0 -531 1863 17 0 no artifact Napolen III Monday, August 03, 2015 4:20:16 PM 20th of Shwut Ux dawnglider 0 0 fire 0 0 -543 1773 16 0 no artifact Napoleon Monday, August 03, 2015 3:41:37 PM 19th of Uulu Ut bleeding 0 0 bleeding 0 0 -547 1536 11 0 no artifact Napoleon II Monday, August 03, 2015 3:57:39 PM 22nd of Tishru i Ux bleeding 0 0 bleeding 0 0 -590 1298 11 0 no artifact O'Brien V Wednesday, September 02, 2015 12:58:51 PM 17th of Nivvun Ut unknown 0 0 unknown 0 0 -618 1538 14 0 no artifact Kant VI Thursday, August 27, 2015 4:36:27 PM 14th of Shwut Ux young ivory 0 0 impalement 0 0 -911 613 7 0 no artifact Khrushchev V Sunday, August 02, 2015 4:45:07 PM 19th of Uulu Ut salthopper 4 10 rending mandibles 10 1d4 -918 510 6 0 no artifact Khrushchev IV Sunday, August 02, 2015 4:35:26 PM 21st of Nivvun Ut snapjaw scavenger 1 1 iron dagger 2 1d4 -936 1134 12 0 no artifact Kant II Friday, August 21, 2015 11:23:56 PM 12th of Shwut Ux scrap shoveler 5 8 scrap shovel 15 1d2 -986 720 6 0 no artifact Malenkov Surday, August 01, 2015 4:35:17 PM 14th of Tuum Ut napjaw scavenger 0 0 explosion 0 0 -1004 911 12 0 no artifact Kant XVI Surday, August 29, 2015 12:46:03 AM 3rd of Uulu Ut equimax 2 5 bite 9 2d2 -1120 401 5 0 no artifact Khrushchev VII Sunday, August 02, 2015 5:17:56 PM 27th of Uru Ux salamander 1 3 bite 3 1d3 -1127 403 5 0 no artifact Stalin Surday, August 01, 2015 2:04:38 PM 20th of Nivvun Ut Umchuum 2 4 Umumerchacal 0 0 -1131 362 3 0 poison gas grenade mk I Khrushchev IX Monday, August 03, 2015 1:56:40 PM 18th of Tishru i Ux bleeding 0 0 bleeding 0 0 -1143 336 4 0 no artifact Malenkov Surday, August 01, 2015 4:18:54 PM 13th of Tishru i Ux Ruf-ohoubub, the stalwart Snapjaw Bear-baiter 2 10 bronze two-handed sword 4 1d8 -1155 687 8 0 no artifact Napoleon IV Monday, August 03, 2015 4:28:33 PM 25th of Uulu Ut salthopper 2 5 rending mandibles 10 1d4 -1171 275 4 0 no artifact O'Brien Wednesday, September 02, 2015 12:03:29 AM 9th of Nivvun Ut unknown 0 0 unknown 0 0 -1174 287 5 0 no artifact Napoleon V Monday, August 03, 2015 4:31:59 PM 22nd of Ubu Ut bleeding 0 0 bleeding 0 0 -1175 320 3 0 no artifact Khrushchev X Monday, August 03, 2015 1:59:50 PM 9th of Tishru ii Ux unknown 0 0 unknown 0 0 -1209 287 4 0 no artifact Kant II Friday, August 21, 2015 11:14:45 PM 28th of Ubu Ut snapjaw hunter 3 16 bronze two-handed sword 4 1d8 -1214 145 4 0 no artifact Nietzsche II Tuesday, August 04, 2015 8:25:57 PM 9th of Simmun Ut jilted lover 2 3 thorns 5 1d4 -1246 136 3 0 no artifact Kant III Friday, August 21, 2015 11:25:01 PM 13th of Kisu Ux bleeding 0 0 bleeding 0 0 -1252 105 3 0 no artifact Goethe Sunday, August 09, 2015 7:43:13 PM Goethe died on the 22nd of Tuum Ut boar 2 6 bite 7 1d3 -1253 121 3 0 no artifact Malenkov Sunday, August 02, 2015 1:34:01 PM 9th of Tishru ii Ux traipsing mortar 0 0 
explosion 0 0 -1318 130 5 0 no artifact Khrushchev III Sunday, August 02, 2015 4:19:46 PM 1st of Nivvun Ut scalding steam 0 0 scalding steam 0 0 -1351 324 4 0 no artifact Game summary for Friday, August 21, 2015 11:25:56 PM died on the 8th of Nivvun Ut Warden Ualraig 0 0 Freezes 0 0 -1451 19 1 0 no artifact Napoleon Monday, August 03, 2015 3:23:01 PM 18th of Tuum Ut quit 0 0 quit 0 0 -1588 95 1
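###Markdown As a small follow-up sketch (not part of the original run): the tab-separated file written above is what the next, pandas-based step would consume. It could be loaded roughly as below; the column names are assumptions inferred from the fields the cleaning code writes (name, dates, killer/method details, score, turns, zones, storied items, artifact), so a few rows may still need manual touch-ups. ###Code
import pandas as pd

# Assumed column names -- inferred from the fields written by the cleaning step above
qud_columns = ["name", "real_date", "game_date", "killer", "hits", "damage",
               "weapon", "pv", "damage_roll", "score", "turns", "zones",
               "storied_items", "artifact"]

# Read the cleaned, tab-separated highscores into a DataFrame
qud_df = pd.read_csv("Cleaned_Qud_HighScores_1.txt", sep="\t",
                     header=None, names=qud_columns)
qud_df.head()
###Output _____no_output_____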
event_scraper/.ipynb_checkpoints/build_datasets-checkpoint.ipynb
###Markdown The goal here is to scrape and prepare completed events. This is going to be broken down into a few steps* Scrape Event (Pay attention to Red vs Blue)* Scrape Odds ###Code import pandas as pd from urllib.request import urlopen from bs4 import BeautifulSoup from datetime import datetime from dateutil.parser import parse #We are going to automate the data gathering for upcoming UFC events. #First let's create an empty DataFrame. #We are going to get the column list from the master-dataframe. This #will guarantee that they are the same. temp_df = pd.read_csv("../data/kaggle_data/ufc-master.csv") column_list = temp_df.columns #Now let's build an empty dataframe with the columns df = pd.DataFrame(columns=column_list) #OK. Now we have a receptacle for the data. That's the hard part #right? #There is an event page and individual fighter pages. #First let's grab all the information we can off of the event page #then we can move to the individual fighter pages. #We are probably going to have to turn this into a loop... is_upcoming = True is_most_recent = False f = 'http://ufcstats.com/event-details/40389d39a92f5bfa' upcoming_odds_page = "https://www.bestfightodds.com/events/ufc-264-poirier-vs-mcgregor-3-2127" html=urlopen(f) bs=BeautifulSoup(html, 'html.parser') ###HERE WE ARE GOING TO GET A LIST did_red_lose_list = [] winner_list = [] fight_links = bs.find_all('a', {'class':'b-flag'}) prev_link = '' if is_upcoming: fight_links = bs.find_all('tr') fight_count = len(fight_links) - 1 for n in range(fight_count): did_red_lose_list.append(True) winner_list.append('Blue') else: for n in range(len(fight_links)): link = (fight_links[n].attrs['href']) if prev_link == link: #This happens in draws pass else: #Go to the page and figure out who won. temp_html = urlopen(link) bs_temp = BeautifulSoup(temp_html, 'html.parser') fight_result = bs_temp.find('i', {'class':'b-fight-details__person-status'}).get_text().strip() if fight_result == 'L': did_red_lose_list.append(True) winner_list.append('Blue') elif fight_result == 'D': did_red_lose_list.append(False) winner_list.append('Draw') elif fight_result == 'NC': did_red_lose_list.append(False) winner_list.append('No Contest') else: did_red_lose_list.append(False) winner_list.append('Red') #print(fight_result.get_text()) prev_link = link print(did_red_lose_list) #This is dumb, but it's this or refactor some code #A list of if red lost. If red lost the red and blue fighters are going to be backwards #Heavy lifting? fights = bs.find_all('td', {'class':'b-fight-details__table-col l-page_align_left'}) #print (len(fights)) f_count = 0 fighters_raw = [] weight_classes_raw = [] #Each fight is split into 3 cells #The first has information about the fighters. 
Their names and links #The 2nd has the weight class of the fight #The 3rd is junk for f in fights: if f_count%3 == 0 : fighters_raw.append(f) if f_count%3 == 1: weight_classes_raw.append(f) f_count=f_count+1 #These lists will contain the fighter and a link red_fighter_list = [] blue_fighter_list = [] weight_class_list = [] if is_most_recent: for f in fighters_raw: temp_fighters = f.find_all('p') temp_links = f.find_all('a') #print("Red fighter: ", temp_fighters[0].get_text().strip()) #print("Red Link: ", temp_links[0].attrs['href']) #print("Blue Fighter:", temp_fighters[1].get_text().strip()) #print("Blue Link: ", temp_links[1].attrs['href']) red_fighter_list.append([temp_fighters[0].get_text().strip(), temp_links[0].attrs['href']]) blue_fighter_list.append([temp_fighters[1].get_text().strip(), temp_links[1].attrs['href']]) else: for f in fighters_raw: temp_fighters = f.find_all('p') temp_links = f.find_all('a') #print("Red fighter: ", temp_fighters[0].get_text().strip()) #print("Red Link: ", temp_links[0].attrs['href']) #print("Blue Fighter:", temp_fighters[1].get_text().strip()) #print("Blue Link: ", temp_links[1].attrs['href']) blue_fighter_list.append([temp_fighters[0].get_text().strip(), temp_links[0].attrs['href']]) red_fighter_list.append([temp_fighters[1].get_text().strip(), temp_links[1].attrs['href']]) for w in weight_classes_raw: temp_wc = w.find_all('p') weight_class_list.append(temp_wc[0].get_text().strip()) #print(weight_class_list) #print(bs) #print(red_fighter_list[0][1]) #print(blue_fighter_list) ################################################################### #Insert R_fighter and B_fighter #Let's start entering data into the dataframe! for i in range(len(red_fighter_list)): if did_red_lose_list[i]: df_temp = pd.DataFrame({'R_fighter': blue_fighter_list[i][0], 'B_fighter': red_fighter_list[i][0]}, index=[i]) else: df_temp = pd.DataFrame({'R_fighter': red_fighter_list[i][0], 'B_fighter': blue_fighter_list[i][0]}, index=[i]) #print(df_temp) #print(df_temp) df = pd.concat([df, df_temp]) #display(df) ################################################################## #Let's get the date, and location date_raw = bs.find_all('li', {'class':'b-list__box-list-item'}) child_count=0 for dr in date_raw: temp_count=0 for child in dr.children: #print(child.string) #print(temp_count, child_count) if ((temp_count == 2) & (child_count == 0)): raw_date = (child.string.strip()) if ((temp_count == 2) & (child_count == 1)): location = (child.string.strip()) temp_count = temp_count+1 child_count = child_count+1 formatted_date = datetime.strptime(raw_date, "%B %d, %Y") date_datetime = formatted_date #The pound sign removes the leading 0. 
formatted_date=(formatted_date.strftime("%#m/%e/%Y")) df['date'] = formatted_date df['location'] = location ################################################################# #Let's get the country split_location = location.split(',') country = split_location[len(split_location)-1] #print(country.strip()) country=country.strip() df['country'] = country ################################################################## #We can use the ################################################################# #Set weight class #weight_classes = bs.find_all('p', {'class':'b-fight-details__table-text'}) #weight_class_list = [] #temp_count=0 #for wc in weight_classes: # #print(temp_count) # if((temp_count+5)%10==0): # weight_class_list.append(wc.get_text().strip()) # temp_count += 1 # #print(weight_class_list) df['weight_class']=weight_class_list ################################################################## #Set title_bout #THIS NEEDS TO BE UPDATED WHEN WE HAVE AN ACTUAL TITLE FIGHT #IT HAS TO DO WITH AN IMAGE NEXT TO THE WEIGHT CLASS. SO WE CAN #TIE THIS INTO HOW WE DETERMINE THE WEIGHT CLASS number_of_fights = len(weight_class_list) title_fight_list = [] title_fight_raw = bs.find_all('tr', {'class':'b-fight-details__table-row'}) skip_row = True for f in title_fight_raw: if skip_row: skip_row = False else: #print(f) f = str(f) #print(f) if f.find('belt.png') > -1: title_fight_list.append(True) else: title_fight_list.append(False) df['title_bout'] = title_fight_list ################################################################## #Set Gender... We can use the weight_class_list for this #How this works is we look at the weight class name. If the first #word is "Women's" we are dealing with a FEMALE fight. Otherwise #MALE gender_list = [] for wc in weight_class_list: if wc.split(' ')[0] == "Women's": gender_list.append('FEMALE') else: gender_list.append('MALE') df['gender'] = gender_list ################################################################## #Determine the number of rounds. First check for title fight. #All title fights are 5 rounds. The main event is also 5 rounds. round_list = [] for z in range(number_of_fights): if(title_fight_list[z]==True): round_list.append(5) else: round_list.append(3) round_list[0] = 5 #print(round_list) df['no_of_rounds'] = round_list ################################################################# ################################################################# #Let's get the finish and finish details if is_most_recent: finish_list = [] finish_details_list = [] temp_list = bs.find_all('td', {'class':'b-fight-details__table-col l-page_align_left'}) print(len(temp_list)) count = 0 for t in temp_list: if (count+1) % 3 == 0: #There are 2 paragraphs here. One with the finish. The other with the temp_finish_list = t.find_all('p') #print(count) #print(t) finish_list.append(temp_finish_list[0].get_text().strip()) finish_details_list.append(temp_finish_list[1].get_text().strip()) count = count+1 finish_round_list = [] time_list = [] temp_list = bs.find_all('td', {'class':'b-fight-details__table-col'}) print(len(temp_list)) count = 0 for t in temp_list: #print(f"COUNT: {count}") #print(t) if (count) % 10 == 8: #There are 2 paragraphs here. One with the finish. 
The other with the print(f"ROUND: {t.get_text().strip()}") #print(count) #print(t) finish_round_list.append(t.get_text().strip()) #finish_details_list.append(temp_finish_list[1].get_text().strip()) elif (count) % 10 == 9: time_list.append(t.get_text().strip()) count = count+1 ################################################################# ################################################################# ################################################################# #Now we need access to the fighter pages! #First let's save them all so we don't have to constantly access them #REVERT BEFORE GOING LIVE red_count = 0 for f in red_fighter_list: #print(f[1][7:]) html= urlopen(f[1]) bs = BeautifulSoup(html.read(), 'html.parser') with open(f'fighter_pages/r{red_count}.html', "w", encoding='utf-8') as file: file.write(str(bs)) red_count+=1 blue_count = 0 for f in blue_fighter_list: #print(f[1][7:]) html= urlopen(f[1]) bs = BeautifulSoup(html.read(), 'html.parser') with open(f'fighter_pages/b{blue_count}.html', "w", encoding='utf-8') as file: file.write(str(bs)) blue_count+=1 #Find the current lose and win streaks blue_fighter_win_streak = [] blue_fighter_lose_streak = [] red_fighter_win_streak = [] red_fighter_lose_streak = [] blue_draw_list = [] red_draw_list = [] blue_strike_list = [] red_strike_list = [] blue_strike_acc_list = [] red_strike_acc_list = [] sub_list = [] td_list = [] red_sub_list = [] red_td_list = [] td_acc_list = [] red_td_acc_list = [] red_fighter_longest_win_streak = [] blue_fighter_longest_win_streak = [] blue_total_losses = [] red_total_losses = [] blue_total_rounds = [] red_total_rounds = [] blue_title_bouts = [] red_title_bouts = [] blue_total_maj_dec = [] red_total_maj_dec = [] blue_total_split_dec = [] red_total_split_dec = [] blue_total_un_dec = [] red_total_un_dec = [] blue_total_ko = [] red_total_ko = [] blue_total_sub = [] red_total_sub = [] blue_total_wins = [] red_total_wins = [] stance_list = [] height_list = [] reach_list = [] weight_list = [] red_stance_list = [] red_height_list = [] red_reach_list = [] red_weight_list = [] blue_age_list = [] red_age_list = [] z = 0 for z in range(number_of_fights): #print("new fight") #print(did_red_lose_list[z]) #If red lost these are flipped if(did_red_lose_list[z]): b_fighter_file=open(f'fighter_pages/r{z}.html', "r") else: b_fighter_file=open(f'fighter_pages/b{z}.html', "r") blue_soup=BeautifulSoup(b_fighter_file.read(), 'html.parser') ###We need to deal with removing historic fights ###Maybe just make a date list???? blue_results_raw = blue_soup.find_all('i',{'class':'b-flag__text'}) blue_rounds_raw = blue_soup.find_all('p', {'class':'b-fight-details__table-text'}) #print(blue_rounds_raw) ################################################################ #Blue Total rounds fought #Round totals are on 21, 38, 55, 72... etc... #So that is (count - 4) % 17 = 0 #We need to redo this whole thing. 
blue_rounds_raw = blue_soup.find_all('tr', {'class':'b-fight-details__table-row'}) #print(f"Fight rows: {len(blue_fight_dates_raw)}") blue_round_count = 0 for row_temp in blue_rounds_raw: pos_dates = row_temp.find_all('p', {'class': 'b-fight-details__table-text'}) if len(pos_dates) > 16: pos_date = (pos_dates[12].get_text().strip()) event_date_parsed = parse(formatted_date) fight_date_parsed = parse(pos_date) if fight_date_parsed < event_date_parsed: blue_round_count = blue_round_count + int(pos_dates[15].get_text().strip()) blue_total_rounds.append(blue_round_count) ################################################################ #Test to find fight date dates_list = [] dates_list_red = [] blue_fight_dates_raw = blue_soup.find_all('tr', {'class':'b-fight-details__table-row'}) #print(f"Fight rows: {len(blue_fight_dates_raw)}") for row_temp in blue_fight_dates_raw: pos_dates = row_temp.find_all('p', {'class': 'b-fight-details__table-text'}) if len(pos_dates) > 16: dates_list.append(pos_dates[12].get_text().strip()) ############################################################### #Blue total title bouts. We are looking for 'belt.png' title_bout_count = 0 #print(blue_soup) title_bout_count = str(blue_soup).count('belt.png') #print(title_bout_count) #If the upcoming fight is a title bout we need to subtract 1 if(df.iloc[z]['title_bout']): title_bout_count -= 1 blue_title_bouts.append(title_bout_count) ############################################################### ################################################################ #Determine the type of win for BLUE temp_count = 0 for b in blue_rounds_raw: #print(temp_count) #print(b.get_text()) temp_count+=1 #OK so it lists win or loss at 6, 23, 40...etc.... #it lists type of win at 19, 36, 53, ....etc... temp_count=0 dec_maj_count = 0 dec_split_count = 0 dec_un_count = 0 ko_count = 0 sub_count = 0 win_flag = False #Set to true when we have a win for row_temp in blue_rounds_raw: cols_method = row_temp.find_all('p', {'class': 'b-fight-details__table-text'}) if len(cols_method) > 16: pos_date = (cols_method[12].get_text().strip()) event_date_parsed = parse(formatted_date) fight_date_parsed = parse(pos_date) if fight_date_parsed < event_date_parsed: b = (cols_method[13]) pos_flag = (cols_method[0].get_text().strip()) if(pos_flag) == 'win': win_flag = True else: win_flag = False #Now we are going to look at the win_flag. 
If it's #true we can tally the method if (win_flag == True): if(b.get_text().strip())=='M-DEC': dec_maj_count += 1 elif(b.get_text().strip())=='S-DEC': dec_split_count +=1 elif(b.get_text().strip())=='U-DEC': dec_un_count += 1 elif(b.get_text().strip())=='KO/TKO': ko_count += 1 elif(b.get_text().strip())=='SUB': sub_count += 1 temp_count+=1 blue_total_maj_dec.append(dec_maj_count) blue_total_split_dec.append(dec_split_count) blue_total_un_dec.append(dec_un_count) blue_total_ko.append(ko_count) blue_total_sub.append(sub_count) #if (temp_count - 4) % 17 == 0: # #print(b.get_text().strip()) # round_raw = b.get_text() # round_stripped = round_raw.strip() # round_count+=int(round_stripped) # #print(round_count) # temp_count+=1 #blue_total_rounds.append(round_count) ################################################################ win_streak = 0 lose_streak =0 draw_count=0 end_streak = False #Set to true when the streak is over #print(dates_list) longest_win_streak = 0 temp_win_streak = 0 total_losses=0 total_wins=0 for r in blue_results_raw: r=r.get_text() if r != 'next': d = dates_list.pop(0) event_date_parsed = parse(formatted_date) fight_date_parsed = parse(d) if fight_date_parsed < event_date_parsed: #print(f"{fight_date_parsed} is earlier than {event_date_parsed}") #print(r) if r=='draw': draw_count+=1 if end_streak == False: if r=='next': #Usually the first line. Just skip pass elif r=='win': if (win_streak>0): win_streak+=1 elif(win_streak==0 and lose_streak==0): win_streak+=1 else: end_streak = True elif r=='loss': if (lose_streak>0): lose_streak+=1 elif(win_streak==0 and lose_streak==0): lose_streak+=1 else: end_streak=True b = r if b=='draw': if temp_win_streak > longest_win_streak: longest_win_streak = temp_win_streak temp_win_streak = 0 if b=='win': temp_win_streak += 1 total_wins+=1 elif b=='loss': temp_win_streak = 0 total_losses+=1 if temp_win_streak > longest_win_streak: longest_win_streak = temp_win_streak #print(r) #print(f"Win Streak: {win_streak}. Lose streak: {lose_streak}") blue_fighter_win_streak.append(win_streak) blue_fighter_lose_streak.append(lose_streak) blue_draw_list.append(draw_count) blue_fighter_longest_win_streak.append(longest_win_streak) blue_total_losses.append(total_losses) blue_total_wins.append(total_wins) if did_red_lose_list[z]: r_fighter_file=open(f'fighter_pages/b{z}.html', "r") else: r_fighter_file=open(f'fighter_pages/r{z}.html', "r") red_soup=BeautifulSoup(r_fighter_file.read(), 'html.parser') red_results_raw = red_soup.find_all('i',{'class':'b-flag__text'}) red_rounds_raw = red_soup.find_all('p', {'class':'b-fight-details__table-text'}) ################################################################ #Red Total rounds fought #Round totals are on 21, 38, 55, 72... etc... 
#So that is (count - 4) % 17 = 0 red_rounds_raw = red_soup.find_all('tr', {'class':'b-fight-details__table-row'}) #print(f"Fight rows: {len(blue_fight_dates_raw)}") red_round_count = 0 for row_temp in red_rounds_raw: pos_dates = row_temp.find_all('p', {'class': 'b-fight-details__table-text'}) if len(pos_dates) > 16: pos_date = (pos_dates[12].get_text().strip()) event_date_parsed = parse(formatted_date) fight_date_parsed = parse(pos_date) if fight_date_parsed < event_date_parsed: red_round_count = red_round_count + int(pos_dates[15].get_text().strip()) red_total_rounds.append(red_round_count) ################################################################ red_fight_dates_raw = red_soup.find_all('tr', {'class':'b-fight-details__table-row'}) #print(f"Fight rows: {len(red_fight_dates_raw)}") for row_temp in red_fight_dates_raw: pos_dates = row_temp.find_all('p', {'class': 'b-fight-details__table-text'}) if len(pos_dates) > 16: dates_list_red.append(pos_dates[12].get_text().strip()) ############################################################### #Red total title bouts. We are looking for 'belt.png' title_bout_count = 0 #print(blue_soup) title_bout_count = str(red_soup).count('belt.png') #print(title_bout_count) #If the upcoming fight is a title bout we need to subtract 1 if(df.iloc[z]['title_bout']): title_bout_count -= 1 red_title_bouts.append(title_bout_count) ############################################################### ################################################################ #Determine the type of win for BLUE temp_count = 0 #OK so it lists win or loss at 6, 23, 40...etc.... #it lists type of win at 19, 36, 53, ....etc... temp_count=0 dec_maj_count = 0 dec_split_count = 0 dec_un_count = 0 ko_count = 0 sub_count = 0 win_flag = False #Set to true when we have a win for row_temp in red_rounds_raw: cols_method = row_temp.find_all('p', {'class': 'b-fight-details__table-text'}) if len(cols_method) > 16: pos_date = (cols_method[12].get_text().strip()) event_date_parsed = parse(formatted_date) fight_date_parsed = parse(pos_date) if fight_date_parsed < event_date_parsed: b = (cols_method[13]) pos_flag = (cols_method[0].get_text().strip()) if(pos_flag) == 'win': win_flag = True else: win_flag = False #Now we are going to look at the win_flag. 
If it's #true we can tally the method if (win_flag == True): if(b.get_text().strip())=='M-DEC': dec_maj_count += 1 elif(b.get_text().strip())=='S-DEC': dec_split_count +=1 elif(b.get_text().strip())=='U-DEC': dec_un_count += 1 elif(b.get_text().strip())=='KO/TKO': ko_count += 1 elif(b.get_text().strip())=='SUB': sub_count += 1 temp_count+=1 red_total_maj_dec.append(dec_maj_count) red_total_split_dec.append(dec_split_count) red_total_un_dec.append(dec_un_count) red_total_ko.append(ko_count) red_total_sub.append(sub_count) #if (temp_count - 4) % 17 == 0: # #print(b.get_text().strip()) # round_raw = b.get_text() # round_stripped = round_raw.strip() # round_count+=int(round_stripped) # #print(round_count) # temp_count+=1 #blue_total_rounds.append(round_count) ################################################################ win_streak = 0 lose_streak =0 draw_count=0 longest_win_streak = 0 temp_win_streak = 0 total_losses = 0 total_wins = 0 end_streak = False #Set to true when the streak is over for r in red_results_raw: r=r.get_text() if r != 'next': d = dates_list_red.pop(0) event_date_parsed = parse(formatted_date) fight_date_parsed = parse(d) if fight_date_parsed < event_date_parsed: #print(f"{fight_date_parsed} is earlier than {event_date_parsed}") #print(r) if r=='draw': draw_count+=1 if end_streak == False: if r=='next': #Usually the first line. Just skip pass elif r=='win': if (win_streak>0): win_streak+=1 elif(win_streak==0 and lose_streak==0): win_streak+=1 else: end_streak = True elif r=='loss': if (lose_streak>0): lose_streak+=1 elif(win_streak==0 and lose_streak==0): lose_streak+=1 else: end_streak=True b = r if b=='draw': if temp_win_streak > longest_win_streak: longest_win_streak = temp_win_streak temp_win_streak = 0 if b=='win': temp_win_streak += 1 total_wins+=1 elif b=='loss': temp_win_streak = 0 total_losses+=1 if temp_win_streak > longest_win_streak: longest_win_streak = temp_win_streak #print(r) #print(f"Win Streak: {win_streak}. Lose streak: {lose_streak}") red_fighter_win_streak.append(win_streak) red_fighter_lose_streak.append(lose_streak) red_draw_list.append(draw_count) red_fighter_longest_win_streak.append(longest_win_streak) red_total_losses.append(total_losses) red_total_wins.append(total_wins) ################################################################### #onto some data we do not need to calculate #Sig Strikes Landed: {SLpM} #Sig Strikes Percent {Str. Acc} blue_strikes_raw = blue_soup.find_all('li', {'class':'b-list__box-list-item b-list__box-list-item_type_block'}) red_strikes_raw = red_soup.find_all('li', {'class':'b-list__box-list-item b-list__box-list-item_type_block'}) #print() #print() #print() s_count = 0 for s in blue_strikes_raw: if s_count == 5: blue_strikes = str(s) blue_strikes = blue_strikes.split('</i>') blue_strikes = blue_strikes[1] #print(temp) #There is a tag at the end we need to strip blue_strikes = blue_strikes[:-5] blue_strikes=blue_strikes.strip() #print(blue_strikes.strip()) blue_strike_list.append(blue_strikes) #print(s) if s_count == 6: blue_str_acc = str(s) blue_str_acc = blue_str_acc.split('</i>') blue_str_acc = blue_str_acc[1] #print(temp) #There is a tag at the end we need to strip blue_str_acc = blue_str_acc[:-5] blue_str_acc=blue_str_acc.strip() #print(blue_strikes.strip()) blue_strike_acc_list.append('.'+blue_str_acc[:-1]) #print(s) else: #I think we can get the value without caring too #much what it is..... 
This should save some coding isolate_stat = str(s) isolate_stat = isolate_stat.split('</i>') isolate_stat = isolate_stat[1] isolate_stat = isolate_stat[:-5] isolate_stat = isolate_stat.strip() if s_count == 13: sub_list.append(isolate_stat) if s_count == 10: td_list.append(isolate_stat) if s_count == 11: #td_accuracy #We need to remove the percent sign isolate_stat = isolate_stat[:-1] #We need to convert to decimal isolate_stat = float(isolate_stat) / 100 td_acc_list.append(isolate_stat) if s_count ==3: #Stance stance_list.append(isolate_stat) if s_count == 0: #Height #print(isolate_stat) #We need to split into feet and inches and #convert to cm.... isolate_stat = isolate_stat.replace("'", "") isolate_stat = isolate_stat.replace('"', '') height_tuple = isolate_stat.split(" ") if isolate_stat == ('--'): total_inches = 0 else: total_inches = int(height_tuple[0])*12 + int(height_tuple[1]) height_in_cm = total_inches * 2.54 #print(height_tuple) #print(total_inches) #print(height_in_cm) height_list.append(height_in_cm) if s_count == 2: #Reach isolate_stat = isolate_stat.replace('"', '') if isolate_stat == ('--'): reach_in_cm = height_in_cm else: reach_in_cm = int(isolate_stat) * 2.54 reach_list.append(reach_in_cm) if s_count == 1: #weight #print(isolate_stat) isolate_stat = isolate_stat.replace(" lbs.", '') #print(isolate_stat) weight_list.append(isolate_stat) if s_count == 4: #Age #print(isolate_stat) #print(formatted_date) if isolate_stat == '--': age = 30 else: birth_date = datetime.strptime(isolate_stat, "%b %d, %Y") age = date_datetime.year - birth_date.year - ((date_datetime.month, date_datetime.day) < (birth_date.month, birth_date.day)) blue_age_list.append(age) #print(s_count) #print(s) s_count+=1 #print() #print() #print() s_count = 0 for s in red_strikes_raw: if s_count == 5: red_strikes = str(s) red_strikes = red_strikes.split('</i>') red_strikes = red_strikes[1] #print(temp) #There is a tag at the end we need to strip red_strikes = red_strikes[:-5] red_strikes=red_strikes.strip() #print(blue_strikes.strip()) red_strike_list.append(red_strikes) #print(len(red_strike_list)) if s_count == 6: red_str_acc = str(s) red_str_acc = red_str_acc.split('</i>') red_str_acc = red_str_acc[1] #print(temp) #There is a tag at the end we need to strip red_str_acc = red_str_acc[:-5] red_str_acc=red_str_acc.strip() #print(blue_strikes.strip()) red_strike_acc_list.append('.'+red_str_acc[:-1]) #print(s) else: #I think we can get the value without caring too #much what it is..... This should save some coding isolate_stat = str(s) isolate_stat = isolate_stat.split('</i>') isolate_stat = isolate_stat[1] isolate_stat = isolate_stat[:-5] isolate_stat = isolate_stat.strip() if s_count == 13: red_sub_list.append(isolate_stat) if s_count == 10: red_td_list.append(isolate_stat) if s_count == 11: #td_accuracy #We need to remove the percent sign isolate_stat = isolate_stat[:-1] #We need to convert to decimal isolate_stat = float(isolate_stat) / 100 red_td_acc_list.append(isolate_stat) if s_count ==3: #Stance red_stance_list.append(isolate_stat) if s_count == 0: #Height #print(isolate_stat) #We need to split into feet and inches and #convert to cm.... 
isolate_stat = isolate_stat.replace("'", "") isolate_stat = isolate_stat.replace('"', '') height_tuple = isolate_stat.split(" ") total_inches = int(height_tuple[0])*12 + int(height_tuple[1]) height_in_cm = total_inches * 2.54 #print(height_tuple) #print(total_inches) #print(height_in_cm) red_height_list.append(height_in_cm) if s_count == 2: #Reach isolate_stat = isolate_stat.replace('"', '') if isolate_stat == '--': reach_in_cm = height_in_cm else: reach_in_cm = int(isolate_stat) * 2.54 red_reach_list.append(reach_in_cm) if s_count == 1: #weight #print(isolate_stat) isolate_stat = isolate_stat.replace(" lbs.", '') #print(isolate_stat) red_weight_list.append(isolate_stat) if s_count == 4: #Age if isolate_stat == '--': age = 30 else: birth_date = datetime.strptime(isolate_stat, "%b %d, %Y") age = date_datetime.year - birth_date.year - ((date_datetime.month, date_datetime.day) < (birth_date.month, birth_date.day)) red_age_list.append(age) s_count+=1 #THESE MIGHT BE FLIPPED! #Here we add all the lists to the dataframe if (is_most_recent): df['finish'] = finish_list df['finish_details'] = finish_details_list df['finish_round'] = finish_round_list df['finish_round_time'] = time_list def get_fight_time_secs(r, t): r = int(r) if r== '' or t == '': return '' else: calculated_time = 300 * (r-1) t_split = str(t).split(':') #Check for nan if t_split[0] != 'nan': #print(t_split[0]) calculated_time = calculated_time + 60 * int(t_split[0]) + int(t_split[1]) return calculated_time df['total_fight_time_secs'] = df.apply(lambda x: get_fight_time_secs(x['finish_round'], x['finish_round_time']), axis=1) df['Winner']= winner_list df['B_current_win_streak'] = blue_fighter_win_streak df['B_current_lose_streak'] = blue_fighter_lose_streak df['R_current_win_streak'] = red_fighter_win_streak df['R_current_lose_streak'] = red_fighter_lose_streak df['R_longest_win_streak'] = red_fighter_longest_win_streak df['B_longest_win_streak'] = blue_fighter_longest_win_streak df['B_losses'] = blue_total_losses df['R_losses'] = red_total_losses df['B_total_rounds_fought'] = blue_total_rounds df['R_total_rounds_fought'] = red_total_rounds df['B_total_title_bouts'] = blue_title_bouts df['R_total_title_bouts'] = red_title_bouts df['B_win_by_Decision_Majority'] = blue_total_maj_dec df['B_win_by_Decision_Split'] = blue_total_split_dec df['B_win_by_Decision_Unanimous'] = blue_total_un_dec df['B_win_by_KO/TKO'] = blue_total_ko df['B_win_by_Submission'] = blue_total_sub df['B_win_by_TKO_Doctor_Stoppage'] = 0 df['B_wins'] = blue_total_wins df['R_wins'] = red_total_wins df['R_win_by_Decision_Majority'] = red_total_maj_dec df['R_win_by_Decision_Split'] = red_total_split_dec df['R_win_by_Decision_Unanimous'] = red_total_un_dec df['R_win_by_KO/TKO'] = red_total_ko df['R_win_by_Submission'] = red_total_sub df['R_win_by_TKO_Doctor_Stoppage'] = 0 df['B_Reach_cms'] = reach_list df['B_Weight_lbs'] = weight_list df['R_Reach_cms'] = red_reach_list df['R_Weight_lbs'] = red_weight_list #Draws df['R_draw'] = red_draw_list df['B_draw'] = blue_draw_list df['B_avg_SIG_STR_landed'] = blue_strike_list df['R_avg_SIG_STR_landed'] = red_strike_list df['B_avg_SIG_STR_pct'] = blue_strike_acc_list df['R_avg_SIG_STR_pct'] = red_strike_acc_list df['B_avg_SUB_ATT'] = sub_list df['B_avg_TD_landed'] = td_list df['R_avg_SUB_ATT'] = red_sub_list df['R_avg_TD_landed'] = red_td_list df['B_avg_TD_pct'] = td_acc_list df['R_avg_TD_pct'] = red_td_acc_list df['B_Stance'] = stance_list df['B_Height_cms'] = height_list df['R_Stance'] = red_stance_list df['R_Height_cms'] = 
red_height_list df['B_age'] = blue_age_list df['R_age'] = red_age_list #Differences!!! df['win_streak_dif'] = df['B_current_win_streak'] - df['R_current_win_streak'] df['lose_streak_dif'] = df['B_current_lose_streak'] - df['R_current_lose_streak'] df['longest_win_streak_dif'] = df['B_longest_win_streak'] - df['R_longest_win_streak'] df['win_dif'] = df['B_wins'] - df['R_wins'] df['loss_dif'] = df['B_losses'] - df['R_losses'] df['total_round_dif'] = df['B_total_rounds_fought'] - df['R_total_rounds_fought'] df['total_title_bout_dif'] = df['B_total_title_bouts'] - df['R_total_title_bouts'] df['ko_dif'] = df['B_win_by_KO/TKO'] - df['R_win_by_KO/TKO'] df['sub_dif'] = df['B_win_by_Submission'] - df['R_win_by_Submission'] df['height_dif'] = df['B_Height_cms'] - df['R_Height_cms'] df['reach_dif'] = df['B_Reach_cms'] - df['R_Reach_cms'] df['sig_str_dif'] = df['B_avg_SIG_STR_landed'].astype(float) - df['R_avg_SIG_STR_landed'].astype(float) df['avg_sub_att_dif'] = df['B_avg_SUB_ATT'].astype(float) - df['R_avg_SUB_ATT'].astype(float) df['avg_td_dif'] = df['B_avg_TD_landed'].astype(float) - df['R_avg_TD_landed'].astype(float) df['empty_arena'] = 1 df['constant_1'] = 1 df['age_dif'] = df['B_age'] - df['R_age'] #print(blue_strike_acc_list) #print(sub_list) df.to_csv('scraper_helpers/scraped_event.csv', index=False) """ f = 'http://ufcstats.com/event-details/6e2b1d631832921d' html=urlopen(f) bs=BeautifulSoup(html, 'html.parser') """ """ title_fight_list = [] title_fight_raw = bs.find_all('tr', {'class':'b-fight-details__table-row'}) skip_row = True for f in title_fight_raw: if skip_row: skip_row = False else: #print(f) f = str(f) #print(f) if f.find('belt.png') > -1: title_fight_list.append(True) else: title_fight_list.append(False) print(title_fight_list) """ ###Output _____no_output_____ ###Markdown Get ranking data and update the scraped data ###Code %run scraper_helpers/get_rankings.ipynb %run scraper_helpers/combine_odds_and_completed_events.ipynb ###Output IS UPCOMING? 
False <tr><th scope="row"><a href="/fighters/Dustin-Poirier-2034"><span class="t-b-fcc">Dustin Poirier</span></a></th><td class="but-sg" data-li="[21,2,21773]"><span id="oID1021773212">-132</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[23,2,21773]"><span class="bestbet" id="oID1021773232">-125</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[22,2,21773]"><span id="oID1021773222">-129</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[24,2,21773]"><span id="oID1021773242">-130</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[27,2,21773]"><span id="oID1021773272">-139</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[25,2,21773]"><span id="oID1021773252">-129</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[26,2,21773]"><span id="oID1021773262">-129</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[19,2,21773]"><span class="bestbet" id="oID1021773192">-125</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[20,2,21773]"><span class="bestbet" id="oID1021773202">-125</span><span class="ard arage-3">▼</span></td><td class="but-sg" data-li="[1,2,21773]"><span id="oID1021773012">-130</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[12,2,21773]"><span id="oID1021773122">-132</span><span "="" class="aru arage-2">▲</span></td><td class="button-cell but-si" data-li="[2,21773]"><svg class="svg-i" focusable="false" preserveaspectratio="xMidYMid meet" viewbox="0 0 24 24"><g><path d="M19 3H5c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2zM9 17H7v-7h2v7zm4 0h-2V7h2v10zm4 0h-2v-4h2v4z"></path></g></svg></td><td class="prop-cell prop-cell-exp" data-mu="21773"> 119<span class="exp-ard"></span></td></tr> <tr><th scope="row"><a href="/fighters/Gilbert-Burns-4747"><span class="t-b-fcc">Gilbert Burns</span></a></th><td class="but-sg" data-li="[21,1,22453]"><span id="oID1022453211">+140</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[23,1,22453]"><span id="oID1022453231">+135</span><span class="ard arage-3">▼</span></td><td class="but-sg" data-li="[22,1,22453]"><span id="oID1022453221">+130</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[24,1,22453]"><span id="oID1022453241">+135</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[27,1,22453]"><span id="oID1022453271">+135</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[25,1,22453]"><span id="oID1022453251">+130</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[26,1,22453]"><span id="oID1022453261">+130</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[19,1,22453]"><span id="oID1022453191">+125</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[20,1,22453]"><span id="oID1022453201">+125</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[1,1,22453]"><span class="bestbet" id="oID1022453011">+142</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[12,1,22453]"><span class="bestbet" id="oID1022453121">+142</span><span "="" class="aru arage-2">▲</span></td><td class="button-cell but-si" data-li="[1,22453]"><svg class="svg-i" focusable="false" preserveaspectratio="xMidYMid meet" viewbox="0 0 24 24"><g><path d="M19 3H5c-1.1 0-2 .9-2 
2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2zM9 17H7v-7h2v7zm4 0h-2V7h2v10zm4 0h-2v-4h2v4z"></path></g></svg></td><td class="prop-cell prop-cell-exp" data-mu="22453"> 59<span class="exp-ard"></span></td></tr> <tr><th scope="row"><a href="/fighters/Tai-Tuivasa-7444"><span class="t-b-fcc">Tai Tuivasa</span></a></th><td class="but-sg" data-li="[21,2,22531]"><span id="oID1022531212">-132</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[23,2,22531]"><span id="oID1022531232">-125</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[22,2,22531]"><span id="oID1022531222">-130</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[24,2,22531]"><span id="oID1022531242">-120</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[27,2,22531]"><span id="oID1022531272">-122</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[25,2,22531]"><span id="oID1022531252">-130</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[26,2,22531]"><span id="oID1022531262">-130</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[19,2,22531]"><span id="oID1022531192">-125</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[20,2,22531]"><span id="oID1022531202">-133</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[1,2,22531]"><span id="oID1022531012">-117</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[12,2,22531]"><span class="bestbet" id="oID1022531122">-110</span><span "="" class="aru arage-2">▲</span></td><td class="button-cell but-si" data-li="[2,22531]"><svg class="svg-i" focusable="false" preserveaspectratio="xMidYMid meet" viewbox="0 0 24 24"><g><path d="M19 3H5c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2zM9 17H7v-7h2v7zm4 0h-2V7h2v10zm4 0h-2v-4h2v4z"></path></g></svg></td><td class="prop-cell prop-cell-exp" data-mu="22531"> 59<span class="exp-ard"></span></td></tr> <tr><th scope="row"><a href="/fighters/Irene-Aldana-5093"><span class="t-b-fcc">Irene Aldana</span></a></th><td class="but-sg" data-li="[21,1,23185]"><span id="oID1023185211">-102</span></td><td class="but-sg" data-li="[23,1,23185]"><span id="oID1023185231">+100</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[22,1,23185]"><span id="oID1023185221">-107</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[24,1,23185]"><span id="oID1023185241">+100</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[27,1,23185]"><span id="oID1023185271">-105</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[25,1,23185]"><span id="oID1023185251">-107</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[26,1,23185]"><span id="oID1023185261">-107</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[19,1,23185]"><span id="oID1023185191">-111</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[20,1,23185]"><span id="oID1023185201">-110</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[1,1,23185]"><span id="oID1023185011">+107</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[12,1,23185]"><span class="bestbet" id="oID1023185121">+114</span><span "="" class="aru arage-2">▲</span></td><td class="button-cell but-si" 
data-li="[1,23185]"><svg class="svg-i" focusable="false" preserveaspectratio="xMidYMid meet" viewbox="0 0 24 24"><g><path d="M19 3H5c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2zM9 17H7v-7h2v7zm4 0h-2V7h2v10zm4 0h-2v-4h2v4z"></path></g></svg></td><td class="prop-cell prop-cell-exp" data-mu="23185"> 58<span class="exp-ard"></span></td></tr> <tr><th scope="row"><a href="/fighters/Carlos-Condit-143"><span class="t-b-fcc">Carlos Condit</span></a></th><td class="but-sg" data-li="[21,1,23192]"><span id="oID1023192211">+148</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[23,1,23192]"><span id="oID1023192231">+155</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[22,1,23192]"><span id="oID1023192221">+150</span></td><td class="but-sg" data-li="[24,1,23192]"><span id="oID1023192241">+155</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[27,1,23192]"><span id="oID1023192271">+150</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[25,1,23192]"><span id="oID1023192251">+150</span></td><td class="but-sg" data-li="[26,1,23192]"><span id="oID1023192261">+150</span></td><td class="but-sg" data-li="[19,1,23192]"><span id="oID1023192191">+150</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[20,1,23192]"><span id="oID1023192201">+145</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[1,1,23192]"><span class="bestbet" id="oID1023192011">+157</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[12,1,23192]"><span id="oID1023192121">+152</span><span "="" class="ard arage-2">▼</span></td><td class="button-cell but-si" data-li="[1,23192]"><svg class="svg-i" focusable="false" preserveaspectratio="xMidYMid meet" viewbox="0 0 24 24"><g><path d="M19 3H5c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2zM9 17H7v-7h2v7zm4 0h-2V7h2v10zm4 0h-2v-4h2v4z"></path></g></svg></td><td class="prop-cell prop-cell-exp" data-mu="23192"> 53<span class="exp-ard"></span></td></tr> <tr><th scope="row"><a href="/fighters/Niko-Price-6895"><span class="t-b-fcc">Niko Price</span></a></th><td class="but-sg" data-li="[21,2,23193]"><span class="bestbet" id="oID1023193212">+180</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[23,2,23193]"><span id="oID1023193232">+150</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[22,2,23193]"><span id="oID1023193222">+150</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[24,2,23193]"><span id="oID1023193242">+160</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[27,2,23193]"><span id="oID1023193272">+150</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[25,2,23193]"><span id="oID1023193252">+150</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[26,2,23193]"><span id="oID1023193262">+150</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[19,2,23193]"><span id="oID1023193192">+162</span><span "="" class="aru arage-2">▲</span></td><td class="but-sg" data-li="[20,2,23193]"><span id="oID1023193202">+140</span><span class="aru arage-3">▲</span></td><td class="but-sg" data-li="[1,2,23193]"><span id="oID1023193012">+170</span><span "="" class="ard arage-2">▼</span></td><td class="but-sg" data-li="[12,2,23193]"><span id="oID1023193122">+164</span><span "="" class="ard 
arage-2">▼</span></td><td class="button-cell but-si" data-li="[2,23193]"><svg class="svg-i" focusable="false" preserveaspectratio="xMidYMid meet" viewbox="0 0 24 24"><g><path d="M19 3H5c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2zM9 17H7v-7h2v7zm4 0h-2V7h2v10zm4 0h-2v-4h2v4z"></path></g></svg></td><td class="prop-cell prop-cell-exp" data-mu="23193"> 53<span class="exp-ard"></span></td></tr> ###Markdown if we are gathering data for an upcoming event lets make the dummy file ###Code if is_upcoming: df_upcoming = pd.read_csv('../data/kaggle_data/upcoming-event.csv') df_fighters = df_upcoming[['R_fighter', 'B_fighter']] df_dummy = df_fighters df_dummy['R_prob'] = 0.5 df_dummy['B_prob'] = 0.5 #df_dummy.to_csv('task-dummy.csv', index=False) df_dummy.to_csv('../data/kaggle_data/task-dummy.csv', index=False) ###Output _____no_output_____
hypothesisapi/notebooks/Using_hypothesisapi.ipynb
###Markdown NEW STUFF https://h.readthedocs.io/en/latest/api/ ###Code from hypothesis_settings import (USERNAME, TOKEN) import requests import json API_URL = "https://hypothes.is/api" # search rdhyee headers = {'Authorization': 'Bearer ' + TOKEN, 'Content-Type': 'application/json;charset=utf-8' } params = {'user': '[email protected]'} r = requests.get(API_URL + "/search", headers=headers, params=params) print(json.dumps(r.json(), indent=2)) # pretty-print the search response # GET /api # Host: hypothes.is # Accept: application/json ###Output _____no_output_____ ###Markdown OLD STUFF ###Code # package up logic in a package from hypothesisapi import API # include your hypothes.is USERNAME, TOKEN as parameters in a hypothesis_settings.py file in your sys.path # get token from https://hypothes.is/profile/developer from hypothesis_settings import (USERNAME, TOKEN) h_api = API(USERNAME, TOKEN) # https://hypothes.is/a/8qXlSF8gTQmeh29v1XoErg # https://via.hypothes.is/http://www.meetup.com/SFOpenAnnotation/events/221577503/ # http://www.meetup.com/SFOpenAnnotation/events/221577503/ # http://www.webcitation.org/6Y2WtcAUJ annotation_id = "8qXlSF8gTQmeh29v1XoErg" rows = h_api.search_id(annotation_id) rows # Looking at the types of data fields in the annotations (here: searching for rdhyee while authenticated as rdhyee) from itertools import islice from collections import Counter key_count = Counter() for (i,row) in enumerate(islice(h_api.search(user='rdhyee', offset=0),None)): key_count.update(row.keys()) key_count h_api.get_annotation(annotation_id) ###Output _____no_output_____ ###Markdown Annotations Keys that seem to be present in all annotations: * id * created * updated * uri * permissions * user * consumer ###Code a0 = rows['rows'][0] (a0['id'], a0['created'], a0['updated'], a0['uri'], a0['permissions'], a0['user'], a0['consumer']) # for the annotation for the meetup a0.keys() a0['document'] a0['target'] ###Output _____no_output_____ ###Markdown Analyzing rdhyee's annotations ###Code import numpy as np import pandas as pd from pandas import (DataFrame, Series) import matplotlib.pyplot as plt from urllib.parse import urlparse # needed below to pull the domain out of each uri # package up logic in a package from hypothesisapi import API # include your hypothes.is USERNAME, TOKEN as parameters in a hypothesis_settings.py file in your sys.path from hypothesis_settings import (USERNAME, TOKEN) h_api = API(USERNAME, TOKEN) h_api.login() rdhyee_annotations = list(h_api.search(user='rdhyee', offset=0, )) len(rdhyee_annotations) df = DataFrame(rdhyee_annotations) df.head() import datetime import dateutil.parser s = df.created.apply(dateutil.parser.parse).apply(lambda d: (d.year, d.month)).value_counts() s (s.sort_index(ascending=True).plot(kind='bar', color='green', # x='year/month', y='# of annotations' )) # get domain of uri df.uri.apply(lambda url: urlparse(url)[1]).value_counts() df.sort_values(by='created', ascending=True) ###Output _____no_output_____ ###Markdown using Jon's library ###Code from hypothesis_settings import (USERNAME, TOKEN) from Hypothes_is import Hypothesis, HypothesisAnnotation max_results = 100 # cap on results to fetch; must be defined before constructing the client h = Hypothesis(USERNAME, TOKEN, max_results) list(h.search_all({'sort':'updated', 'order':'desc', 'user':'[email protected]', 'limit':1})) def test_create(): return (h.create_annotation_with_target_using_only_text_quote( url="https://www.nytimes.com/2017/05/06/world/europe/france-election-emmanuel-macron-marine-le-pen.html", exact=u"""Then, barely an hour before the official close of campaigning at midnight Friday, the staff of the presumed front-runner, Emmanuel Macron, a 39-year-old former investment banker, announced that his campaign had been the target of a “massive and coordinated” hacking operation.""", prefix="for its raw anger and insolence.", suffix="Internal emails and other docume", text="hello", tags=() )) r = test_create() r r.text ###Output _____no_output_____
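###Markdown Clean-up sketch (not in the original notebook): since `test_create()` above posts a real annotation, it can be removed again through the raw REST API. This assumes the create call returned the new annotation's JSON, so that its `id` field is available via `r.json()` — adjust if your wrapper returns something else. ###Code
import requests

API_URL = "https://hypothes.is/api"
headers = {'Authorization': 'Bearer ' + TOKEN}

# id of the annotation created by test_create(); assumes r behaves like a requests response
annotation_id = r.json().get('id')
if annotation_id:
    resp = requests.delete(API_URL + "/annotations/" + annotation_id, headers=headers)
    print(resp.status_code, resp.text)
###Output _____no_output_____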
examples/height.ipynb
###Markdown Think BayesThis notebook presents code and exercises from Think Bayes, second edition.Copyright 2018 Allen B. DowneyMIT License: https://opensource.org/licenses/MIT ###Code # Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' import numpy as np import pandas as pd from thinkbayes2 import Pmf, Cdf, Suite, Joint import thinkplot ###Output _____no_output_____ ###Markdown The height problemFor adult male residents of the US, the mean and standard deviation of height are 178 cm and 7.7 cm. For adult female residents the corresponding stats are 163 cm and 7.3 cm. Suppose you learn that someone is 170 cm tall. What is the probability that they are male? Run this analysis again for a range of observed heights from 150 cm to 200 cm, and plot a curve that shows P(male) versus height. What is the mathematical form of this function? To represent the likelihood functions, I'll use `norm` from `scipy.stats`, which returns a "frozen" random variable (RV) that represents a normal distribution with given parameters. ###Code from scipy.stats import norm dist_height = dict(male=norm(178, 7.7), female=norm(163, 7.3)) dist_height['male'] ###Output _____no_output_____ ###Markdown Write a class that implements `Likelihood` using the frozen distributions. Here's starter code: ###Code class Height(Suite): def Likelihood(self, data, hypo): """ data: height in cm hypo: 'male' or 'female' """ return dist_height[hypo].SOMETHING(data) # Solution goes here ###Output _____no_output_____ ###Markdown Here's the prior. ###Code suite = Height(['male', 'female']) for hypo, prob in suite.Items(): print(hypo, prob) ###Output _____no_output_____ ###Markdown And the update: ###Code suite.Update(170) for hypo, prob in suite.Items(): print(hypo, prob) ###Output _____no_output_____ ###Markdown Compute the probability of being male as a function of height, for a range of values between 150 and 200. ###Code # Solution goes here # Solution goes here ###Output _____no_output_____ ###Markdown If you are curious, you can derive the mathematical form of this curve from the PDF of the normal distribution. How tall is A?Suppose I choose two residents of the U.S. at random. A is taller than B. How tall is A?What if I tell you that A is taller than B by more than 5 cm. How tall is A?For adult male residents of the US, the mean and standard deviation of height are 178 cm and 7.7 cm. For adult female residents the corresponding stats are 163 cm and 7.3 cm. Here are distributions that represent the heights of men and women in the U.S. ###Code dist_height = dict(male=norm(178, 7.7), female=norm(163, 7.3)) hs = np.linspace(130, 210) ps = dist_height['male'].pdf(hs) male_height_pmf = Pmf(dict(zip(hs, ps))); ps = dist_height['female'].pdf(hs) female_height_pmf = Pmf(dict(zip(hs, ps))); thinkplot.Pdf(male_height_pmf, label='Male') thinkplot.Pdf(female_height_pmf, label='Female') thinkplot.decorate(xlabel='Height (cm)', ylabel='PMF', title='Adult residents of the U.S.') ###Output _____no_output_____ ###Markdown Use `thinkbayes2.MakeMixture` to make a `Pmf` that represents the height of all residents of the U.S. ###Code # Solution goes here # Solution goes here ###Output _____no_output_____ ###Markdown Write a class that inherits from Suite and Joint, and provides a Likelihood function that computes the probability of the data under a given hypothesis. 
###Code # Solution goes here ###Output _____no_output_____ ###Markdown Write a function that initializes your `Suite` with an appropriate prior. ###Code # Solution goes here suite = make_prior(mix) suite.Total() thinkplot.Contour(suite) thinkplot.decorate(xlabel='B Height (cm)', ylabel='A Height (cm)', title='Posterior joint distribution') ###Output _____no_output_____ ###Markdown Update your `Suite`, then plot the joint distribution and the marginal distribution, and compute the posterior means for `A` and `B`. ###Code # Solution goes here # Solution goes here # Solution goes here # Solution goes here ###Output _____no_output_____ ###Markdown Think BayesThis notebook presents code and exercises from Think Bayes, second edition.Copyright 2018 Allen B. DowneyMIT License: https://opensource.org/licenses/MIT ###Code # Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' import numpy as np import pandas as pd from thinkbayes2 import Pmf, Cdf, Suite, Joint import thinkplot ###Output _____no_output_____ ###Markdown The height problemFor adult male residents of the US, the mean and standard deviation of height are 178 cm and 7.7 cm. For adult female residents the corresponding stats are 163 cm and 7.3 cm. Suppose you learn that someone is 170 cm tall. What is the probability that they are male? Run this analysis again for a range of observed heights from 150 cm to 200 cm, and plot a curve that shows P(male) versus height. What is the mathematical form of this function? To represent the likelihood functions, I'll use `norm` from `scipy.stats`, which returns a "frozen" random variable (RV) that represents a normal distribution with given parameters. ###Code from scipy.stats import norm dist_height = dict(male=norm(178, 7.7), female=norm(163, 7.3)) ###Output _____no_output_____ ###Markdown Write a class that implements `Likelihood` using the frozen distributions. Here's starter code: ###Code class Height(Suite): def Likelihood(self, data, hypo): """ data: height in cm hypo: 'male' or 'female' """ return dist_height[hypo].pdf(data) #Solution doesn't go here ###Output _____no_output_____ ###Markdown Here's the prior. ###Code suite = Height(['male', 'female']) for hypo, prob in suite.Items(): print(hypo, prob) ###Output male 0.5 female 0.5 ###Markdown And the update: ###Code suite.Update(170) for hypo, prob in suite.Items(): print(hypo, prob) ###Output male 0.4667199136812651 female 0.5332800863187349 ###Markdown Compute the probability of being male as a function of height, for a range of values between 150 and 200. ###Code def pMale(h): suite = Height(['male', 'female']) suite.Update(h) return suite['male'] hs=range(150,201); ps=[pMale(h) for h in hs]; thinkplot.plot(hs,ps) ###Output _____no_output_____ ###Markdown If you are curious, you can derive the mathematical form of this curve from the PDF of the normal distribution. How tall is A?Suppose I choose two residents of the U.S. at random. A is taller than B. How tall is A?What if I tell you that A is taller than B by more than 5 cm. How tall is A?For adult male residents of the US, the mean and standard deviation of height are 178 cm and 7.7 cm. For adult female residents the corresponding stats are 163 cm and 7.3 cm. Here are distributions that represent the heights of men and women in the U.S. 
###Code dist_height = dict(male=norm(178, 7.7), female=norm(163, 7.3)) hs = np.linspace(130, 210) ps = dist_height['male'].pdf(hs) male_height_pmf = Pmf(dict(zip(hs, ps))); ps = dist_height['female'].pdf(hs) female_height_pmf = Pmf(dict(zip(hs, ps))); thinkplot.Pdf(male_height_pmf, label='Male') thinkplot.Pdf(female_height_pmf, label='Female') thinkplot.decorate(xlabel='Height (cm)', ylabel='PMF', title='Adult residents of the U.S.') ###Output _____no_output_____ ###Markdown Use `thinkbayes2.MakeMixture` to make a `Pmf` that represents the height of all residents of the U.S. ###Code from thinkbayes2 import MakeMixture mix=MakeMixture(Pmf([male_height_pmf,female_height_pmf])); thinkplot.Pdf(mix) # Solution goes here ###Output _____no_output_____ ###Markdown Write a class that inherits from Suite and Joint, and provides a Likelihood function that computes the probability of the data under a given hypothesis. ###Code class Taller(Suite, Joint): def Likelihood(self, data, hypo): ha,hb=hypo if data=='A': return 1 if ha>hb else 0 elif data=='B': return 1 if hb>ha else 0 ###Output _____no_output_____ ###Markdown Write a function that initializes your `Suite` with an appropriate prior. ###Code def make_prior(mixA,mixB): out = Taller() for k1,v1 in mixA.Items(): for k2,v2 in mixB.Items(): out[(k1,k2)]=v1*v2 return out suite = make_prior(mix,mix) suite.Total() thinkplot.Contour(suite) thinkplot.decorate(xlabel='B Height (cm)', ylabel='A Height (cm)', title='Prior joint distribution') ###Output _____no_output_____ ###Markdown Update your `Suite`, then plot the joint distribution and the marginal distribution, and compute the posterior means for `A` and `B`. ###Code suite = make_prior(mix,mix); for i in range(8): suite.Update('A') if i < 7: suite = make_prior(suite.Marginal(0),mix) elif i == 7: suite = make_prior(suite.Marginal(0),suite.Marginal(0)) suite.Update('B') thinkplot.Contour(suite) thinkplot.decorate(xlabel='B Height (cm)', ylabel='A Height (cm)', title='Posterior joint distribution') thinkplot.Pdf(suite.Marginal(0,label='A')) thinkplot.Pdf(suite.Marginal(1,label='B')) thinkplot.decorate(xlabel='A Height (cm)', ylabel='Pmf', title='Posterior marginals') print("A's posterior mean is %f"%suite.Marginal(0).Mean()) print("B's posterior mean is %f"%suite.Marginal(1).Mean()) Pm=0 for h,p in suite.Marginal(0).Items(): Pm+=p*pMale(h) print("Chance that A is male is %f"%Pm) ###Output Chance that A is male is 0.948014 ###Markdown Think BayesThis notebook presents code and exercises from Think Bayes, second edition.Copyright 2016 Allen B. DowneyMIT License: https://opensource.org/licenses/MIT ###Code # Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' import numpy as np import pandas as pd from thinkbayes2 import Pmf, Cdf, Suite import thinkplot ###Output _____no_output_____ ###Markdown The height problemFor adult male residents of the US, the mean and standard deviation of height are 178 cm and 7.7 cm. For adult female residents the corresponding stats are 163 cm and 7.3 cm. Suppose you learn that someone is 170 cm tall. What is the probability that they are male? Run this analysis again for a range of observed heights from 150 cm to 200 cm, and plot a curve that shows P(male) versus height. What is the mathematical form of this function? 
Solution: To represent the likelihood functions, I'll use `norm` from `scipy.stats`, which returns a "frozen" random variable (RV) that represents a normal distribution with given parameters. ###Code from scipy.stats import norm dist_height = dict(male=norm(178, 7.7), female=norm(163, 7.3)) ###Output _____no_output_____ ###Markdown Now we can write a class that implements `Likelihood` using the frozen distributions. ###Code class Height(Suite): def Likelihood(self, data, hypo): """ data: height in cm hypo: 'male' or 'female' """ height = data return dist_height[hypo].pdf(height) ###Output _____no_output_____ ###Markdown Here's the prior. ###Code suite = Height(['male', 'female']) for hypo, prob in suite.Items(): print(hypo, prob) ###Output male 0.5 female 0.5 ###Markdown And the update: ###Code suite.Update(170) for hypo, prob in suite.Items(): print(hypo, prob) ###Output male 0.4667199136812651 female 0.5332800863187349 ###Markdown Someone who is 170 cm tall is slightly more likely to be female.More generally, we can compute the probability of being male as a function of height. ###Code heights = np.linspace(150, 200) prob_male = pd.Series(index=heights) for height in heights: suite = Height(['male', 'female']) suite.Update(height) prob_male[height] = suite['male'] ###Output _____no_output_____ ###Markdown And here's what it looks like. ###Code thinkplot.plot(prob_male) thinkplot.decorate(xlabel='Height (cm)', ylabel='Probability of being male') ###Output _____no_output_____ ###Markdown Think BayesThis notebook presents code and exercises from Think Bayes, second edition.Copyright 2018 Allen B. DowneyMIT License: https://opensource.org/licenses/MIT ###Code # Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' import numpy as np import pandas as pd from thinkbayes2 import Pmf, Cdf, Suite, Joint import thinkplot ###Output _____no_output_____ ###Markdown The height problemFor adult male residents of the US, the mean and standard deviation of height are 178 cm and 7.7 cm. For adult female residents the corresponding stats are 163 cm and 7.3 cm. Suppose you learn that someone is 170 cm tall. What is the probability that they are male? Run this analysis again for a range of observed heights from 150 cm to 200 cm, and plot a curve that shows P(male) versus height. What is the mathematical form of this function? To represent the likelihood functions, I'll use `norm` from `scipy.stats`, which returns a "frozen" random variable (RV) that represents a normal distribution with given parameters. ###Code from scipy.stats import norm dist_height = dict(male=norm(178, 7.7), female=norm(163, 7.3)) ###Output _____no_output_____ ###Markdown Write a class that implements `Likelihood` using the frozen distributions. Here's starter code: ###Code class Height(Suite): def Likelihood(self, data, hypo): """ data: height in cm hypo: 'male' or 'female' """ return 1 # Solution goes here ###Output _____no_output_____ ###Markdown Here's the prior. ###Code suite = Height(['male', 'female']) for hypo, prob in suite.Items(): print(hypo, prob) ###Output _____no_output_____ ###Markdown And the update: ###Code suite.Update(170) for hypo, prob in suite.Items(): print(hypo, prob) ###Output _____no_output_____ ###Markdown Compute the probability of being male as a function of height, for a range of values between 150 and 200. 
###Code # Solution goes here # Solution goes here ###Output _____no_output_____ ###Markdown If you are curious, you can derive the mathematical form of this curve from the PDF of the normal distribution. How tall is A?Suppose I choose two residents of the U.S. at random. A is taller than B. How tall is A?What if I tell you that A is taller than B by more than 5 cm. How tall is A?For adult male residents of the US, the mean and standard deviation of height are 178 cm and 7.7 cm. For adult female residents the corresponding stats are 163 cm and 7.3 cm. Here are distributions that represent the heights of men and women in the U.S. ###Code dist_height = dict(male=norm(178, 7.7), female=norm(163, 7.3)) hs = np.linspace(130, 210) ps = dist_height['male'].pdf(hs) male_height_pmf = Pmf(dict(zip(hs, ps))); ps = dist_height['female'].pdf(hs) female_height_pmf = Pmf(dict(zip(hs, ps))); thinkplot.Pdf(male_height_pmf, label='Male') thinkplot.Pdf(female_height_pmf, label='Female') thinkplot.decorate(xlabel='Height (cm)', ylabel='PMF', title='Adult residents of the U.S.') ###Output _____no_output_____ ###Markdown Use `thinkbayes2.MakeMixture` to make a `Pmf` that represents the height of all residents of the U.S. ###Code # Solution goes here # Solution goes here ###Output _____no_output_____ ###Markdown Write a class that inherits from Suite and Joint, and provides a Likelihood function that computes the probability of the data under a given hypothesis. ###Code # Solution goes here ###Output _____no_output_____ ###Markdown Write a function that initializes your `Suite` with an appropriate prior. ###Code # Solution goes here suite = make_prior(mix) suite.Total() thinkplot.Contour(suite) thinkplot.decorate(xlabel='B Height (cm)', ylabel='A Height (cm)', title='Posterior joint distribution') ###Output _____no_output_____ ###Markdown Update your `Suite`, then plot the joint distribution and the marginal distribution, and compute the posterior means for `A` and `B`. ###Code # Solution goes here # Solution goes here # Solution goes here # Solution goes here ###Output _____no_output_____
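###Markdown For the "mathematical form of this function" question posed earlier, here is one way to see it (a sketch using only the two normal PDFs defined above, with equal priors): $$P(\mathrm{male}\mid h)=\frac{f_m(h)}{f_m(h)+f_f(h)}=\frac{1}{1+\exp\big(-L(h)\big)},\qquad L(h)=\ln\frac{f_m(h)}{f_f(h)}=\ln\frac{\sigma_f}{\sigma_m}+\frac{(h-\mu_f)^2}{2\sigma_f^2}-\frac{(h-\mu_m)^2}{2\sigma_m^2}.$$ So the curve is a logistic (sigmoid) function of a quadratic in $h$; if the two standard deviations were equal it would reduce to a logistic function of $h$ itself.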
notebooks/perturbed-parameter-ensemble/global-warming-index.ipynb
###Markdown Import dependencies ###Code import numpy as np import sys import pandas as pd import matplotlib.pyplot as plt import seaborn as sn import scipy as sp from tqdm import tqdm import glob import xarray as xr from fair import * from fair.scripts.stats import * %matplotlib inline ###Output _____no_output_____ ###Markdown Calculation of the global warming indexHere we compute an estimate of the present-day contribution of anthropogenic forcing to the observed change in Global Mean Surface Temperature, taking key uncertainties into account. This estimate follows the methodology of Haustein et al., 2017.References:Haustein, K., Allen, M. R., Forster, P. M., Otto, F. E. L., Mitchell, D. M., Matthews, H. D., & Frame, D. J. (2017). A real-time Global Warming Index. Scientific Reports, 7(1), 15417. https://doi.org/10.1038/s41598-017-14828-5 Generate forcing shapesHere we generate a wide range of anthropogenic and natural forcing timeseries, sampling uncertainties in each component independently following Forster et al., 2013.References:Forster, P. M., Andrews, T., Good, P., Gregory, J. M., Jackson, L. S., & Zelinka, M. (2013). Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models. Journal of Geophysical Research: Atmospheres, 118(3), 1139–1150. https://doi.org/10.1002/jgrd.50174Myhre, G., Shindell, D., Bréon, F.-M., Collins, W., Fuglestvedt, J., Huang, J., … Zhang, H. (2013). Anthropogenic and Natural Radiative Forcing. In T. F. Stocker, D. Qin, G.-K. Plattner, M. Tignor, S. K. Allen, J. Boschung, … P. M. Midgley (Eds.), Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (pp. 659–740). https://doi.org/10.1017/CBO9781107415324.018 ###Code ## import base data: erf_ar6 = pd.read_csv('https://raw.githubusercontent.com/Priestley-Centre/ssp_erf/master/SSPs/ERF_ssp245_1750-2500.csv',index_col=0,dtype=float) erf_ar6 -= erf_ar6.loc[1750] ## update ozone to Skeie et al. 
2021 def get_skeie_ts(fname): data = pd.read_csv(fname,skiprows=3,sep=';',index_col=0)['NET adj.'] data.index = [int(x[:4]) for x in data.index] data.name = fname.split('/')[-1].split('_')[2] return data/1000 skeie_files = glob.glob('../../aux/input-data/_hidden/histO3/Kernel_output/TotRF/CMIP6/*.csv') skeie_o3_data = pd.concat([get_skeie_ts(x) for x in skeie_files],axis=1) oslo_CTM_data = get_skeie_ts('../../aux/input-data/_hidden/histO3/Kernel_output/TotRF/OsloCTM3/RF_NRFmethod_OsloCTM3_net_yearly.csv') skeie_o3_data = pd.concat([skeie_o3_data,oslo_CTM_data],axis=1) good_models = ['BCC-ESM1', 'CESM2-WACCM', 'GFDL-ESM4', 'GISS-E2-1-H', 'MRI-ESM2-0', 'OsloCTM3'] # take multi-model mean of "good" models skeie_o3_ts = skeie_o3_data[good_models].mean(axis=1).replace(np.nan,0) skeie_o3_ts.loc[1750] = -0.03 skeie_o3_ts -= skeie_o3_ts.loc[1750] skeie_o3_ts.sort_index(inplace=True) # remove OsloCTM3 bias relative to MMM for 2014-> data skeie_o3_ts.loc[2014:] += skeie_o3_data.loc[2010,good_models].mean() - skeie_o3_data.loc[2010,'OsloCTM3'] ## combine & drop previous ozone estimates erf_ar6 = erf_ar6.loc[1750:2019] erf_ar6.loc[:,'ozone'] = skeie_o3_ts.reindex(np.arange(1750,2021)).interpolate().loc[1750:2019] erf_ar6.drop(['o3_tropospheric','o3_stratospheric'],axis=1,inplace=True) ## creating a forcing response dataset def generate_forcing(N): ## generating many rf shapes for the GWI rf_factors = {} rf_factors['co2'] = sp.stats.norm(1,0.20/1.645).rvs(N) rf_factors['ch4'] = sp.stats.norm(1,0.28/1.645).rvs(N) rf_factors['n2o'] = sp.stats.norm(1,0.2/1.645).rvs(N) rf_factors['other_wmghg'] = sp.stats.norm(1,0.20/1.645).rvs(N) rf_factors['ozone'] = sp.stats.norm(1,0.5/1.645).rvs(N) rf_factors['h2o_stratospheric'] = sp.stats.norm(1,0.72/1.645).rvs(N) rf_factors['contrails'] = sp.stats.norm(1,0.75/1.645).rvs(N) rf_factors['land_use'] = sp.stats.norm(1,0.75/1.645).rvs(N) rf_factors['volcanic'] = sp.stats.norm(1,0.5/1.645).rvs(N) rf_factors['solar'] = sp.stats.norm(1,1/1.645).rvs(N) pct_5 = 0.04 / 0.08 pct_95 = 0.18 / 0.08 sigma = (np.log(pct_95) - np.log(pct_5)) / (sp.stats.norm().ppf(0.95)-sp.stats.norm().ppf(0.05)) mu = np.log(pct_5) - sigma * sp.stats.norm().ppf(0.05) rf_factors['bc_on_snow'] = sp.stats.lognorm(s=sigma,scale=np.exp(mu)).rvs(N) ant_rf_ensemble = np.zeros((270,N)) nat_rf_ensemble = np.zeros((270,N)) for agent in ['co2','ch4','n2o','other_wmghg','ozone','h2o_stratospheric','contrails','land_use','bc_on_snow']: ant_rf_ensemble += rf_factors[agent]*erf_ar6.loc[1750:2019,agent].values[:,None] for agent in ['volcanic','solar']: nat_rf_ensemble += rf_factors[agent]*erf_ar6.loc[1750:2019,agent].values[:,None] ## aerosol distribution ERFari_shape = erf_ar6.loc[1750:2019,['aerosol-radiation_interactions']].values.T ERFaci_shape = erf_ar6.loc[1750:2019,['aerosol-cloud_interactions']].values.T ## construct the Smith distributions def fit_skewnorm(x,X,percentiles): distr = sp.stats.skewnorm(*x) return sum(abs(distr.ppf(percentiles) - X)) ERFaci_smith_sknorm_params = sp.optimize.minimize(fit_skewnorm,x0=[1,1,1],args=([0.13,0.59,1.17],[0.05,0.5,0.95]),method='nelder-mead').x ERFari_smith_sknorm_params = sp.optimize.minimize(fit_skewnorm,x0=[1,1,1],args=([0.07,0.27,0.6],[0.05,0.5,0.95]),method='nelder-mead').x ERFaci_smith_sknorm_distr = -1*sp.stats.skewnorm(*ERFaci_smith_sknorm_params).rvs(N)[:,None] ERFari_smith_sknorm_distr = -1*sp.stats.skewnorm(*ERFari_smith_sknorm_params).rvs(N)[:,None] ERFari_samples = ERFari_smith_sknorm_distr / ERFari_shape[0,-1] * ERFari_shape ERFaci_samples = 
ERFaci_smith_sknorm_distr / ERFaci_shape[0,-1] * ERFaci_shape tot_aer = (ERFari_samples+ERFaci_samples).T ## Combine all anthro forcings ant_rf_ensemble += tot_aer ## rf_nat = pd.DataFrame(nat_rf_ensemble,index=np.arange(1750,2020),columns=pd.MultiIndex.from_product([['forcing_'+str(x) for x in np.arange(N)],['forcing']])).loc[:2019] rf_ant = pd.DataFrame(ant_rf_ensemble,index=np.arange(1750,2020),columns=pd.MultiIndex.from_product([['forcing_'+str(x) for x in np.arange(N)],['forcing']])).loc[:2019] return rf_ant,rf_nat # set the number of forcing shapes to use N_forc = 5000 rf_ant,rf_nat = generate_forcing(N_forc) fig,ax = plt.subplots(1,2,figsize=(8,3)) rf_ant.plot(ax=ax[0],lw=0.02,c='k',legend=None) rf_nat.plot(ax=ax[1],lw=0.02,c='k',legend=None) ###Output _____no_output_____ ###Markdown Generate response parametersHere we generate a range of response model parameters, aiming to sample the full range of behaviours exhibited. To do this, we sample a range of response timescales, and realised warming fractions. These ranges correspond approximately to the parameter ranges observed within CMIP5/6. ###Code def generate_response_params(): ## create the response parameter ranges: ### based on inferred ranges from CMIP6 - 18 combinations total response_names = ['response_'+str(x) for x in np.arange(24)] response_params = pd.DataFrame(index=['d','q'],columns = pd.MultiIndex.from_product([response_names,[1,2,3]])).apply(pd.to_numeric) response_params.loc[:] = 0 d1_range = [0.2,0.8,1.4,2] d2_range = [4,8,12,16] d3_range = [100,200,400,800] q1_range = [0.04,0.16,0.28,0.4] RWF_range = [0.3 , 0.4 , 0.5, 0.6, 0.7, 0.8] ECS=3 i=0 for d_num in np.arange(4): for RWF in RWF_range: response_params.loc['d',(response_names[i],1)] = d1_range[d_num] response_params.loc['d',(response_names[i],2)] = d2_range[d_num] response_params.loc['d',(response_names[i],3)] = d3_range[d_num] response_params.loc['q',(response_names[i],1)] = q1_range[d_num] v1 = (1-(d1_range[d_num]/69.66) * (1-np.exp(-69.66/d1_range[d_num])) ) v2 = (1-(d2_range[d_num]/69.66) * (1-np.exp(-69.66/d2_range[d_num])) ) v3 = (1-(d3_range[d_num]/69.66) * (1-np.exp(-69.66/d3_range[d_num])) ) TCR = RWF * ECS F_2x = 3.76 q3 = (((TCR/F_2x) - q1_range[d_num]*(v1-v2) - (ECS/F_2x)*v2) / (v3-v2)) q2 = (ECS/F_2x - q1_range[d_num] - q3) response_params.loc['q',(response_names[i],2)] = q2 response_params.loc['q',(response_names[i],3)] = q3 i+=1 ## remove any that are unphysical (negative parameters) response_params = response_params.reindex([i for i,x in ((response_params<0).sum().unstack().sum(axis=1)==1).iteritems() if not x],axis=1,level=0) return response_params response_params = generate_response_params() ###Output _____no_output_____ ###Markdown Generate temperature profilesRun the forcing shapes generated over the range of impulse-response model parameters. 
###Code ## generate the temperature responses: null_gas_params = pd.read_csv('../../aux/parameter-sets/Complete_gas_cycle_params.csv',header=[0,1],index_col=0).reindex(['carbon_dioxide'],axis=1,level=1) ## temp_ant = run_FaIR(emissions_in=return_empty_emissions(rf_ant,gases_in=['carbon_dioxide']),gas_parameters=null_gas_params,forcing_in=rf_ant,thermal_parameters=response_params)['T'].droplevel(1,axis=1) temp_nat = run_FaIR(emissions_in=return_empty_emissions(rf_nat,gases_in=['carbon_dioxide']),gas_parameters=null_gas_params,forcing_in=rf_nat,thermal_parameters=response_params)['T'].droplevel(1,axis=1) ###Output Integrating 5000 scenarios, 1 gas cycle parameter sets, 18 thermal response parameter sets, over ['carbon_dioxide'] forcing agents, between 1750 and 2019... ###Markdown save temperature series to netcdfs for combination with OLSE coefficients later ###Code xr.DataArray(temp_ant.values, dims=['time','index'], coords=dict(time=pd.to_datetime(temp_ant.index.astype('str')),index=temp_ant.columns)).unstack('index').rename({'Scenario':'forcing_mem','Thermal set':'response_mem'}).to_netcdf('../../aux/output-data/global-warming-index/ant_temperature.nc') xr.DataArray(temp_nat.values, dims=['time','index'], coords=dict(time=pd.to_datetime(temp_ant.index.astype('str')),index=temp_ant.columns)).unstack('index').rename({'Scenario':'forcing_mem','Thermal set':'response_mem'}).to_netcdf('../../aux/output-data/global-warming-index/nat_temperature.nc') ###Output _____no_output_____ ###Markdown Retrieve CMIP6 internal variability & subsampleWe generate many samples of internal variability from CMIP6 piControl simulations. We reject the samples if the drift is greater than 0.15 K / century. Overall, we generate 102 samples of internal variability of the same length as our observational GMST datasets, two samples (drawn at random from the piControl) from each of the 51 CMIP6 models available. 
###Code ## computing the uncertainty from internal variability def generate_IV(series_length=170): ### get internal variability timeseries: piControl_data = pd.read_csv('../../aux/input-data/CMIP6/piControl.csv',index_col=0) for i,x in enumerate(['source','member','variable']): piControl_data.loc[x] = [x.split('_')[i+1] for x in piControl_data.columns] CMIP6_int_var=piControl_data.T.set_index(['source','member','variable']).T.xs('tas',axis=1,level=-1).apply(pd.to_numeric) ### subsample 100 * 170 year slices ### check if drift, discard if drift > 0.15 K / century ### randomly pick one non-drifting sample CMIP6_int_var_samples = pd.DataFrame(index=np.arange(series_length)) for model,data in CMIP6_int_var.iteritems(): arr = data.dropna().values chunksize = arr.size if chunksize<series_length: continue starting_points = np.random.choice(np.arange(chunksize-series_length),100) nodrift_points = [] for s in starting_points: subarr = arr[s:s+series_length] drift = sp.stats.linregress(np.arange(series_length),subarr).slope if abs(drift)*100>0.15: continue else: nodrift_points+=[s] chosen_start = np.random.choice(nodrift_points) index_name = '_'.join(list(model))+'_'+str(chosen_start) CMIP6_int_var_samples.loc[:,index_name] = arr[chosen_start:chosen_start+series_length] - arr[chosen_start:chosen_start+series_length].mean() ### remove model degeneracies (take first member of each model): models = [] chosen_members = [] for model,data in CMIP6_int_var_samples.dropna(axis=1).iteritems(): if model.split('_')[0] in models: continue else: chosen_members+=[model] models+=[model.split('_')[0]] return CMIP6_int_var_samples[chosen_members] # 50 members seems insufficient to fully span the range of internal variability uncertainty, so we randomly sample from each model twice CMIP6_int_var_samples_nondeg = pd.concat([generate_IV(),generate_IV()],axis=1) CMIP6_int_var_samples_nondeg.index = np.arange(1850,2020) CMIP6_int_var_samples_nondeg.plot(legend=None,lw=0.1,c='k') ###Output _____no_output_____ ###Markdown Choose and import the observational temperature dataset (updated to start of 2021)Available datasets are below. We add uncertainty from the HadCRUT5 ensemble to non-ensemble products (GISTEMP, NOAA & Berkeley) as this represents the most conservative estimate of uncertainty.[GISTEMP](https://data.giss.nasa.gov/gistemp/) - single series, augment with HadCRUT5 ensemble- ERSSTv5 + GHCNv4[NOAA](https://www.ncei.noaa.gov/data/noaa-global-surface-temperature/v5/access/timeseries/) - single series, augment with HadCRUT5 ensemble- ERSSTv5 + GHCNv4[BERKELEY](http://berkeleyearth.org/data/) - singles series, augment with HadCRUT5 ensemble- HadSST3 + Berkeley land[CW](https://www-users.york.ac.uk/~kdc3/papers/coverage2013/series.html) - 99 (100) member ensemble- infilled HadCRUT4[HadCRUT5](http://data.ceda.ac.uk/badc/ukmo-hadobs/data/insitu/MOHC/HadOBS/HadCRUT/HadCRUT5/analysis/diagnostics) - 200 member ensemble- HadSST4 + CRUTEM5[HadCRUT4](https://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html) - 100 member ensemble- HadSST3 + CRUTEM4References:Morice, C. P., Kennedy, J. J., Rayner, N. A., Winn, J. P., Hogan, E., Killick, R. E., … Simpson, I. R. (2020). An updated assessment of near‐surface temperature change from 1850: the HadCRUT5 dataset. Journal of Geophysical Research: Atmospheres. https://doi.org/10.1029/2019JD032361Morice, C. P., Kennedy, J. J., Rayner, N. A., Jones, P. D., P., M. C., J., K. J., … D., J. P. (2011). 
Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4 data set. Journal of Geophysical Research: Atmospheres, 117(D8). https://doi.org/10.1029/2011JD017187Cowtan, K., & Way, R. G. (2014). Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. Quarterly Journal of the Royal Meteorological Society, 140(683), 1935–1944. https://doi.org/10.1002/qj.2297Lenssen, N. J. L., Schmidt, G. A., Hansen, J. E., Menne, M. J., Persin, A., Ruedy, R., & Zyss, D. (2019). Improvements in the uncertainty model in the Goddard Institute for Space Studies Surface Temperature (GISTEMP) analysis. Journal of Geophysical Research: Atmospheres, 2018JD029522. https://doi.org/10.1029/2018JD029522Vose, R. S., Arndt, D., Banzon, V. F., Easterling, D. R., Gleason, B., Huang, B., … Wuertz, D. B. (2012). NOAA’s Merged Land–Ocean Surface Temperature Analysis. Bulletin of the American Meteorological Society, 93(11), 1677–1685. https://doi.org/10.1175/BAMS-D-11-00241.1Rohde, R. A., & Hausfather, Z. (2020). The Berkeley Earth Land/Ocean Temperature Record. Earth System Science Data, 12(4), 3469–3479. https://doi.org/10.5194/essd-12-3469-2020 ###Code def get_GMST(select_dataset): ## retrieve chosen dataset: if select_dataset == 'HadCRUT5': GMST = xr.open_dataset('../../aux/input-data/Temperature-observations/HadCRUT.5.0.1.0.analysis.ensemble_series.global.monthly.nc').tas.to_pandas().T GMST.columns.name = 'HadCRUT5_obs_mem' if select_dataset == 'HadCRUT4': GMST = pd.concat([pd.read_csv(x,header=None,delim_whitespace=True,index_col=0,usecols=[0,1]).iloc[:,0] for x in sorted(glob.glob('../../aux/input-data/Temperature-observations/HadCRUT.4.6.0.0.monthly_ns_avg_realisations/*.txt'))],axis=1,keys=np.arange(100)) GMST.index = pd.to_datetime(GMST.index) GMST.columns.name = 'HadCRUT4_obs_mem' if select_dataset == 'NOAA': GMST = pd.read_csv('../../aux/input-data/Temperature-observations/aravg.mon.land_ocean.90S.90N.v5.0.0.202101.asc',delim_whitespace=True,names=['date','month','anom','unc_full'],usecols=[0,1,2,3]).apply(pd.to_numeric) GMST.index = pd.to_datetime(GMST.date.astype(str)+'-'+GMST.month.astype(str)) unc = xr.open_dataset('../../aux/input-data/Temperature-observations/HadCRUT.5.0.1.0.analysis.ensemble_series.global.monthly.nc').tas.to_pandas().T unc = unc.sub(unc.median(axis=1),axis=0) GMST = (unc.loc['1880':'2020']+GMST.sort_index().loc['1880':'2020',['anom']].values) GMST.columns.name = 'HadCRUT5_obs_mem' if select_dataset == 'GISTEMP': GMST = pd.read_csv('../../aux/input-data/Temperature-observations/GLB.Ts+dSST.csv',skiprows=2,names=['year']+list(range(1,13)),usecols=range(13),index_col=0).unstack().reset_index() GMST.index = pd.to_datetime(GMST.year.astype(str)+'-'+GMST.level_0.astype(str)) unc = xr.open_dataset('../../aux/input-data/Temperature-observations/HadCRUT.5.0.1.0.analysis.ensemble_series.global.monthly.nc').tas.to_pandas().T unc = unc.sub(unc.median(axis=1),axis=0) GMST = (unc.loc['1880':'2020']+GMST.sort_index().loc['1880':'2020',[0]].astype(float).values) GMST.columns.name = 'HadCRUT5_obs_mem' if select_dataset == 'CW': GMST=pd.read_csv('../../aux/input-data/Temperature-observations/had4_krig_ensemble_v2_0_0.txt',delim_whitespace=True,index_col=0,names=['date']+list(range(100))) GMST.index = pd.to_datetime(GMST.index.astype(int).astype(str)+'-'+np.ceil((GMST.index%1)*12).astype(int).astype(str)) # current version of the dataset has a null member at 32 GMST = GMST.drop(32,axis=1) GMST.columns.name = 
'CW_obs_mem' if select_dataset == 'BERKELEY': # NB this uses air temps above sea ice, not water temps GMST = pd.read_csv('../../aux/input-data/Temperature-observations/Land_and_Ocean_complete.txt',skiprows=76,skipfooter=2057,delim_whitespace=True,usecols=[0,1,2,3],names=['date','month','anom','unc_full']) GMST.index = pd.to_datetime(GMST.date.astype(str)+'-'+GMST.month.astype(str)) unc = xr.open_dataset('../../aux/input-data/Temperature-observations/HadCRUT.5.0.1.0.analysis.ensemble_series.global.monthly.nc').tas.to_pandas().T unc = unc.sub(unc.median(axis=1),axis=0) GMST = (unc.loc['1850':'2020']+GMST.sort_index().loc['1850':'2020',['anom']].values) GMST.columns.name = 'HadCRUT5_obs_mem' return GMST def run_GWI(select_dataset,temp_ant,temp_nat): GMST = get_GMST(select_dataset) ## resample GMST to annual: GMST = GMST.resample('y').mean() GMST.index = GMST.index.year.astype(int) GMST = GMST.loc[:2019] # regress temps vs observed temps ant_coefs = np.empty((GMST.shape[1],temp_ant.shape[1])) nat_coefs = np.empty((GMST.shape[1],temp_nat.shape[1])) R2_vals = np.empty((GMST.shape[1],temp_ant.shape[1])) temp_vstack = np.array([temp_ant.loc[GMST.index].values,temp_nat.loc[GMST.index].values]).T for i in tqdm(np.arange(ant_coefs.shape[1])): _lreg = OLSE_NORM(temp_vstack[i],GMST.values) ant_coefs[:,i] = _lreg['coefs'][0] nat_coefs[:,i] = _lreg['coefs'][1] R2_vals[:,i] = _lreg['R2'] ## save results xr.DataArray(ant_coefs, dims=[GMST.columns.name,'index'], coords={GMST.columns.name:GMST.columns,'index':temp_ant.columns}).unstack('index').rename({'Scenario':'forcing_mem','Thermal set':'response_mem'}).to_netcdf('../../aux/output-data/global-warming-index/ant_coefs_forc_'+select_dataset+'.nc') xr.DataArray(nat_coefs, dims=[GMST.columns.name,'index'], coords={GMST.columns.name:GMST.columns,'index':temp_nat.columns}).unstack('index').rename({'Scenario':'forcing_mem','Thermal set':'response_mem'}).to_netcdf('../../aux/output-data/global-warming-index/nat_coefs_forc_'+select_dataset+'.nc') xr.DataArray(R2_vals, dims=[GMST.columns.name,'index'], coords={GMST.columns.name:GMST.columns,'index':temp_nat.columns}).unstack('index').rename({'Scenario':'forcing_mem','Thermal set':'response_mem'}).to_netcdf('../../aux/output-data/global-warming-index/R2_forc_'+select_dataset+'.nc') ## regress temps vs IV ant_coefs_iv = np.empty((CMIP6_int_var_samples_nondeg.shape[1],temp_ant.shape[1])) nat_coefs_iv = np.empty((CMIP6_int_var_samples_nondeg.shape[1],temp_nat.shape[1])) R2_vals_iv = np.empty((CMIP6_int_var_samples_nondeg.shape[1],temp_nat.shape[1])) IV_regress = CMIP6_int_var_samples_nondeg.loc[GMST.index].values for i in tqdm(np.arange(ant_coefs_iv.shape[1])): _lreg = OLSE_NORM(temp_vstack[i],IV_regress) ant_coefs_iv[:,i] = _lreg['coefs'][0] nat_coefs_iv[:,i] = _lreg['coefs'][1] R2_vals_iv[:,i] = _lreg['R2'] xr.DataArray(ant_coefs_iv, dims=['int_var_mem','index'], coords={'int_var_mem':CMIP6_int_var_samples_nondeg.columns,'index':temp_ant.columns}).unstack('index').rename({'Scenario':'forcing_mem','Thermal set':'response_mem'}).to_netcdf('../../aux/output-data/global-warming-index/ant_coefs_IV_'+select_dataset+'.nc') xr.DataArray(nat_coefs_iv, dims=['int_var_mem','index'], coords={'int_var_mem':CMIP6_int_var_samples_nondeg.columns,'index':temp_nat.columns}).unstack('index').rename({'Scenario':'forcing_mem','Thermal set':'response_mem'}).to_netcdf('../../aux/output-data/global-warming-index/nat_coefs_IV_'+select_dataset+'.nc') xr.DataArray(R2_vals_iv, dims=['int_var_mem','index'], 
coords={'int_var_mem':CMIP6_int_var_samples_nondeg.columns,'index':temp_nat.columns}).unstack('index').rename({'Scenario':'forcing_mem','Thermal set':'response_mem'}).to_netcdf('../../aux/output-data/global-warming-index/R2_IV_'+select_dataset+'.nc') for select_dataset in ['HadCRUT5','HadCRUT4','NOAA','GISTEMP','CW','BERKELEY']: print('calculating GWI for '+select_dataset) run_GWI(select_dataset,temp_ant,temp_nat) ###Output calculating GWI for HadCRUT5 ###Markdown Combine the different datasets & compute the rate of warmingWe do this for the anthropogenic component in isolation since that is what we use to constrain our perturbed parameter ensemble. Data is converted to single precision upon loading as double is overkill.We then subsample & save the resulting data as a numpy input file. *Recommend a kernel restart before attempting this.* ###Code ## get temperature timeseries ant_temps = xr.open_dataarray('../../aux/output-data/global-warming-index/ant_temperature.nc',chunks={'forcing_mem':100}) T_2010_2019 = (ant_temps.sel(time=slice('2010','2019')).mean('time')-ant_temps.sel(time=slice('1850','1900')).mean('time')).astype(np.single) dT_2010_2019 = ant_temps.sel(time=slice('2010','2019')).assign_coords(time=np.arange(10)).polyfit(dim='time',deg=1).polyfit_coefficients.sel(degree=1).astype(np.single) ## load scaling coefficients and compute level/rate (very memory intensive) # 500 million member subsamples to start = 2GB on disk per quantity sub_size = int(5e8) subsamples={} ## HadCRUT5 subsample subsamples['HadCRUT5_obs_mem'] = np.random.choice(5000*102*18*200,sub_size) ## HadCRUT4 subsample subsamples['HadCRUT4_obs_mem'] = np.random.choice(5000*102*18*100,sub_size) ## CW subsample subsamples['CW_obs_mem'] = np.random.choice(5000*102*18*99,sub_size) for select_dataset in ['HadCRUT5','HadCRUT4','NOAA','GISTEMP','CW','BERKELEY']: ant_coefs = (xr.open_dataarray('../../aux/output-data/global-warming-index/ant_coefs_forc_'+select_dataset+'.nc',chunks={'forcing_mem':100}).astype(np.single)+\ xr.open_dataarray('../../aux/output-data/global-warming-index/ant_coefs_IV_'+select_dataset+'.nc',chunks={'forcing_mem':100}).astype(np.single)) obsv_unc_source = ant_coefs.dims[0] np.save('../../aux/output-data/global-warming-index/results/T_2010-2019_'+select_dataset+'.npy',(T_2010_2019*ant_coefs).values.flatten()[subsamples[obsv_unc_source]]) np.save('../../aux/output-data/global-warming-index/results/dT_2010-2019_'+select_dataset+'.npy',(dT_2010_2019*ant_coefs).values.flatten()[subsamples[obsv_unc_source]]) ###Output _____no_output_____ ###Markdown Binning the anthropogenic warming index to constrain the FULL ensembleIn this final step, we bin the 2d distribution of 2010-19 level/rate of anthropogenic warming. These counts within these bins are our estimate of the likelihood of a particular region of the space. We use this likelihood estimate to generate probabilities of each member of the FULL perturbed parameter ensemble (based on their location in the space).We use a binning procedure, rather than (for example) a kernel-density estimate as the size of the AWI sample is such that other methods are far too inefficient. 
###Code FULL_metrics = pd.read_hdf('../../aux/parameter-sets/perturbed-parameters/FULL_ANT.h5') FULL_level = FULL_metrics.T_2010_2019 - FULL_metrics.T_1850_1900 FULL_rate = FULL_metrics.dT_2010_2019 ALT_metrics = pd.read_hdf('../../aux/parameter-sets/perturbed-parameters/ALT_ANT.h5') ALT_level = ALT_metrics.T_2010_2019 - ALT_metrics.T_1850_1900 ALT_rate = ALT_metrics.dT_2010_2019 ## choose resolution of bins here (delibarately large bins) level_bins = np.arange(-0.2,1.8,0.01) rate_bins = np.arange(-0.05,0.1,0.001) for select_dataset in ['HadCRUT5','HadCRUT4','NOAA','GISTEMP','CW','BERKELEY']: ## import the pre-computed AWI level & rate warming_level = np.load('../../aux/output-data/global-warming-index/results/T_2010-2019_'+select_dataset+'.npy') warming_rate = np.load('../../aux/output-data/global-warming-index/results/dT_2010-2019_'+select_dataset+'.npy') ## bin the data in 2d AWI_binned = sp.stats.binned_statistic_2d(warming_level,warming_rate,None,'count',bins=[level_bins,rate_bins]) AWI_likelihood = AWI_binned.statistic / AWI_binned.statistic.max() ## create a dataseries to store the member probabilities FULL_probabilities = pd.Series(index=FULL_rate.index,dtype=float) ALT_probabilities = pd.Series(index=ALT_rate.index,dtype=float) ## set values outside the AWI max/min values to have 0 probability FULL_probabilities.loc[(FULL_level>warming_level.max())|(FULL_level<-warming_level.min())|(FULL_rate>warming_rate.max())|(FULL_rate<warming_rate.min())] = 0 ALT_probabilities.loc[(ALT_level>warming_level.max())|(ALT_level<-warming_level.min())|(ALT_rate>warming_rate.max())|(ALT_rate<warming_rate.min())] = 0 FULL_binned = sp.stats.binned_statistic_2d(FULL_level.loc[FULL_probabilities.isna()],FULL_rate.loc[FULL_probabilities.isna()],None,'count',bins=[level_bins,rate_bins],expand_binnumbers=True) ALT_binned = sp.stats.binned_statistic_2d(ALT_level.loc[ALT_probabilities.isna()],ALT_rate.loc[ALT_probabilities.isna()],None,'count',bins=[level_bins,rate_bins],expand_binnumbers=True) ## have to reduce binnumbers by 2 as scipy adds one boundary bin, and bins start from 1 FULL_probabilities.loc[FULL_probabilities.isna()] = AWI_likelihood[FULL_binned.binnumber[0]-2,FULL_binned.binnumber[1]-2] ALT_probabilities.loc[ALT_probabilities.isna()] = AWI_likelihood[ALT_binned.binnumber[0]-2,ALT_binned.binnumber[1]-2] ## save the probabilities FULL_probabilities.to_hdf(r'../../aux/parameter-sets/perturbed-parameters/FULL_selection_probability-'+select_dataset+'.h5', key='stage', mode='w') ALT_probabilities.to_hdf(r'../../aux/parameter-sets/perturbed-parameters/ALT_selection_probability-'+select_dataset+'.h5', key='stage', mode='w') ###Output _____no_output_____ ###Markdown Small note on the constrained ensembleHere we have constrained against anthropogenic induced warming. However, if any natural forcings added to FaIR are biased high or low (we know in general they are biased high relative to what optimal fingerprinting suggests, since the natural coefficients within the global warming index calculation here tend to be $<<1$), the resulting temperature output will also be biased high or low. This happens in the `SSP-simulations` notebook: the full-forcing 2010-2019 temperature change over a 1850-1900 baseline is roughly 0.07 K higher than the anthropogenic-forcing only temperature change. 
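###Markdown Before moving on, a sketch (not part of the original workflow) of how the stored selection probabilities could be consumed downstream: treat each member's probability as an acceptance probability and rejection-sample a constrained sub-ensemble. File and column handling follows the cells above; the choice of observational dataset here is arbitrary. ###Code
import numpy as np
import pandas as pd

# per-member acceptance probabilities written out above
probs = pd.read_hdf('../../aux/parameter-sets/perturbed-parameters/FULL_selection_probability-HadCRUT5.h5')

# accept each FULL member with probability equal to its AWI-based likelihood
rng = np.random.default_rng(0)
accepted = probs.index[rng.uniform(size=probs.size) < probs.values]
print(len(accepted), 'of', probs.size, 'members accepted')
###Output _____no_output_____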
little tool to help visualise the 2d distribution (takes a little while)Each contour level represents an increase in likelihood of 0.1*quite memory intensive* ###Code select_dataset = 'CW' warming_level = np.load('../../aux/output-data/global-warming-index/results/T_2010-2019_'+select_dataset+'.npy') warming_rate = np.load('../../aux/output-data/global-warming-index/results/dT_2010-2019_'+select_dataset+'.npy') level_bins = np.arange(-0.2,1.8,0.01) rate_bins = np.arange(-0.05,0.1,0.001) AWI_binned = sp.stats.binned_statistic_2d(warming_level,warming_rate,None,'count',bins=[level_bins,rate_bins]) fig = plt.figure(figsize=(10,10)) gs = fig.add_gridspec(5,5,hspace=0,wspace=0) joint_ax = fig.add_subplot(gs[1:,:4]) margx_ax = fig.add_subplot(gs[0,:4]) margy_ax = fig.add_subplot(gs[1:,4]) joint_ax.contourf((AWI_binned.x_edge[:-1]+AWI_binned.x_edge[1:])/2, (AWI_binned.y_edge[:-1]+AWI_binned.y_edge[1:])/2, AWI_binned.statistic.T/AWI_binned.statistic.max(), levels=np.linspace(0,1,11), cmap='binary') margx_ax.hist(warming_level,bins=level_bins,color='k',histtype='step') margy_ax.hist(warming_rate,bins=rate_bins,color='k',histtype='step',orientation='horizontal') [a.set_xlim(0,1.5) for a in [joint_ax,margx_ax]] [a.set_ylim(0,0.08) for a in [joint_ax,margy_ax]] joint_ax.set_xlabel('2010-2019 level of anthropogenic warming / K') joint_ax.set_ylabel('2010-2019 rate of anthropogenic warming / K year$^{-1}$') margx_ax.set_axis_off() margy_ax.set_axis_off() ###Output _____no_output_____ ###Markdown Appendix I. Natural warming indexHere we calculate the estimated natural contributions to the present level of warming, for comparison with the all-forcing CONSTRAINED ensemble. ###Code ## get temperature timeseries nat_temps = xr.open_dataarray('../../aux/output-data/global-warming-index/nat_temperature.nc',chunks={'forcing_mem':100}) nat_T_2010_2019 = (nat_temps.sel(time=slice('2010','2019')).mean('time')-nat_temps.sel(time=slice('1850','1900')).mean('time')).astype(np.single) ## load scaling coefficients and compute level/rate (very memory intensive) # 500 million member subsamples to start = 2GB on disk per quantity sub_size = int(5e8) subsamples={} ## HadCRUT5 subsample subsamples['HadCRUT5_obs_mem'] = np.random.choice(5000*102*18*200,sub_size) ## HadCRUT4 subsample subsamples['HadCRUT4_obs_mem'] = np.random.choice(5000*102*18*100,sub_size) ## CW subsample subsamples['CW_obs_mem'] = np.random.choice(5000*102*18*99,sub_size) for select_dataset in ['HadCRUT5','HadCRUT4','NOAA','GISTEMP','CW','BERKELEY']: nat_coefs = (xr.open_dataarray('../../aux/output-data/global-warming-index/nat_coefs_forc_'+select_dataset+'.nc',chunks={'forcing_mem':100}).astype(np.single)+\ xr.open_dataarray('../../aux/output-data/global-warming-index/nat_coefs_IV_'+select_dataset+'.nc',chunks={'forcing_mem':100}).astype(np.single)) obsv_unc_source = nat_coefs.dims[0] np.save('../../aux/output-data/global-warming-index/results/T_2010-2019_'+select_dataset+'_nat.npy',(nat_T_2010_2019*nat_coefs).values.flatten()[subsamples[obsv_unc_source]]) ###Output _____no_output_____ ###Markdown mean natural contribution to the present-day warming level ###Code np.mean([np.load(x).mean() for x in glob.glob('../../aux/output-data/global-warming-index/results/T*nat.npy')]) ###Output _____no_output_____
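###Markdown For comparison with the natural-forcing mean above, a quick summary sketch of the anthropogenic warming-index files written earlier (the percentile choices here are ours): ###Code
import glob
import os
import numpy as np

for fname in sorted(glob.glob('../../aux/output-data/global-warming-index/results/T_2010-2019_*.npy')):
    if fname.endswith('_nat.npy'):
        continue  # skip the natural-forcing files summarised above
    awi = np.load(fname)
    dataset = os.path.basename(fname).replace('T_2010-2019_', '').replace('.npy', '')
    print(dataset, 'mean %.3f K, 5-95%% range %.3f-%.3f K' % (awi.mean(), *np.percentile(awi, [5, 95])))
###Output _____no_output_____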
examples/Example Low D Spike Based.ipynb
###Markdown 1D spiking neuron ###Code patterns=array([10]) pre=neurons.poisson_pattern(patterns) pre.time_between_patterns=2 pre.save_spikes_begin=0.0 pre.save_spikes_end=10.0 sim=simulation(10,dt=0.0001) %time run_sim(sim,[pre],[]) pre.plot_spikes() ###Output ('Time Elapsed...', '0.01 s') CPU times: user 8.84 ms, sys: 9.26 ms, total: 18.1 ms Wall time: 18 ms ###Markdown spike counts per second ###Code spike_counts(arange(0,10+1),pre.saved_spikes) ###Output _____no_output_____ ###Markdown 1D Non-constant rates ###Code pre=neurons.poisson_pattern([5,50], sequential=True, ) pre.time_between_patterns=2 pre.save_spikes_begin=0.0 pre.save_spikes_end=10.0 sim=simulation(10,dt=0.0001) %time run_sim(sim,[pre],[]) pre.plot_spikes() title('Oops! This is two neurons!') print(spike_counts(arange(0,10+1),pre.saved_spikes)) pre=neurons.poisson_pattern([5,50], shape=(2,1), sequential=True, ) pre.time_between_patterns=2 pre.save_spikes_begin=0.0 pre.save_spikes_end=10.0 sim=simulation(10,dt=0.0001) %time run_sim(sim,[pre],[]) pre.plot_spikes() ###Output ('Time Elapsed...', '0.01 s') CPU times: user 8.97 ms, sys: 8.77 ms, total: 17.7 ms Wall time: 17.6 ms ###Markdown 1D SRM0 neuron ###Code pre=neurons.poisson_pattern([10]) post=neurons.srm0(1) c=connection(pre,post,[1,1]) sim=simulation(10,dt=0.0001) sim.monitor(post,['u',],0.001) run_sim(sim,[pre,post],[c]) sim.monitors['u'].array() m=sim.monitors['u'] m.plot() ###Output _____no_output_____ ###Markdown Checking the effect connection strength ###Code pre=neurons.poisson_pattern([10]) post=neurons.srm0(1) c=connection(pre,post,[10,10]) sim=simulation(10,dt=0.0001) sim.monitor(post,['u',],0.001) run_sim(sim,[pre,post],[c]) m=sim.monitors['u'] m.plot() mean(m.array()) ###Output _____no_output_____ ###Markdown Running many different connection strengths ###Code w_arr=linspace(1,100,100) print(w_arr) mean_arr=[] rate=10 for w in w_arr: pre=neurons.poisson_pattern([rate]) post=neurons.srm0(1) c=connection(pre,post,[w,w]) sim=simulation(10,dt=0.0001) sim.monitor(post,['u',],0.001) run_sim(sim,[pre,post],[c],print_time=False) u=sim.monitors['u'].array() mean_arr.append(mean(u)) plot(w_arr,mean_arr,'o') xlabel('Connection Strength') ylabel('Mean $u$') title('Input Rate %.1f' % rate) mean_arr=[] rate=30 for w in w_arr: pre=neurons.poisson_pattern([rate]) post=neurons.srm0(1) c=connection(pre,post,[w,w]) sim=simulation(10,dt=0.0001) sim.monitor(post,['u',],0.001) run_sim(sim,[pre,post],[c],print_time=False) u=sim.monitors['u'].array() mean_arr.append(mean(u)) plot(w_arr,mean_arr,'o') xlabel('Connection Strength') ylabel('Mean $u$') title('Input Rate %.1f' % rate) ###Output _____no_output_____ ###Markdown Can you figure out an equation for the mean $u$ for a given connection strength and input rate? 
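###Markdown A hedged sketch of an answer (assuming the SRM0 membrane potential is a linear sum of PSPs with an approximately exponential kernel of time constant $\tau$, and ignoring any post-spike reset): a Poisson input of rate $r$ arriving through a connection of strength $w$ gives, by Campbell's theorem for filtered Poisson processes, $$\langle u\rangle \approx w\,r\int_0^\infty \epsilon(s)\,ds \approx w\,r\,\tau,$$ i.e. the mean potential grows linearly in both the connection strength and the input rate, consistent with the linear trends in the plots above.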
2D Spiking Neuron ###Code pre=neurons.poisson_pattern([[10,20],[50,10]], sequential=True, verbose=True ) pre.time_between_patterns=2 pre.save_spikes_begin=0.0 pre.save_spikes_end=10.0 post=neurons.srm0(1) c=connection(pre,post,[1,1]) sim=simulation(10,dt=0.0001) sim.monitor(post,['u',],0.001) run_sim(sim,[pre,post],[c]) figure() pre.plot_spikes() figure() m=sim.monitors['u'] m.plot() ###Output sequential New pattern 0 10.0 20.0 Time to next pattern: 2.000000 sequential New pattern 1 50.0 10.0 Time to next pattern: 4.000000 sequential New pattern 0 10.0 20.0 Time to next pattern: 6.000000 sequential New pattern 1 50.0 10.0 Time to next pattern: 8.000000 sequential New pattern 0 10.0 20.0 Time to next pattern: 10.000000 sequential New pattern 1 50.0 10.0 Time to next pattern: 12.000000 ('Time Elapsed...', '0.09 s')
SMP200.ipynb
###Markdown Simulation Share Match Plancalcul du gain genere par le Share Match Plan pendant 2 ans ###Code # Inputs montantParMois = 200 nbMois = 24 import matplotlib.pyplot as plt import numpy as np from random import randrange ##################### # Functions # ##################### def getY(x): y = [] entryValue = 37 randRg = 15 xOffset1 = 8 xOffset2 = 16 xOffset3 = 24 yMultip1 = 0.8 yMultip2 = 0.5 yMultip3 = 0.4 yMultip4 = 0.5 for i in range(len(x)): if(x[i] < xOffset1): y.append(entryValue*yMultip1*(1+randrange(-randRg, randRg)*0.01)) elif(xOffset1 <= x[i] and x[i] < xOffset2): y.append(entryValue*yMultip2*(1+randrange(-randRg, randRg)*0.01)) elif(xOffset2 <= x[i] and x[i] < xOffset3): y.append(entryValue*yMultip3*(1+randrange(-randRg, randRg)*0.01)) else: y.append(entryValue*yMultip4*(1+randrange(-randRg, randRg)*0.01)) return y def calculateStock(y, entrySum): stock = 0 nbOfActions = 0 for elem in y: prime = entrySum/(elem*2) nbOfActions += entrySum/elem + prime stock = nbOfActions*y[-1] return stock ##################### # prepare vectors # ##################### x = np.linspace(1, nbMois, nbMois) y = getY(x) stock = calculateStock(y, montantParMois) ###Output _____no_output_____ ###Markdown Cours d'actions AMSLa fonction d'evolution du cours d'action d'amadeus pendant les 2 prochaines annees par mois : ###Code ##################### # Plot # ##################### plt.plot(x, y) plt.show() ###Output _____no_output_____ ###Markdown Gain generer : ###Code print("Montant investit : "+str(montantParMois*12)) print("Gains net : "+str(stock-montantParMois*12)) print("Total apres "+str(nbMois/12)+" ans :"+str(stock)) ###Output Montant investit : 2400 Gains net : 4179.772279902058 Total apres 2.0 ans :6579.772279902058 ###Markdown Simulation Share Match Plancalcul du gain genere par le Share Match Plan pendant 2 ans ###Code # Inputs montantParMois = 200 nbMois = 24 import matplotlib.pyplot as plt import numpy as np from random import randrange ##################### # Functions # ##################### def getY(x): y = [] entryValue = 37 randRg = 15 xOffset1 = 8 xOffset2 = 16 xOffset3 = 24 yMultip1 = 0.8 yMultip2 = 0.5 yMultip3 = 0.4 yMultip4 = 0.5 for i in range(len(x)): if(x[i] < xOffset1): y.append(entryValue*yMultip1*(1+randrange(-randRg, randRg)*0.01)) elif(xOffset1 <= x[i] and x[i] < xOffset2): y.append(entryValue*yMultip2*(1+randrange(-randRg, randRg)*0.01)) elif(xOffset2 <= x[i] and x[i] < xOffset3): y.append(entryValue*yMultip3*(1+randrange(-randRg, randRg)*0.01)) else: y.append(entryValue*yMultip4*(1+randrange(-randRg, randRg)*0.01)) return y def calculateStock(y, entrySum): stock = 0 nbOfActions = 0 for elem in y: prime = entrySum/(elem*2) nbOfActions += entrySum/elem + prime stock = nbOfActions*y[-1] return stock ##################### # prepare vectors # ##################### x = np.linspace(1, nbMois, nbMois) y = getY(x) stock = calculateStock(y, montantParMois) ###Output _____no_output_____ ###Markdown Cours d'actions AMSLa fonction d'evolution du cours d'action d'amadeus pendant les 2 prochaines annees par mois : ###Code ##################### # Plot # ##################### plt.plot(x, y) plt.show() ###Output _____no_output_____ ###Markdown Gain genere : ###Code print("Montant investit : "+str(montantParMois*24)) print("Gains net : "+str(stock-montantParMois*24)) print("Total apres "+str(nbMois/12)+" ans :"+str(stock)) ###Output Montant investit : 4800 Gains net : 2026.0120035934833 Total apres 2.0 ans :6826.012003593483
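###Markdown To make the accumulation logic of `calculateStock` explicit (a sketch using the same assumptions as the code above): with a monthly contribution $m$ and a share price $p_t$ in month $t$, each month buys $m/p_t$ shares plus an employer match of half as many, i.e. $1.5\,m/p_t$ shares in total, so after $N$ months the holding is worth $$\mathrm{stock}=p_N\sum_{t=1}^{N}\frac{1.5\,m}{p_t}.$$ The net gain is then this value minus the total amount actually contributed (number of months times $m$).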
notebooks/AI in Technology - AWS DeepComposer Samples/reinvent-labs/lab-2/GAN.ipynb
###Markdown Introduction This tutorial is a brief introduction to music generation using **Generative Adversarial Networks** (**GAN**s). The goal of this tutorial is to train a machine learning model using a dataset of Bach compositions so that the model learns to add accompaniments to a single track input melody. In other words, if the user provides a single piano track of a song such as "twinkle twinkle little star", the GAN model would add three other piano tracks to make the music sound more Bach-inspired.The proposed algorithm consists of two competing networks: a generator and a critic (discriminator). A generator is a deep neural network that learns to create new synthetic data that resembles the distribution of the dataset on which it was trained. A critic is another deep neural network that is trained to differentiate between real and synthetic data. The generator and the critic are trained in alternating cycles such that the generator learns to produce more and more realistic data (Bach-like music in this use case) while the critic iteratively gets better at learning to differentiate real data (Bach music) from the synthetic ones.As a result, the quality of music produced by the generator gets more and more realistic with time. ![High level WGAN-GP architecture](images/dgan.png "WGAN-GP architecture") DependenciesFirst, let's import all of the python packages we will use throughout the tutorial. ###Code # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. # Permission is hereby granted, free of charge, to any person obtaining a copy of # this software and associated documentation files (the "Software"), to deal in # the Software without restriction, including without limitation the rights to # use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of # the Software, and to permit persons to whom the Software is furnished to do so. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS # FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR # COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER # IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN # CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # Create the environment import subprocess print("Please wait, while the required packages are being installed...") subprocess.call(['./requirements.sh'], shell=True) print("All the required packages are installed successfully...") # IMPORTS import os import numpy as np from PIL import Image import logging import pypianoroll import scipy.stats import pickle import music21 from IPython import display import matplotlib.pyplot as plt # Configure Tensorflow import tensorflow as tf print(tf.__version__) tf.logging.set_verbosity(tf.logging.ERROR) tf.enable_eager_execution() # Use this command to make a subset of GPUS visible to the jupyter notebook. os.environ['CUDA_VISIBLE_DEVICES'] = '0' os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # Utils library for plotting, loading and saving midi among other functions from utils import display_utils, metrics_utils, path_utils, inference_utils, midi_utils LOGGER = logging.getLogger("gan.train") %matplotlib inline ###Output _____no_output_____ ###Markdown Configuration Here we configure paths to retrieve our dataset and save our experiments. 
###Code root_dir = './Experiments' # Directory to save checkpoints model_dir = os.path.join(root_dir,'2Bar') # JSP: 229, Bach: 19199 # Directory to save pianorolls during training train_dir = os.path.join(model_dir, 'train') # Directory to save checkpoint generated during training check_dir = os.path.join(model_dir, 'preload') # Directory to save midi during training sample_dir = os.path.join(model_dir, 'sample') # Directory to save samples generated during inference eval_dir = os.path.join(model_dir, 'eval') os.makedirs(train_dir, exist_ok=True) os.makedirs(eval_dir, exist_ok=True) os.makedirs(sample_dir, exist_ok=True) ###Output _____no_output_____ ###Markdown Data Preparation Dataset summaryIn this tutorial, we use the [`JSB-Chorales-dataset`](http://www-etud.iro.umontreal.ca/~boulanni/icml2012), comprising 229 chorale snippets. A chorale is a hymn that is usually sung with a single voice playing a simple melody and three lower voices providing harmony. In this dataset, these voices are represented by four piano tracks.Let's listen to a song from this dataset. ###Code display_utils.playmidi('./original_midi/MIDI-0.mid') ###Output _____no_output_____ ###Markdown Data format - piano roll For the purpose of this tutorial, we represent music from the JSB-Chorales dataset in the piano roll format.**Piano roll** is a discrete representation of music which is intelligible by many machine learning algorithms. Piano rolls can be viewed as a two-dimensional grid with "Time" on the horizontal axis and "Pitch" on the vertical axis. A one or zero in any particular cell in this grid indicates if a note was played or not at that time for that pitch.Let us look at a few piano rolls in our dataset. In this example, a single piano roll track has 32 discrete time steps and 128 pitches. We see four piano rolls here, each one representing a separate piano track in the song. You might notice this representation looks similar to an image. While the sequence of notes is often the natural way that people view music, many modern machine learning models instead treat music as images and leverage existing techniques within the computer vision domain. You will see such techniques used in our architecture later in this tutorial. **Why 32 time steps?**For the purpose of this tutorial, we sample two non-empty bars (https://en.wikipedia.org/wiki/Bar_(music)) from each song in the JSB-Chorales dataset. A **bar** (or **measure**) is a unit of composition and contains four beats for songs in our particular dataset (our songs are all in 4/4 time) :We’ve found that using a resolution of four time steps per beat captures enough of the musical detail in this dataset.This yields...$$ \frac{4\;timesteps}{1\;beat} * \frac{4\;beats}{1\;bar} * \frac{2\;bars}{1} = 32\;timesteps $$Let us now load our dataset as a numpy array. Our dataset comprises 229 samples of 4 tracks (all tracks are piano). Each sample is a 32 time-step snippet of a song, so our dataset has a shape of...(num_samples, time_steps, pitch_range, tracks) = (229, 32, 128, 4). ###Code training_data = np.load('./dataset/train.npy') print(training_data.shape) ###Output _____no_output_____ ###Markdown Let's see a sample of the data we'll feed into our model. The four graphs represent the four tracks. ###Code display_utils.show_pianoroll(training_data) ###Output _____no_output_____ ###Markdown Load data We now create a Tensorflow dataset object from our numpy array to feed into our model. The dataset object helps us feed batches of data into our model. 
A batch is a subset of the data that is passed through the deep learning network before the weights are updated. Batching data is necessary in most training scenarios as our training environment might not be able to load the entire dataset into memory at once. ###Code #Number of input data samples in a batch BATCH_SIZE = 64 #Shuffle buffer size for shuffling data SHUFFLE_BUFFER_SIZE = 1000 #Preloads PREFETCH_SIZE batches so that there is no idle time between batches PREFETCH_SIZE = 4 def prepare_dataset(filename): """Load the samples used for training.""" data = np.load(filename) data = np.asarray(data, dtype=np.float32) # {-1, 1} print('data shape = {}'.format(data.shape)) dataset = tf.data.Dataset.from_tensor_slices(data) dataset = dataset.shuffle(SHUFFLE_BUFFER_SIZE).repeat() dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) dataset = dataset.prefetch(PREFETCH_SIZE) return dataset dataset = prepare_dataset('./dataset/train.npy') ###Output _____no_output_____ ###Markdown Model architectureIn this section, we will walk through the architecture of the proposed GAN.The model consists of two networks, a generator and a critic. These two networks work in a tight loop as following:* Generator: 1. The generator takes in a batch of single-track piano rolls (melody) as the input and generates a batch of multi-track piano rolls as the output by adding accompaniments to each of the input music tracks. 2. The critic then takes these generated music tracks and predicts how far it deviates from the real data present in your training dataset. 3. This feedback from the critic is used by the generator to update its weights.* Critic: As the generator gets better at creating better music accompaniments using the feedback from the critic, the critic needs to be retrained as well. 1. Train the critic with the music tracks just generated by the generator as fake inputs and an equivalent number of songs from the original dataset as the real input. * Alternate between training these two networks until the model converges and produces realistic music, beginning with the critic on the first iteration.We use a special type of GAN called the **Wasserstein GAN with Gradient Penalty** (or **WGAN-GP**) to generate music. While the underlying architecture of a WGAN-GP is very similar to vanilla variants of GAN, WGAN-GPs help overcome some of the commonly seen defects in GANs such as the vanishing gradient problem and mode collapse (see appendix for more details).Note our "critic" network is more generally called a "discriminator" network in the more general context of vanilla GANs. Generator The generator is adapted from the U-Net architecture (a popular CNN that is used extensively in the computer vision domain), consisting of an “encoder” that maps the single track music data (represented as piano roll images) to a relatively lower dimensional “latent space“ and a ”decoder“ that maps the latent space back to multi-track music data.Here are the inputs provided to the generator:**Single-track piano roll input**: A single melody track of size (32, 128, 1) => (TimeStep, NumPitches, NumTracks) is provided as the input to the generator. 
**Latent noise vector**: A latent noise vector z of dimension (2, 8, 512) is also passed in as input and this is responsible for ensuring that there is a distinctive flavor to each output generated by the generator, even when the same input is provided.Notice from the figure below that the encoding layers of the generator on the left side and decoder layer on on the right side are connected to create a U-shape, thereby giving the name U-Net to this architecture. In this implementation, we build the generator following a simple four-level Unet architecture by combining `_conv2d`s and `_deconv2d`, where `_conv2d` compose the contracting path and `_deconv2d` forms the expansive path. ###Code def _conv2d(layer_input, filters, f_size=4, bn=True): """Generator Basic Downsampling Block""" d = tf.keras.layers.Conv2D(filters, kernel_size=f_size, strides=2, padding='same')(layer_input) d = tf.keras.layers.LeakyReLU(alpha=0.2)(d) if bn: d = tf.keras.layers.BatchNormalization(momentum=0.8)(d) return d def _deconv2d(layer_input, pre_input, filters, f_size=4, dropout_rate=0): """Generator Basic Upsampling Block""" u = tf.keras.layers.UpSampling2D(size=2)(layer_input) u = tf.keras.layers.Conv2D(filters, kernel_size=f_size, strides=1, padding='same')(u) u = tf.keras.layers.BatchNormalization(momentum=0.8)(u) u = tf.keras.layers.ReLU()(u) if dropout_rate: u = tf.keras.layers.Dropout(dropout_rate)(u) u = tf.keras.layers.Concatenate()([u, pre_input]) return u def build_generator(condition_input_shape=(32, 128, 1), filters=64, instruments=4, latent_shape=(2, 8, 512)): """Buld Generator""" c_input = tf.keras.layers.Input(shape=condition_input_shape) z_input = tf.keras.layers.Input(shape=latent_shape) d1 = _conv2d(c_input, filters, bn=False) d2 = _conv2d(d1, filters * 2) d3 = _conv2d(d2, filters * 4) d4 = _conv2d(d3, filters * 8) d4 = tf.keras.layers.Concatenate(axis=-1)([d4, z_input]) u4 = _deconv2d(d4, d3, filters * 4) u5 = _deconv2d(u4, d2, filters * 2) u6 = _deconv2d(u5, d1, filters) u7 = tf.keras.layers.UpSampling2D(size=2)(u6) output = tf.keras.layers.Conv2D(instruments, kernel_size=4, strides=1, padding='same', activation='tanh')(u7) # 32, 128, 4 generator = tf.keras.models.Model([c_input, z_input], output, name='Generator') return generator ###Output _____no_output_____ ###Markdown Let us now dive into each layer of the generator to see the inputs/outputs at each layer. ###Code # Models generator = build_generator() generator.summary() ###Output _____no_output_____ ###Markdown Critic (Discriminator) The goal of the critic is to provide feedback to the generator about how realistic the generated piano rolls are, so that the generator can learn to produce more realistic data. The critic provides this feedback by outputting a scalar that represents how “real” or “fake” a piano roll is.Since the critic tries to classify data as “real” or “fake”, it is not very different from commonly used binary classifiers. We use a simple architecture for the critic, composed of four convolutional layers and a dense layer at the end. 
###Code def _build_critic_layer(layer_input, filters, f_size=4): """ This layer decreases the spatial resolution by 2: input: [batch_size, in_channels, H, W] output: [batch_size, out_channels, H/2, W/2] """ d = tf.keras.layers.Conv2D(filters, kernel_size=f_size, strides=2, padding='same')(layer_input) # Critic does not use batch-norm d = tf.keras.layers.LeakyReLU(alpha=0.2)(d) return d def build_critic(pianoroll_shape=(32, 128, 4), filters=64): """WGAN critic.""" condition_input_shape = (32,128,1) groundtruth_pianoroll = tf.keras.layers.Input(shape=pianoroll_shape) condition_input = tf.keras.layers.Input(shape=condition_input_shape) combined_imgs = tf.keras.layers.Concatenate(axis=-1)([groundtruth_pianoroll, condition_input]) d1 = _build_critic_layer(combined_imgs, filters) d2 = _build_critic_layer(d1, filters * 2) d3 = _build_critic_layer(d2, filters * 4) d4 = _build_critic_layer(d3, filters * 8) x = tf.keras.layers.Flatten()(d4) logit = tf.keras.layers.Dense(1)(x) critic = tf.keras.models.Model([groundtruth_pianoroll,condition_input], logit, name='Critic') return critic # Create the Discriminator critic = build_critic() critic.summary() # View discriminator architecture. ###Output _____no_output_____ ###Markdown TrainingWe train our models by searching for model parameters which optimize an objective function. For our WGAN-GP, we have special loss functions that we minimize as we alternate between training our generator and critic networks:*Generator Loss:** We use the Wasserstein (Generator) loss function which is negative of the Critic Loss function. The generator is trained to bring the generated pianoroll as close to the real pianoroll as possible. * $\frac{1}{m} \sum_{i=1}^{m} -D_w(G(z^{i}|c^{i})|c^{i})$*Critic Loss:** We begin with the Wasserstein (Critic) loss function designed to maximize the distance between the real piano roll distribution and generated (fake) piano roll distribution. * $\frac{1}{m} \sum_{i=1}^{m} [D_w(G(z^{i}|c^{i})|c^{i}) - D_w(x^{i}|c^{i})]$* We add a gradient penalty loss function term designed to control how the gradient of the critic with respect to its input behaves. This makes optimization of the generator easier. * $\frac{1}{m} \sum_{i=1}^{m}(\lVert \nabla_{\hat{x}^i}D_w(\hat{x}^i|c^{i}) \rVert_2 - 1)^2 $ ###Code # Define the different loss functions def generator_loss(critic_fake_output): """ Wasserstein GAN loss (Generator) -D(G(z|c)) """ return -tf.reduce_mean(critic_fake_output) def wasserstein_loss(critic_real_output, critic_fake_output): """ Wasserstein GAN loss (Critic) D(G(z|c)) - D(x|c) """ return tf.reduce_mean(critic_fake_output) - tf.reduce_mean( critic_real_output) def compute_gradient_penalty(critic, x, fake_x): c = tf.expand_dims(x[..., 0], -1) batch_size = x.get_shape().as_list()[0] eps_x = tf.random.uniform( [batch_size] + [1] * (len(x.get_shape()) - 1)) # B, 1, 1, 1, 1 inter = eps_x * x + (1.0 - eps_x) * fake_x with tf.GradientTape() as g: g.watch(inter) disc_inter_output = critic((inter,c), training=True) grads = g.gradient(disc_inter_output, inter) slopes = tf.sqrt(1e-8 + tf.reduce_sum( tf.square(grads), reduction_indices=tf.range(1, grads.get_shape().ndims))) gradient_penalty = tf.reduce_mean(tf.square(slopes - 1.0)) return gradient_penalty ###Output _____no_output_____ ###Markdown With our loss functions defined, we associate them with Tensorflow optimizers to define how our model will search for a good set of model parameters. We use the *Adam* algorithm, a commonly used general-purpose optimizer. 
We also set up checkpoints to save our progress as we train. ###Code # Setup Adam optimizers for both G and D generator_optimizer = tf.keras.optimizers.Adam(1e-3, beta_1=0.5, beta_2=0.9) critic_optimizer = tf.keras.optimizers.Adam(1e-3, beta_1=0.5, beta_2=0.9) # We define our checkpoint directory and where to save trained checkpoints ckpt = tf.train.Checkpoint(generator=generator, generator_optimizer=generator_optimizer, critic=critic, critic_optimizer=critic_optimizer) ckpt_manager = tf.train.CheckpointManager(ckpt, check_dir, max_to_keep=5) ###Output _____no_output_____ ###Markdown Now we define the `generator_train_step` and `critic_train_step` functions, each of which performs a single forward pass on a batch and returns the corresponding loss. ###Code @tf.function def generator_train_step(x, condition_track_idx=0): ############################################ #(1) Update G network: maximize D(G(z|c)) ############################################ # Extract condition track to make real batches pianoroll c = tf.expand_dims(x[..., condition_track_idx], -1) # Generate batch of latent vectors z = tf.random.truncated_normal([BATCH_SIZE, 2, 8, 512]) with tf.GradientTape() as tape: fake_x = generator((c, z), training=True) fake_output = critic((fake_x,c), training=False) # Calculate Generator's loss based on this generated output gen_loss = generator_loss(fake_output) # Calculate gradients for Generator gradients_of_generator = tape.gradient(gen_loss, generator.trainable_variables) # Update Generator generator_optimizer.apply_gradients( zip(gradients_of_generator, generator.trainable_variables)) return gen_loss @tf.function def critic_train_step(x, condition_track_idx=0): ############################################################################ #(2) Update D network: maximize (D(x|c)) + (1 - D(G(z|c))|c) + GradientPenality() ############################################################################ # Extract condition track to make real batches pianoroll c = tf.expand_dims(x[..., condition_track_idx], -1) # Generate batch of latent vectors z = tf.random.truncated_normal([BATCH_SIZE, 2, 8, 512]) # Generated fake pianoroll fake_x = generator((c, z), training=False) # Update critic parameters with tf.GradientTape() as tape: real_output = critic((x,c), training=True) fake_output = critic((fake_x,c), training=True) critic_loss = wasserstein_loss(real_output, fake_output) # Caculate the gradients from the real and fake batches grads_of_critic = tape.gradient(critic_loss, critic.trainable_variables) with tf.GradientTape() as tape: gp_loss = compute_gradient_penalty(critic, x, fake_x) gp_loss *= 10.0 # Calculate the gradients penalty from the real and fake batches grads_gp = tape.gradient(gp_loss, critic.trainable_variables) gradients_of_critic = [g + ggp for g, ggp in zip(grads_of_critic, grads_gp) if ggp is not None] # Update Critic critic_optimizer.apply_gradients( zip(gradients_of_critic, critic.trainable_variables)) return critic_loss + gp_loss ###Output _____no_output_____ ###Markdown Before we begin training, let's define some training configuration parameters and prepare to monitor important quantities. Here we log the losses and metrics which we can use to determine when to stop training. Consider coming back here to tweak these parameters and explore how your model responds. 
###Code # We use load_melody_samples() to load 10 input data samples from our dataset into sample_x # and 10 random noise latent vectors into sample_z sample_x, sample_z = inference_utils.load_melody_samples(n_sample=10) # Number of iterations to train for iterations = 1000 # Update critic n times per generator update n_dis_updates_per_gen_update = 5 # Determine input track in sample_x that we condition on condition_track_idx = 0 sample_c = tf.expand_dims(sample_x[..., condition_track_idx], -1) ###Output _____no_output_____ ###Markdown Let us now train our model! ###Code # Clear out any old metrics we've collected metrics_utils.metrics_manager.initialize() # Keep a running list of various quantities: c_losses = [] g_losses = [] # Data iterator to iterate over our dataset it = iter(dataset) for iteration in range(iterations): # Train critic for _ in range(n_dis_updates_per_gen_update): c_loss = critic_train_step(next(it)) # Train generator g_loss = generator_train_step(next(it)) # Save Losses for plotting later c_losses.append(c_loss) g_losses.append(g_loss) display.clear_output(wait=True) fig = plt.figure(figsize=(15, 5)) line1, = plt.plot(range(iteration+1), c_losses, 'r') line2, = plt.plot(range(iteration+1), g_losses, 'k') plt.xlabel('Iterations') plt.ylabel('Losses') plt.legend((line1, line2), ('C-loss', 'G-loss')) display.display(fig) plt.close(fig) # Output training stats print('Iteration {}, c_loss={:.2f}, g_loss={:.2f}'.format(iteration, c_loss, g_loss)) # Save checkpoints, music metrics, generated output if iteration < 100 or iteration % 50 == 0 : # Check how the generator is doing by saving G's samples on fixed_noise fake_sample_x = generator((sample_c, sample_z), training=False) metrics_utils.metrics_manager.append_metrics_for_iteration(fake_sample_x.numpy(), iteration) if iteration % 50 == 0: # Save the checkpoint to disk. ckpt_manager.save(checkpoint_number=iteration) fake_sample_x = fake_sample_x.numpy() # plot the pianoroll display_utils.plot_pianoroll(iteration, sample_x[:4], fake_sample_x[:4], save_dir=train_dir) # generate the midi destination_path = path_utils.generated_midi_path_for_iteration(iteration, saveto_dir=sample_dir) midi_utils.save_pianoroll_as_midi(fake_sample_x[:4], destination_path=destination_path) ###Output _____no_output_____ ###Markdown We have started training!When using the Wasserstein loss function, we should train the critic to converge to ensure that the gradients for the generator update are accurate. This is in contrast to a standard GAN, where it is important not to let the critic get too strong, to avoid vanishing gradients.Therefore, using the Wasserstein loss removes one of the key difficulties of training GANs—how to balance the training of the discriminator and generator. With WGANs, we can simply train the critic several times between generator updates, to ensure it is close to convergence. A typical ratio used is five critic updates to one generator update. "Babysitting" the learning processGiven that training these models can be an investment in time and resources, we must to continuously monitor training in order to catch and address anomalies if/when they occur. Here are some things to look out for:**What should the losses look like?**The adversarial learning process is highly dynamic and high-frequency oscillations are quite common. 
However if either loss (critic or generator) skyrockets to huge values, plunges to 0, or get stuck on a single value, there is likely an issue somewhere.**Is my model learning?**- Monitor the critic loss and other music quality metrics (if applicable). Are they following the expected trajectories?- Monitor the generated samples (piano rolls). Are they improving over time? Do you see evidence of mode collapse? Have you tried listening to your samples?**How do I know when to stop?**- If the samples meet your expectations- Critic loss no longer improving- The expected value of the musical quality metrics converge to the corresponding expected value of the same metric on the training data How to measure sample quality during training Typically, when training any sort of neural networks, it is standard practice to monitor the value of the loss function throughout the duration of the training. The critic loss in WGANs has been found to correlate well with sample quality.While standard mechanisms exist for evaluating the accuracy of more traditional models like classifiers or regressors, evaluating generative models is an active area of research. Within the domain of music generation, this hard problem is even less well-understood.To address this, we take high-level measurements of our data and show how well our model produces music that aligns with those measurements. If our model produces music which is close to the mean value of these measurements for our training dataset, our music should match on general “shape”.We’ll look at three such measurements:- **Empty bar rate:** The ratio of empty bars to total number of bars.- **Pitch histogram distance:** A metric that captures the distribution and position of pitches.- **In Scale Ratio:** Ratio of the number of notes that are in C major key, which is a common key found in music, to the total number of notes. Evaluate resultsNow that we have finished training, let's find out how we did. We will analyze our model in several ways:1. Examine how the generator and critic losses changed while training2. Understand how certain musical metrics changed while training3. Visualize generated piano roll output for a fixed input at every iteration and create a video Let us first restore our last saved checkpoint. If you did not complete training but still want to continue with a pre-trained version, set `TRAIN = False`. ###Code ckpt = tf.train.Checkpoint(generator=generator) ckpt_manager = tf.train.CheckpointManager(ckpt, check_dir, max_to_keep=5) ckpt.restore(ckpt_manager.latest_checkpoint).expect_partial() print('Latest checkpoint {} restored.'.format(ckpt_manager.latest_checkpoint)) ###Output _____no_output_____ ###Markdown Plot losses ###Code display_utils.plot_loss_logs(g_losses, c_losses, figsize=(15, 5), smoothing=0.01) ###Output _____no_output_____ ###Markdown Observe how the critic loss (C_loss in the graph) decays to zero as we train. In WGAN-GPs, the critic loss decreases (almost) monotonically as you train. Plot metrics ###Code metrics_utils.metrics_manager.set_reference_metrics(training_data) metrics_utils.metrics_manager.plot_metrics() ###Output _____no_output_____ ###Markdown Each row here corresponds to a different music quality metric and each column denotes an instrument track. Observe how the expected value of the different metrics (blue scatter) approach the corresponding training set expected values (red) as the number of iterations increase. You might expect to see diminishing returns as the model converges. 
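To make the three measurements above concrete, here is a rough sketch (an editor's illustration, not the actual implementation in metrics_utils, which is not shown in this notebook) of how two of them can be computed directly from a batch of piano rolls shaped (samples, 32, 128, tracks), with positive values marking active notes. The pitch histogram distance additionally needs a reference distribution, so it is omitted here.
###Code
import numpy as np

def empty_bar_rate(pianorolls, steps_per_bar=16):
    # Fraction of (bar, track) slices that contain no notes at all
    # (4 steps per beat * 4 beats per bar = 16 steps per bar)
    n, t, p, tr = pianorolls.shape
    bars = pianorolls.reshape(n, t // steps_per_bar, steps_per_bar, p, tr)
    return ((bars > 0).sum(axis=(2, 3)) == 0).mean()

def in_scale_ratio(pianorolls, scale=(0, 2, 4, 5, 7, 9, 11)):
    # Share of active notes whose pitch class belongs to the C major scale
    notes = pianorolls > 0
    in_scale = np.isin(np.arange(128) % 12, scale)
    return (notes & in_scale[None, None, :, None]).sum() / max(notes.sum(), 1)

print(empty_bar_rate(training_data), in_scale_ratio(training_data))
###Output
_____no_output_____
###Markdown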
Generated samples during trainingThe function below helps you probe intermediate samples generated in the training process. Remember that the conditioned input here is sampled from our training data. Let's start by listening to and observing a sample at iteration 0 and then iteration 100. Notice the difference! ###Code # Enter an iteration number (can be divided by 50) and listen to the midi at that iteration iteration = 50 midi_file = os.path.join(sample_dir, 'iteration-{}.mid'.format(iteration)) display_utils.playmidi(midi_file) # Enter an iteration number (can be divided by 50) and look at the generated pianorolls at that iteration iteration = 50 pianoroll_png = os.path.join(train_dir, 'sample_iteration_%05d.png' % iteration) display.Image(filename=pianoroll_png) ###Output _____no_output_____ ###Markdown Let's see how the generated piano rolls change with the number of iterations. ###Code from IPython.display import Video display_utils.make_training_video(train_dir) video_path = "movie.mp4" Video(video_path) ###Output _____no_output_____ ###Markdown Inference Generating accompaniment for custom inputCongratulations! You have trained your very own WGAN-GP to generate music. Let us see how our generator performs on a custom input.The function below generates a new song based on "Twinkle Twinkle Little Star". ###Code latest_midi = inference_utils.generate_midi(generator, eval_dir, input_midi_file='./input_twinkle_twinkle.mid') display_utils.playmidi(latest_midi) ###Output _____no_output_____ ###Markdown We can also take a look at the generated piano rolls for a certain sample, to see how diverse they are! ###Code inference_utils.show_generated_pianorolls(generator, eval_dir, input_midi_file='./input_twinkle_twinkle.mid') ###Output _____no_output_____ ###Markdown What's next? Using your own data (Optional) To create your own dataset you can extract the piano roll from MIDI data. An example of creating a piano roll from a MIDI file is given below ###Code import numpy as np from pypianoroll import Multitrack midi_data = Multitrack('./input_twinkle_twinkle.mid') tracks = [track.pianoroll for track in midi_data.tracks] sample = np.stack(tracks, axis=-1) print(sample.shape) ###Output _____no_output_____
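###Markdown
If you want to feed your own MIDI through the trained generator, the extracted piano roll still has to be shaped like the training windows. Below is a minimal sketch (an editor's addition, with assumptions): it takes the sample array produced above, assumes you have already resampled it to 4 time steps per beat, and follows the {-1, 1} note convention mentioned in the prepare_dataset comment earlier in this notebook.
###Code
import numpy as np

def to_training_window(pianoroll, start=0, steps=32):
    # Take a 32-step (two-bar) window and binarize velocities into {-1, 1}
    window = pianoroll[start:start + steps]
    if window.shape[0] < steps:  # zero-pad clips shorter than two bars
        pad = np.zeros((steps - window.shape[0],) + window.shape[1:], dtype=window.dtype)
        window = np.concatenate([window, pad], axis=0)
    return np.where(window > 0, 1.0, -1.0).astype(np.float32)

window = to_training_window(sample)
print(window.shape)  # (32, 128, number_of_tracks)
###Output
_____no_output_____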
.ipynb_checkpoints/NumPy-Arrays-and-Vectorized-Computation-checkpoint.ipynb
###Markdown NumPy = Numerical Python Numpy is an important package for numerical computing in Python. Most computational packages use numpy multidimensional array as the main structure to store and manipulate data. Numpy is large topic. In this lecture, **we cover**:- Fast vectorized array operations for data manipulation, cleaning, subsetting, filtering, transformation, and any other kinds of computation.- Popular methods on array object like sorting, unique, and set operations- Efficient descriptive statistic, and summarizing data- Merging and joining together datasets- Expressing conditional logic as array expression instead of loops- Groupd-wise data manipulation (aggregration, transformation, function application) ###Code import numpy as np ###Output _____no_output_____ ###Markdown The Numpy ndarray: A Multidimensional Array Object Creating ndarrays To create an array, use the *array* function. This function accept any sequence-like object and produces a new NumPy array containing the passed data. ###Code data1 = [1, 2, 3] array1 = np.array(data1) # data passed in is a list array1 data2 = (4, 5, 6) array2 = np.array(data2) # data passed in is a tuple array2 type(array2) ###Output _____no_output_____ ###Markdown Obviously, we can create a numpy array directly as follow: ###Code array3 = np.array([7, 8, 9]) array3 ###Output _____no_output_____ ###Markdown If we passed a nested sequences to the *array* function, a multidimensional array is created ###Code data4 = [[1, 2, 3], [4, 5, 6]] array4 = np.array(data4) array4 np.array([[1, 2, 3], [4, 5, 6]]) ###Output _____no_output_____ ###Markdown We see that *data4* is a list of 2 element where each element is a list of 3 elements. Thus, *array4* is a *2x3* array (or matrix). We can check the number of dimensions of an array and its shape using ###Code array4.ndim # array4 has 2 dimension array4.shape # array 2 rows and 3 columns, the shape is returned in a 2-d tuple ###Output _____no_output_____ ###Markdown Some useful functions for creating new special arrays ###Code # Create array of 0s np.zeros([2, 3]) # Create array of 1s np.ones([3, 3]) # Create an array of range np.arange(10, 100) # Create an identity matrix np.eye(4) ###Output _____no_output_____ ###Markdown Arithmetic with Numpy Arrays When numerical data are stored in numpy arrays, we can perform batch operations on data (like matrix operations in math) without writing any loops. We call this feature vectorization. Any arithmetic operations between equal-size arrays applies the operation element-wise. For example, we have ###Code my_list = [[1, 2, 3], [4, 5, 6]] ###Output _____no_output_____ ###Markdown Then, we want to create a new list where its elements are elements of my_list squared as ###Code [[i**2 for i in item ] for item in my_list] ###Output _____no_output_____ ###Markdown Using numpy array and vectorizaton we can do the same but much simpler as ###Code array = np.array(my_list) array array * array ###Output _____no_output_____ ###Markdown Other arithmetic operations ###Code array - array array ** 3 1 / array ###Output _____no_output_____ ###Markdown We can compare two arrays of the same shape element-wise. The result is a boolean array. 
###Code array_2 = np.array([[2, 4, 0], [5, 1, 9]]) array_2 array array_2 > array ###Output _____no_output_____ ###Markdown Basic Indexing and Slicing For one dimensional numpy, slicing is similar to Python lists ###Code array = np.arange(100) array array[0] array[-2] array[:3] array[-3:] array[2:6] ###Output _____no_output_____ ###Markdown We should notice that array slices are **views** on the original array. This means that the data is not copied, and any modification to the view will be reflected in the source array. For example, ###Code array_slice = array[2:6] array_slice array_slice[0] = 99 array_slice array ###Output _____no_output_____ ###Markdown If we want a copy of a slice, we need to do it explicitly as ###Code array_slice_copied = array[-3:].copy() array_slice_copied array_slice_copied[:] = 99 array_slice_copied array ###Output _____no_output_____ ###Markdown For higher dimensional array, for example, 2-dimensional arrays, the elements at each indexc are no longer scalars but rather one-dimensional arrays. ###Code array_2d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) array_2d array_2d[2] ###Output _____no_output_____ ###Markdown We can select an individual element by two ways: ###Code # Access recursively the element at row 1, column 1 array_2d[1][1] # Comma separated list array_2d[0, 2] ###Output _____no_output_____ ###Markdown Indexing with slices ###Code array_2d # Return the first row array_2d[0] # Return the first two row array_2d[:2] # Return the second column array_2d[:, 2] # Return the square submatrix at the upper right corner array_2d[:2, -2:] ###Output _____no_output_____ ###Markdown **Remember again, slice is a view. Modify a slice changes the orginal array.** ###Code array_2d array_2d[:2, :2] = 99 array_2d ###Output _____no_output_____ ###Markdown Comparison Operators ###Code array = np.array([1, 2, 3, 4, 5, 6]) array array > 3 array >= 3 array == 3 array != 3 ###Output _____no_output_____ ###Markdown Boolean Arrays ###Code a = np.array([[2, -7, 1], [-4, 3, 8], [5, 0, -6]]) a # Which number is positive ? a > 0 # How many negative number ? (a < 0).sum() # Are there any number equal to 0 ? (a == 0).any() # Are all value less than 8 ? (a < 8).all() ###Output _____no_output_____ ###Markdown Boolean Indexing ###Code a = np.array([-2, -1, 0, 1, 2]) # Boolean mask a > 0 # Pass the boolean mask to index a[a > 0] # Create a random 2 dimensional array b = np.random.randint(1, 10, (3, 3)) b # Index with a boolean mask b[b <= 4] # We can set values of an array with boolean mask b[b==9] = 0 b a = np.array([1, 3, 5, 7, 9]) a np.where(a > 4) ###Output _____no_output_____ ###Markdown Fancy Indexing If we want to access elements at non consecutinuous index of a numpy array, we use fancy indexing. For example, ###Code a = np.random.randint(0, 9, 10) a ###Output _____no_output_____ ###Markdown We retrieve elements at even indicies of the above array as ###Code a[[0, 2, 5, 6, 8]] ###Output _____no_output_____ ###Markdown Fancy indexing also works with multiple dimensions arrays ###Code b = np.arange(9).reshape(3,3) b ###Output _____no_output_____ ###Markdown To get the elements at specific locations, we pass in two tuples. The first one indicates the row indicies and the second one determines column indicies. 
###Code # Get elements at the four corners of the array, the indicies of those position are (0, 0); (0, 2); (2, 0); (2, 2) row_indicies = (0, 0, 1, 2) column_indicies = (0, 2, 1, 1) b[row_indicies, column_indicies] ###Output _____no_output_____ ###Markdown We can combine fancy indexing with other indexing methods to get desired elements. ###Code # Simple + fancy b[1, [0, 2]] # Slicing + fancy b[[0, 2], -2:] # Boolean + fancy b[[True, False, True]][:, [0, 2]] ###Output _____no_output_____ ###Markdown We can create a new array by using fancy indexing ###Code b b[[0, 0, 1, 1, 2, 2]] ###Output _____no_output_____ ###Markdown We can modify data of array using fancy indexing ###Code b = np.arange(9).reshape(3, 3) b b[[0, 1], [2, 0]] = 99 b a = np.arange(10) a np.where(a > 3) ###Output _____no_output_____ ###Markdown Universal Function Array Arithmetic ###Code x = np.arange(5) print(x) print(x + 2) print(x - 2) print(x * 2) print(x / 2) print(x // 2) print(x ** 2) print(x % 2) ###Output [0 1 2 3 4] [2 3 4 5 6] [-2 -1 0 1 2] [0 2 4 6 8] [0. 0.5 1. 1.5 2. ] [0 0 1 1 2] [ 0 1 4 9 16] [0 1 0 1 0] ###Markdown Absolute value ###Code x = np.array([-2, -1, 0, 1, 2]) print(x) print(np.abs(x)) ###Output [-2 -1 0 1 2] [2 1 0 1 2] ###Markdown The numpy absolute function can work with complex numbers and return the magnitude of it. ###Code x = np.array([-1 + 1j, 2 - 2j, 3 + 4j]) print(x) print(np.abs(x)) ###Output [-1.+1.j 2.-2.j 3.+4.j] [1.41421356 2.82842712 5. ] ###Markdown Trigonometric functions ###Code x = np.linspace(-2 * np.pi, 2 * np.pi, 5) x y = np.sin(x) y import numpy as np import matplotlib.pyplot as plt plt.style.use("ggplot") x = np.linspace(-2 * np.pi, 2 * np.pi, 100) y = np.sin(x) plt.plot(x, y) ###Output _____no_output_____ ###Markdown Exponents and logarithms ###Code x = np.linspace(-2, 2, 100) y = np.exp(x) plt.plot(x, y) x = np.linspace(0.0001, 2, 1000) y = np.log(x) plt.plot(x, y) ###Output _____no_output_____ ###Markdown We can combine those functions to calculate complex math functions ###Code x = np.linspace(-5, 5, 1000) y = (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x)) plt.plot(x, y) ###Output _____no_output_____ ###Markdown Aggregations: Sum, Min, Max Summing the Values in an Array Given an array ###Code a = np.random.randn(5) a ###Output _____no_output_____ ###Markdown We can use the sum function of python or use the method of numpy as follow ###Code sum(a) a.sum() ###Output _____no_output_____ ###Markdown It is recommended to use the numpy version, because it is computed much more quickly ###Code big_array = np.random.randn(1000000) %time sum(big_array) %time big_array.sum() ###Output Wall time: 128 ms Wall time: 1.99 ms ###Markdown Min and Max Again, python has built in mix and max function. However, numpy version is better. ###Code a = np.random.randn(5) a a.min() a.max() ###Output _____no_output_____ ###Markdown Multi dimensional aggregates When we have two (or more) dimensionals array, we can choose which dimension to perform aggregration. 
###Code a = np.random.randn(3, 4) a print(a.sum()) # sum all of the elements print(a.sum(axis = 0)) # sum on column print(a.sum(axis = 1)) # sum on row print(a.min()) print(a.min(axis = 0)) #column print(a.min(axis = 1)) #row print(a.max()) print(a.max(axis = 0)) print(a.max(axis = 1)) ###Output 1.707122417643127 [-0.1141754 1.53973035 1.70712242 0.83702031] [1.53973035 0.53588138 1.70712242] ###Markdown Sorting To return a sorted version of the array without modifying the input, you can use *np.sort* ###Code a = np.random.randn(5) a np.sort(a) a ###Output _____no_output_____ ###Markdown To sort the array in-place, calling the *sort* method on the array ###Code a = np.random.randn(5) a a ###Output _____no_output_____ ###Markdown A related function is *argsort*, which instead returns the indices of the sorted elements: ###Code a = np.random.randn(5) a np.argsort(a) a[np.argsort(a)] np.argmin(a) a a.dtype ###Output _____no_output_____ ###Markdown Sorting along rows or columns A useful feature of NumPy's sorting algorithms is the ability to sort along specific rows or columns of a multidimensional array using the axis argument. For example: ###Code a = np.random.randint(0, 9, (4, 5)) a np.sort(a, axis=0) np.sort(a, axis=1) np.sort(a) ###Output _____no_output_____ ###Markdown Homework 1. Given a 1D array, negate all elements which are between 3 and 8, in place (not created a new array). 2. Create random vector of size 10 and replace the maximum value by 0 3. How to find common values between two arrays? 4. Reverse a vector (first element becomes last) 5. Create a 3x3 matrix with values ranging from 0 to 8 6. Find indices of non-zero elements from the array [1,2,0,0,4,0] 7. Create a 3x3x3 array with random values 8. Create a random vector of size 30 and find the mean value 9. Create a 2d array with 1 on the border and 0 inside 10. Given an array x of 20 integers in the range (0, 100) ###Code x = np.random.randint(0, 100, 20) x ###Output _____no_output_____ ###Markdown and an random float in the range (0, 20) ###Code y = np.random.uniform(0, 20) y ###Output _____no_output_____
notebooks/lgb-baseline-trxn.ipynb
###Markdown Features ###Code target['feature_5_0'] = target.feature_5 < 1e-10 target['feature_5_1'] = target.feature_5 > 1e-10 target['feature_4_0'] = target.feature_4 < 1e-10 target['feature_4_1'] = target.feature_4 > 1e-10 for col in ['feature_7', 'feature_8', 'feature_9', 'feature_10']: target[col] = target[col].fillna(target[col].mode()[0]) client = pd.read_csv(CLIENTS, sep=',') client.loc[client.education.isna(), 'education'] = 'MISSING' client.loc[(client.city > 1000) | (client.city == -1), 'city'] = 1001 client.loc[(client.region > 60) | (client.region == -1), 'region'] = 61 client['gender'] = client['gender'].fillna(value='F') client['age'] = client['age'].fillna(client['age'].mode()[0]) client.head() client = pd.get_dummies(client, columns=['education', 'job_type', 'citizenship', 'region', 'city', 'gender']) target = target.set_index('client_id').sort_index() client = client.set_index('client_id').sort_index() pd_train = target.join(client) pd_train.shape pd_trxn_features = pd.read_csv('trxn_features_2.csv', sep=',', index_col='client_id') pd_trxn_features.shape pd_train['has_trxn_features'] = pd_train.index.isin(set(pd_trxn_features.index)) pd_train = pd_train.join(pd_trxn_features) pd_train = pd_train.fillna(0) pd_train.shape pd_train.head() ###Output _____no_output_____ ###Markdown Train ###Code class KFoldGenerator: def __init__(self, path, df): locs = {v: k for k, v in enumerate(pd_train.index)} folds = [] for i in range(5): with open(os.path.join(path, f'fold_{i}_train.txt'), mode='r') as inp: tr = np.array([*map(int, inp)]) with open(os.path.join(path, f'fold_{i}_test.txt'), mode='r') as inp: te = np.array([*map(int, inp)]) folds.append((tr, te)) folds = [ ([locs[e] for e in fold_train], [locs[e] for e in fold_valid], ) for fold_train, fold_valid in folds ] self.folds = folds def __iter__(self): yield from self.folds kfold = KFoldGenerator(path='../folds/', df=pd_train) # I'd better use :) # from sklearn.model_selection import StratifiedKFold # kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=72) import lightgbm as lgb X_train = lgb.Dataset( data=pd_train.drop(['sale_flg', 'sale_amount', 'contacts', 'region_cd'], axis=1), label=pd_train['sale_flg'].to_numpy(), ) from sklearn.metrics import ( roc_auc_score, precision_recall_fscore_support, accuracy_score, ) params = { 'objective': 'binary', 'metric': 'auc', 'learning_rate': 0.05, 'subsample': 0.7, 'class_weight': 'balanced', 'colsample_bytree': 0.7, 'max_depth': 5, 'num_leaves': 256, } def update_learning_rate(num_rounds): if num_rounds <= 550: return 0.05 return 0.03 %%time trees = 1000 cv = lgb.cv(params, X_train, show_stdv=False, verbose_eval=True, num_boost_round=trees, early_stopping_rounds=50, return_cvbooster=True, folds=kfold) cvbooster = cv.pop('cvbooster', None) cv = pd.DataFrame(cv) cv[10:].plot(figsize=(6, 6), y=['auc-mean']) print(cv.loc[cv['auc-mean'].values.argmax()]) trees = cv['auc-mean'].values.argmax() trees feature_importance = [] for booster in cvbooster.boosters: feature_importance_ = pd.Series( data=booster.feature_importance('split'), index=booster.feature_name(), ) feature_importance.append(feature_importance_) feature_importance = pd.concat(feature_importance, axis=1) feature_importance_mean = feature_importance.median(axis=1).astype(int).rename('mean') feature_importance_std = feature_importance.std(axis=1).rename('std') indices = feature_importance_mean.argsort() feature_importance = feature_importance.iloc[indices] feature_importance_mean = feature_importance_mean[indices] 
feature_importance_std = feature_importance_std[indices] feature_importance_mean[::-1] feature_importance = [] for booster in cvbooster.boosters: feature_importance_ = pd.Series( data=booster.feature_importance('gain'), index=booster.feature_name(), ) feature_importance.append(feature_importance_) feature_importance = pd.concat(feature_importance, axis=1) feature_importance_mean = feature_importance.mean(axis=1).rename('mean') feature_importance_std = feature_importance.std(axis=1).rename('std') indices = feature_importance_mean.argsort() feature_importance = feature_importance.iloc[indices] feature_importance_mean = feature_importance_mean[indices] feature_importance_std = feature_importance_std[indices] feature_importance_mean[::-1] pd.concat([ feature_importance_mean, feature_importance_std, ], axis=1).iloc[-30:].plot.barh(y='mean', figsize=(10, 15), xerr='std') feature_importance.iloc[-30:].T.boxplot(figsize=(10, 15), vert=False) def eval_metrics(y_true, y_score, earnings, contacts_cnt, thrsh=0.5): auc = roc_auc_score(y_true, y_score) y_pred = y_score > thrsh acc = accuracy_score(y_true, y_pred) pre, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average='binary') anic = (y_pred * (earnings - 4000 * contacts_cnt)).mean() return auc, acc, pre, rec, f1, anic submission_pull = [] for (_, valid_idx), booster in zip(kfold, tqdm(cvbooster.boosters)): clients_valid, X_valid, y_valid = ( pd_train.iloc[valid_idx].index, pd_train.loc[:, X_train.get_feature_name()].iloc[valid_idx], pd_train.loc[:, 'sale_flg'].iloc[valid_idx], ) submission = pd.DataFrame(index=clients_valid) submission.index = submission.index.rename('client_id') submission['scores'] = booster.predict(X_valid) submission_pull.append(submission) submission = pd.concat(submission_pull, axis=0) submission.head() submission = submission.join(pd_train[['sale_flg', 'sale_amount', 'contacts']]) submission['sale_amount'] = submission['sale_amount'].fillna(0) submission.head() submission.shape, pd_train.shape thresholds = np.linspace(0, 1, 1000) scores = [ eval_metrics(submission['sale_flg'], submission['scores'], submission['sale_amount'], submission['contacts'], thrsh) for thrsh in tqdm(thresholds, position=0) ] scores = np.asarray(scores) fig, ax = plt.subplots(figsize=(8, 8)) _ = ax.plot(thresholds, scores[:, -1]) _ = ax.grid() thrsh_best = thresholds[np.argmax(scores[:, -1])] metrics_best = eval_metrics( submission['sale_flg'], submission['scores'], submission['sale_amount'], submission['contacts'], thrsh_best, ) print("ROC AUC: {:.6f}\n" "Accuarcy: {:.6f}\n" "Precision: {:.6f}\n" "Recall: {:.6f}\n" "F1-score: {:.6f}\n" "ANIC: {:.6f}\n".format(*metrics_best)) ###Output ROC AUC: 0.986462 Accuarcy: 0.942452 Precision: 0.763632 Recall: 0.934957 F1-score: 0.840654 ANIC: 6331.833694
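###Markdown
The ANIC value above is the average, over all validation clients, of sale_amount - 4000 * contacts, counted only for the clients the model decides to contact at the chosen threshold. A small usage sketch (an editor's addition) that turns the tuned threshold into a final contact decision:
###Code
# Editor's sketch: apply the tuned threshold and estimate total net income
submission['contact'] = (submission['scores'] > thrsh_best).astype(int)
net_income = (submission['contact']
              * (submission['sale_amount'] - 4000 * submission['contacts'])).sum()
print('Clients to contact:', submission['contact'].sum())
print('Total net income on validation folds: {:.2f}'.format(net_income))
###Output
_____no_output_____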
LogRunsToExperiment.ipynb
###Markdown Log runs to an experiment Types of experimentsThere are two types of experiments in MLflow: _notebook_ and _workspace_. * A notebook experiment is associated with a specific notebook. Databricks creates a notebook experiment by default when a run is started using `mlflow.start_run()` and there is no active experiment.* Workspace experiments are not associated with any notebook, and any notebook can log a run to these experiments by using the experiment name or the experiment ID when initiating a run. This notebook creates a Random Forest model on a simple dataset and uses the MLflow Tracking API to log the model and selected model parameters and metrics. ###Code # Import the dataset from scikit-learn and create the training and test datasets. from sklearn.model_selection import train_test_split from sklearn.datasets import load_diabetes db = load_diabetes() X = db.data y = db.target X_train, X_test, y_train, y_test = train_test_split(X, y) ###Output _____no_output_____ ###Markdown By default, MLflow runs are logged to the notebook experiment, as illustrated in the following code block. ###Code import mlflow import mlflow.sklearn from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_squared_error # In this run, neither the experiment_id nor the experiment_name parameter is provided. MLflow automatically creates a notebook experiment and logs runs to it. # Access these runs using the Experiment sidebar. Click Experiment at the upper right of this screen. with mlflow.start_run(): n_estimators = 100 max_depth = 6 max_features = 3 # Create and train model rf = RandomForestRegressor(n_estimators = n_estimators, max_depth = max_depth, max_features = max_features) rf.fit(X_train, y_train) # Make predictions predictions = rf.predict(X_test) # Log parameters mlflow.log_param("num_trees", n_estimators) mlflow.log_param("maxdepth", max_depth) mlflow.log_param("max_feat", max_features) # Log model mlflow.sklearn.log_model(rf, "random-forest-model") # Create metrics mse = mean_squared_error(y_test, predictions) # Log metrics mlflow.log_metric("mse", mse) ###Output _____no_output_____ ###Markdown To log MLflow runs to a workspace experiment, use `mlflow.set_experiment()` as illustrated in the following code block. An alternative is to set the experiment_id parameter in `mlflow.start_run()`; for example, `mlflow.start_run(experiment_id=1234567)`. ###Code # This run uses mlflow.set_experiment() to specify an experiment in the workspace where runs should be logged. # If the experiment specified by experiment_name does not exist in the workspace, MLflow creates it. # Access these runs using the experiment name in the workspace file tree. experiment_name = "/Shared/diabetes_experiment/" mlflow.set_experiment(experiment_name) with mlflow.start_run(): n_estimators = 110 max_depth = 8 max_features = 7 # Create and train model rf = RandomForestRegressor(n_estimators = n_estimators, max_depth = max_depth, max_features = max_features) rf.fit(X_train, y_train) # Make predictions predictions = rf.predict(X_test) # Log parameters mlflow.log_param("num_trees", n_estimators) mlflow.log_param("maxdepth", max_depth) mlflow.log_param("max_feat", max_features) # Log model mlflow.sklearn.log_model(rf, "random-forest-model") # Create metrics mse = mean_squared_error(y_test, predictions) # Log metrics mlflow.log_metric("mse", mse) ###Output _____no_output_____
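###Markdown
As noted above, an alternative to mlflow.set_experiment() is to pass experiment_id directly to mlflow.start_run(). A short sketch (an editor's addition): it looks up the ID of the experiment created in the previous cell and simply reuses the parameter and metric values already computed there.
###Code
# Look up the workspace experiment created above by name, then start a run in it by ID.
experiment_id = mlflow.get_experiment_by_name(experiment_name).experiment_id

with mlflow.start_run(experiment_id=experiment_id):
    # Reuse the values computed in the previous cell
    mlflow.log_param("num_trees", n_estimators)
    mlflow.log_metric("mse", mse)
###Output
_____no_output_____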
CaseStudies/12-Convolution.Neural.Network-Face.Recognition/Face_recognition_Project_CV_AIML_Online.ipynb
###Markdown **Face Recognition - VGG16 Transfer Learning**--- **Project Description**In this hands-on project, the goal is to build a face identification model to recognize faces. the objective is to use transfer learning from popular object detection model - VGG16. The steps involved in this project are: - Load the dataset and create the metadata. - Check some samples of metadata. - Load the pre-trained model and weights. - Generate embedding vectors for each face in the dataset. - Build distance metrics for identifying the distance between two given images. - Use PCA for dimentionality reduction. - Build SVM classifier to map each image to its right person. - Predict using the SVM model. Dataset**Aligned Face Dataset from Pinterest**This dataset contains 10,770 images for 100 people. All images are taken from 'Pinterest' and aligned using dlib library. ###Code %tensorflow_version 2.x import tensorflow tensorflow.__version__ ###Output _____no_output_____ ###Markdown Mount Google drive if you are using google colab- We recommend using Google Colab as you can face memory issues and longer runtimes while running on local ###Code from google.colab import drive drive.mount('/content/drive') ###Output Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly Enter your authorization code: ·········· Mounted at /content/drive ###Markdown Change current working directory to project folder (1 mark) ###Code proj_path = "/content/drive/My Drive/Colab Notebooks/DLCP/FaceRecognition-VGG16/" # %cd $proj_path # 1st way import os, sys # 2nd way os.chdir(proj_path) # Add the path to the sys.path for this session sys.path.append(proj_path) ###Output _____no_output_____ ###Markdown Extract the zip file (2 marks)- Extract Aligned Face Dataset from Pinterest.zip ###Code pinterest_images = 'Aligned Face Dataset from Pinterest.zip' import zipfile archive = zipfile.ZipFile(pinterest_images, 'r') # Changing the working directory makes sure it is extracted in our project path # archive.extractall() ###Output _____no_output_____ ###Markdown Function to load images- Define a function to load the images from the extracted folder and map each image with person id ###Code import numpy as np import os class IdentityMetadata(): def __init__(self, base, name, file): # print(base, name, file) # dataset base directory self.base = base # identity name self.name = name # image file name self.file = file def __repr__(self): return self.image_path() def image_path(self): return os.path.join(self.base, self.name, self.file) def load_metadata(path): metadata = [] for i in os.listdir(path): for f in os.listdir(os.path.join(path, i)): # Check file extension. Allow only jpg/jpeg' files. 
ext = os.path.splitext(f)[1] if ext == '.jpg' or ext == '.jpeg': metadata.append(IdentityMetadata(path, i, f)) return np.array(metadata) # metadata = load_metadata('images') metadata = load_metadata('PINS') ###Output _____no_output_____ ###Markdown Define function to load image- Define a function to load image from the metadata ###Code import cv2 def load_image(path): img = cv2.imread(path, 1) # OpenCV loads images with color channels # in BGR order. So we need to reverse them return img[...,::-1] ###Output _____no_output_____ ###Markdown Load a sample image (2 marks)- Load one image using the function "load_image" ###Code import matplotlib.pyplot as plt plt.imshow(load_image(metadata[np.random.randint(0, 10770)].image_path())) ###Output _____no_output_____ ###Markdown VGG Face model- Here we are giving you the predefined model for VGG face ###Code from tensorflow.keras.models import Sequential from tensorflow.keras.layers import ZeroPadding2D, Convolution2D, MaxPooling2D, Dropout, Flatten, Activation def vgg_face(): model = Sequential() model.add(ZeroPadding2D((1,1),input_shape=(224,224, 3))) model.add(Convolution2D(64, (3, 3), activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D((2,2), strides=(2,2))) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(128, (3, 3), activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(128, (3, 3), activation='relu')) model.add(MaxPooling2D((2,2), strides=(2,2))) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(256, (3, 3), activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(256, (3, 3), activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(256, (3, 3), activation='relu')) model.add(MaxPooling2D((2,2), strides=(2,2))) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(512, (3, 3), activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(512, (3, 3), activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(512, (3, 3), activation='relu')) model.add(MaxPooling2D((2,2), strides=(2,2))) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(512, (3, 3), activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(512, (3, 3), activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(512, (3, 3), activation='relu')) model.add(MaxPooling2D((2,2), strides=(2,2))) model.add(Convolution2D(4096, (7, 7), activation='relu')) model.add(Dropout(0.5)) model.add(Convolution2D(4096, (1, 1), activation='relu')) model.add(Dropout(0.5)) model.add(Convolution2D(2622, (1, 1))) model.add(Flatten()) # model.add(Activation('relu')) model.add(Activation('softmax')) return model ###Output _____no_output_____ ###Markdown Load the model (2 marks)- Load the model defined above- Then load the given weight file named "vgg_face_weights.h5" ###Code model = vgg_face() model.load_weights('vgg_face_weights.h5') model.summary() ###Output Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= zero_padding2d (ZeroPadding2 (None, 226, 226, 3) 0 _________________________________________________________________ conv2d (Conv2D) (None, 224, 224, 64) 1792 _________________________________________________________________ zero_padding2d_1 (ZeroPaddin (None, 226, 226, 64) 0 _________________________________________________________________ 
conv2d_1 (Conv2D) (None, 224, 224, 64) 36928 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 112, 112, 64) 0 _________________________________________________________________ zero_padding2d_2 (ZeroPaddin (None, 114, 114, 64) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 112, 112, 128) 73856 _________________________________________________________________ zero_padding2d_3 (ZeroPaddin (None, 114, 114, 128) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 112, 112, 128) 147584 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 56, 56, 128) 0 _________________________________________________________________ zero_padding2d_4 (ZeroPaddin (None, 58, 58, 128) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 56, 56, 256) 295168 _________________________________________________________________ zero_padding2d_5 (ZeroPaddin (None, 58, 58, 256) 0 _________________________________________________________________ conv2d_5 (Conv2D) (None, 56, 56, 256) 590080 _________________________________________________________________ zero_padding2d_6 (ZeroPaddin (None, 58, 58, 256) 0 _________________________________________________________________ conv2d_6 (Conv2D) (None, 56, 56, 256) 590080 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 28, 28, 256) 0 _________________________________________________________________ zero_padding2d_7 (ZeroPaddin (None, 30, 30, 256) 0 _________________________________________________________________ conv2d_7 (Conv2D) (None, 28, 28, 512) 1180160 _________________________________________________________________ zero_padding2d_8 (ZeroPaddin (None, 30, 30, 512) 0 _________________________________________________________________ conv2d_8 (Conv2D) (None, 28, 28, 512) 2359808 _________________________________________________________________ zero_padding2d_9 (ZeroPaddin (None, 30, 30, 512) 0 _________________________________________________________________ conv2d_9 (Conv2D) (None, 28, 28, 512) 2359808 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 14, 14, 512) 0 _________________________________________________________________ zero_padding2d_10 (ZeroPaddi (None, 16, 16, 512) 0 _________________________________________________________________ conv2d_10 (Conv2D) (None, 14, 14, 512) 2359808 _________________________________________________________________ zero_padding2d_11 (ZeroPaddi (None, 16, 16, 512) 0 _________________________________________________________________ conv2d_11 (Conv2D) (None, 14, 14, 512) 2359808 _________________________________________________________________ zero_padding2d_12 (ZeroPaddi (None, 16, 16, 512) 0 _________________________________________________________________ conv2d_12 (Conv2D) (None, 14, 14, 512) 2359808 _________________________________________________________________ max_pooling2d_4 (MaxPooling2 (None, 7, 7, 512) 0 _________________________________________________________________ conv2d_13 (Conv2D) (None, 1, 1, 4096) 102764544 _________________________________________________________________ dropout (Dropout) (None, 1, 1, 4096) 0 _________________________________________________________________ conv2d_14 (Conv2D) (None, 1, 1, 4096) 16781312 _________________________________________________________________ 
dropout_1 (Dropout) (None, 1, 1, 4096) 0 _________________________________________________________________ conv2d_15 (Conv2D) (None, 1, 1, 2622) 10742334 _________________________________________________________________ flatten (Flatten) (None, 2622) 0 _________________________________________________________________ activation (Activation) (None, 2622) 0 ================================================================= Total params: 145,002,878 Trainable params: 145,002,878 Non-trainable params: 0 _________________________________________________________________ ###Markdown Get vgg_face_descriptor ###Code from tensorflow.keras.models import Model vgg_face_descriptor = Model(inputs=model.layers[0].input, outputs=model.layers[-2].output) ###Output _____no_output_____ ###Markdown Generate embeddings for each image in the dataset- Given below is an example to load the first image in the metadata and get its embedding vector from the pre-trained model. ###Code # Get embedding vector for first image in the metadata using the pre-trained model img_path = metadata[0].image_path() img = load_image(img_path) # Normalising pixel values from [0-255] to [0-1]: scale RGB values to interval [0,1] img = (img / 255.).astype(np.float32) img = cv2.resize(img, dsize = (224,224)) print(img.shape) # Obtain embedding vector for an image # Get the embedding vector for the above image using vgg_face_descriptor model and print the shape embedding_vector = vgg_face_descriptor.predict(np.expand_dims(img, axis=0))[0] print(embedding_vector.shape) ###Output (224, 224, 3) (2622,) ###Markdown Generate embeddings for all images (5 marks)- Write code to iterate through metadata and create embeddings for each image using `vgg_face_descriptor.predict()` and store in a list with name `embeddings`- If there is any error in reading any image in the dataset, fill the emebdding vector of that image with 2622-zeroes as the final embedding from the model is of length 2622. ###Code # Method to generate embeddings for all images def generate_all_embeddings(metadata): # Create an embedding vector of all zeros, then fill it up with actual image embeddings iteratively. 
embeddings = np.zeros((metadata.shape[0], 2622)) for idx, meta in enumerate(metadata): try: img = load_image(meta.image_path()) # scale RGB values to interval [0,1] img = cv2.resize(img, dsize = (224,224)) img = (img / 255.).astype(np.float32) # obtain embedding vector for image embeddings[idx] = vgg_face_descriptor.predict(np.expand_dims(img, axis=0))[0] except Exception as ex: print('Could not generate embedding s for', meta.image_path(), ' Exception--', str(ex)) return embeddings import pickle embedding_pkl = 'embeddings.pickle' # Generate all embeddings and serialize it in the drive if os.path.isfile(embedding_pkl) and os.path.getsize(embedding_pkl) > 0: embeddings = pickle.load(open(embedding_pkl,"rb")) else: embeddings = generate_all_embeddings(metadata) with open(embedding_pkl, 'wb') as handle: pickle.dump(embeddings, handle, protocol=pickle.HIGHEST_PROTOCOL) ###Output _____no_output_____ ###Markdown Function to calculate distance between given 2 pairs of images.- Consider distance metric as "Squared L2 distance"- Squared l2 distance between 2 points (x1, y1) and (x2, y2) = (x1-x2)^2 + (y1-y2)^2 ###Code def distance(emb1, emb2): return np.sum(np.square(emb1 - emb2)) ###Output _____no_output_____ ###Markdown Plot images and get distance between the pairs given below- 2, 3 and 2, 180- 30, 31 and 30, 100- 70, 72 and 70, 115 ###Code import matplotlib.pyplot as plt def show_pair(idx1, idx2): plt.figure(figsize=(8,3)) plt.suptitle(f'Distance = {distance(embeddings[idx1], embeddings[idx2]):.2f}') plt.subplot(121) plt.imshow(load_image(metadata[idx1].image_path())) plt.subplot(122) plt.imshow(load_image(metadata[idx2].image_path())); show_pair(2, 3) show_pair(2, 180) ###Output _____no_output_____ ###Markdown Create train and test sets (5 marks)- Create X_train, X_test and y_train, y_test- Use train_idx to seperate out training features and labels- Use test_idx to seperate out testing features and labels ###Code train_idx = np.arange(metadata.shape[0]) % 9 != 0 test_idx = np.arange(metadata.shape[0]) % 9 == 0 # one half as train examples of 10 identities X_train = embeddings[train_idx] # another half as test examples of 10 identities X_test = embeddings[test_idx] targets = np.array([m.name for m in metadata]) y_train = targets[train_idx] y_test = targets[test_idx] ###Output _____no_output_____ ###Markdown Encode the Labels (3 marks)- Encode the targets- Use LabelEncoder ###Code from sklearn.preprocessing import LabelEncoder le = LabelEncoder() # Numerical encoding of identities y_train = le.fit_transform(y_train) y_test = le.transform(y_test) ###Output _____no_output_____ ###Markdown Standardize the feature values (3 marks)- Scale the features using StandardScaler ###Code # Standarize features from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) ###Output _____no_output_____ ###Markdown Reduce dimensions using PCA (3 marks)- Reduce feature dimensions using Principal Component Analysis ###Code from sklearn.decomposition import PCA # Create a covariance matrix and calculate eigen values covar_mat = PCA().fit(X_train) # calculate variance ratios var = covar_mat.explained_variance_ratio_;var # cumulative sum of variance explained with [n] features eigen_vals = np.round(covar_mat.explained_variance_ratio_, decimals=3)*100 np.cumsum(eigen_vals) threshold=90 def generate_scree_plot(covar_matrix, threshold): var = covar_matrix.explained_variance_ eigen_vals = np.cumsum(np.round(covar_matrix.explained_variance_ratio_, 
decimals=3)*100) f, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(20,7)) f.suptitle('PCA Scree plot') ax1.plot(np.arange(1, len(var)+1), var, '-g') ax1.set_title('Explained Variance') ax1.set_xlabel('# of Components') ax1.set_ylabel('Eigen Values') ax2.plot(np.arange(1, len(eigen_vals)+1), eigen_vals, ':k', marker='o', markerfacecolor='red', markersize=8) ax2.set_xticks(np.arange(1, len(eigen_vals)+1)) ax2.axhline(y=threshold, color='r', linestyle=':', label='Threshold(90%)') ax2.legend() ax2.plot(np.arange(sum(eigen_vals <= threshold) + 1, len(eigen_vals) + 1), [val for val in eigen_vals if val > threshold], '-bo') ax2.set_ylim(bottom=threshold-10, top=95) ax2.set_xlim([150,170]) ax2.set_title('Cumulative sum Explained Variance Ratio') ax2.set_xlabel('# of Components') ax2.set_ylabel('% Variance Explained') generate_scree_plot(covar_mat, threshold=threshold) ###Output _____no_output_____ ###Markdown **Observation**:Though there is no sharp elbow point in the explained variance plot, but its is obvious from the Cumulative sum explained variance ratio plt that there are 163 components explaining more than 90% of variance. Hence considering n_component as 163 for PCA ###Code pca = PCA(n_components=163, svd_solver='randomized', whiten=True) X_train = pca.fit_transform(X_train) X_test = pca.transform(X_test) ###Output _____no_output_____ ###Markdown Build a Classifier (3 marks)- Use SVM Classifier to predict the person in the given image- Fit the classifier and print the score ###Code from sklearn.svm import SVC clf = SVC(kernel='rbf', class_weight=None , C=10000000, gamma='auto') clf.fit(X_train, y_train) print('Score of the classifier: %.2f%%' % (clf.score(X_test, y_test) * 100)) ###Output Score of the classifier: 96.32% ###Markdown Test results (1 mark)- Take 10th image from test set and plot the image- Report to which person(folder name in dataset) the image belongs to ###Code import warnings # Suppress LabelEncoder warning warnings.filterwarnings('ignore') example_idx = 10 example_image = load_image(metadata[test_idx][example_idx].image_path()) example_prediction = clf.predict([X_test[example_idx]]) example_identity = le.inverse_transform(example_prediction)[0] plt.imshow(example_image) plt.title(f'Identified as {example_identity}'); ###Output _____no_output_____
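###Markdown The observation above reads the 163-component cutoff off the scree plot by eye. Below is a minimal sketch of picking the same 90% variance threshold programmatically; it uses a small synthetic matrix as a stand-in for the standardized face embeddings, so the resulting component count is illustrative only. ###Code
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for the standardized embedding matrix (the real one is ~10,000 x 2622).
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(500, 200))

# Fit a full PCA once, then take the smallest number of components whose
# cumulative explained variance reaches the 90% threshold.
cumvar = np.cumsum(PCA().fit(X_demo).explained_variance_ratio_)
n_components = int(np.argmax(cumvar >= 0.90) + 1)
print(f"{n_components} components explain {cumvar[n_components - 1]:.1%} of the variance")

# Refit with exactly that many components for the downstream SVM.
pca = PCA(n_components=n_components, svd_solver='randomized', whiten=True, random_state=0)
X_reduced = pca.fit_transform(X_demo)
print(X_reduced.shape)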
ML_com_classificacao_parte_6.ipynb
###Markdown Importando dados ###Code import csv def carregar_acessos(): X = [] Y = [] arquivo = open('acesso.csv', 'r') leitor = csv.reader(arquivo) next(leitor) for acessou_home,acessou_como_funciona,acessou_contato, comprou in leitor: dado = [int(acessou_home),int(acessou_como_funciona),int(acessou_contato)] X.append(dado) Y.append(int(comprou)) return X, Y X, Y = carregar_acessos() print(X) print(Y) ###Output [0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0] ###Markdown Testando Modelo ###Code from sklearn.naive_bayes import MultinomialNB modelo = MultinomialNB() modelo.fit(X, Y) print(modelo.predict([[1,0,1]])) print(modelo.predict([[1,0,1],[0,1,0]])) print(modelo.predict([[1,0,1],[0,1,0],[1,0,0]])) print(modelo.predict([[1,0,1],[0,1,0], [1,0,0], [1,1,0]])) print(modelo.predict([[1,0,1],[0,1,0], [1,0,0], [1,1,0], [1,1,1]])) # restante do código resultado = modelo.predict(X) diferencas = resultado - Y acertos = [d for d in diferencas if d == 0] total_de_acertos = len(acertos) total_de_elementos = len(X) taxa_de_acerto = 100.0 * total_de_acertos / total_de_elementos print(taxa_de_acerto) print(total_de_elementos) treino_dados = X[:90] treino_marcacoes = Y[:90] teste_dados = X[-9:] teste_marcacoes = Y[-9:] modelo.fit(treino_dados, treino_marcacoes) resultado = modelo.predict(teste_dados) diferencas = resultado - teste_marcacoes acertos = [d for d in diferencas if d == 0] total_de_acertos = len(acertos) total_de_elementos = len(teste_dados) taxa_de_acerto = 100.0 * total_de_acertos / total_de_elementos print(taxa_de_acerto) print(total_de_elementos) ###Output 88.88888888888889 9
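###Markdown A minimal sketch of the same evaluation using scikit-learn's train_test_split and accuracy_score instead of the manual slicing and difference count; the random 99 x 3 matrix below is only a stand-in for the acesso.csv page-visit features. ###Code
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the three binary page-visit columns and the "bought" label.
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(99, 3))
y = (X.sum(axis=1) > 1).astype(int)

# Hold out 9 test rows as above, but let scikit-learn do the split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=9, random_state=0)

modelo = MultinomialNB()
modelo.fit(X_train, y_train)

# accuracy_score replaces the manual count of zero differences.
print(f"accuracy: {accuracy_score(y_test, modelo.predict(X_test)) * 100:.1f}%")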
sentiment-verified.ipynb
###Markdown Sentiment assessment of verified users ###Code import datetime import os import re import numpy as np import pandas as pd import matplotlib import matplotlib.pyplot as plt import matplotlib.dates as mdates from IPython.display import clear_output DATADIRECTORYALL = "../data/sentiment/ALL-pattern/" DATADIRECTORYRIVM = "../data/sentiment/rivm-pattern/" DATADIRECTORYTEXT = "../data/text/" DATADIRECTORYEMOJI = "../data/sentiment-emoji/" SENTIMENT = "sentiment" COUNT = "count" DATA = "data" LABEL = "label" HIGHLIGHT = "highlight" HIGHLIGHTLABEL = "highlightlabel" FILEPATTERNALL = "2.*z" IDSTR = "id_str" VERIFIED = "verified" def squeal(text=None): clear_output(wait=True) if not text is None: print(text) def getSentimentPerHourVerified(dataDirectory,filePattern=FILEPATTERNALL): fileList = sorted(os.listdir(dataDirectory)) sentimentPerHour = {} for inFileName in fileList: if re.search(filePattern,inFileName): squeal(inFileName) try: df = pd.read_csv(dataDirectory+inFileName,compression="gzip",header=None,index_col=0) dfText = pd.read_csv(DATADIRECTORYTEXT+inFileName,compression="gzip",index_col=IDSTR) dictVerified = {i:df.loc[i] for i in dfText.index if dfText.loc[i][VERIFIED] == 1 and i in df.index} dfVerified = pd.DataFrame.from_dict(dictVerified).T except: continue sentiment = sum(dfVerified[1])/len(dfVerified) hour = inFileName[0:11] sentimentPerHour[hour] = { SENTIMENT:sentiment, COUNT:len(dfVerified) } sentimentPerHour = {key:sentimentPerHour[key] for key in sorted(sentimentPerHour.keys())} return(sentimentPerHour) def makeSentimentPerDay(sentimentPerHour): sentimentPerDay = {} for hour in sentimentPerHour: day = re.sub("..$","12",hour) if not day in sentimentPerDay: sentimentPerDay[day] = {SENTIMENT:0,COUNT:0} sentimentPerDay[day][SENTIMENT] += sentimentPerHour[hour][SENTIMENT]*sentimentPerHour[hour][COUNT] sentimentPerDay[day][COUNT] += sentimentPerHour[hour][COUNT] for day in sentimentPerDay: sentimentPerDay[day][SENTIMENT] /= sentimentPerDay[day][COUNT] return(sentimentPerDay) DATEFORMATHOUR = "%Y%m%d-%H" DATEFORMATMONTH = "%-d/%-m" DATEFORMATHRSMINS = "%H:%M" DEFAULTTITLE = "Sentiment scores of Dutch tweets of verified users" def visualizeSentiment(dataSources,title=DEFAULTTITLE,dateFormat=DATEFORMATMONTH): font = {"size":14} matplotlib.rc("font",**font) fig,ax = plt.subplots(figsize=(12,6)) ax.xaxis.set_major_formatter(mdates.DateFormatter(dateFormat)) for i in range(0,len(dataSources)): data = dataSources[i][DATA] label = dataSources[i][LABEL] lineData= ax.plot_date([datetime.datetime.strptime(key,DATEFORMATHOUR) for key in data],\ [data[key][SENTIMENT] for key in data],xdate=True,fmt="-",label=label) if HIGHLIGHT in dataSources[i]: highlight = dataSources[i][HIGHLIGHT] highlightlabel = dataSources[i][HIGHLIGHTLABEL] color = lineData[-1].get_color() ax.plot_date([datetime.datetime.strptime(key,DATEFORMATHOUR) for key in highlight], [data[key][SENTIMENT] for key in highlight],\ fmt="o",color=color,label=highlightlabel) plt.title(title) plt.legend(framealpha=0.2) plt.show() return(ax) highlight = ["20200301-12","20200309-12",\ "20200312-12","20200315-12","20200317-12","20200319-12","20200323-12","20200331-12","20200407-12",\ "20200415-12","20200421-12","20200429-12","20200506-12","20200513-12","20200519-12","20200527-12"] # ,"20200603-12"] sentimentPerHour = getSentimentPerHourVerified(DATADIRECTORYALL,filePattern="20200[2-5]") sentimentPerDay = makeSentimentPerDay(sentimentPerHour) dummy = visualizeSentiment([{DATA:sentimentPerHour,LABEL:"per hour"}, 
{DATA:sentimentPerDay,LABEL:"per day",\ HIGHLIGHT:highlight,HIGHLIGHTLABEL:"press conference"}],\ title=DEFAULTTITLE) pd.DataFrame.from_dict(sentimentPerHour).T.to_csv("sentiment-verified.csv",index_label="date") ###Output _____no_output_____
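###Markdown makeSentimentPerDay above builds the count-weighted daily average with an explicit dictionary loop; here is a minimal sketch of the same aggregation with a pandas resample, using a small synthetic hourly table in place of the real sentiment files. ###Code
import numpy as np
import pandas as pd

# Synthetic stand-in for sentimentPerHour: hourly mean sentiment plus tweet counts.
hours = pd.date_range("2020-03-01", periods=72, freq=pd.Timedelta(hours=1))
rng = np.random.default_rng(1)
hourly = pd.DataFrame({
    "sentiment": rng.uniform(-0.2, 0.4, size=len(hours)),
    "count": rng.integers(50, 500, size=len(hours)),
}, index=hours)

# Count-weighted daily mean: sum(sentiment * count) / sum(count) per day.
weighted = (hourly["sentiment"] * hourly["count"]).resample("D").sum()
per_day = weighted / hourly["count"].resample("D").sum()
print(per_day)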
examples/jupyter_notebooks/vseg-onnx.ipynb
###Markdown Semantic Segmentation Inference using ONNX RuntimeIn this example notebook, we describe how to use a pre-trained Semantic Segmentation model for inference using the ONNX Runtime interface. - The user can choose the model (see section titled *Choosing a Pre-Compiled Model*) - The models used in this example were trained on either ***City Scapes*** or ***ADE 20K*** datasets because they are widely used dataset developed for training and benchmarking semantic segmentation AI models. - We perform inference on a few sample images. - We also describe the input preprocessing and output postprocessing steps, demonstrate how to collect various benchmarking statistics and how to visualize the data. Choosing a Pre-Compiled ModelWe provide a set of precompiled artifacts to use with this notebook that will appear as a drop-down list once the first code cell is executed. Semantic SegmentationSemantic Segmentation is a popular computer vision algorithm used in many applications such as Free Space Detection and Lane Detection. The image below shows semantic segmentation results on few sample images. ONNX Runtime based Work flowThe diagram below describes the steps for ONNX Runtime based workflow. Note:- The user needs to compile models(sub-graph creation and quantization) on a PC to generate model artifacts. - For this notebook we use pre-compiled models artifacts- The generated artifacts can then be used to run inference on the target.- Users can run this notebook as-is, only action required is to select a model. ###Code import os import cv2 import numpy as np import ipywidgets as widgets from scripts.utils import get_eval_configs last_artifacts_id = selected_model_id.value if "selected_model_id" in locals() else None prebuilt_configs, selected_model_id = get_eval_configs('segmentation','onnxrt', num_quant_bits = 8, last_artifacts_id = last_artifacts_id) display(selected_model_id) print(f'Selected Model: {selected_model_id.label}') config = prebuilt_configs[selected_model_id.value] config['session'].set_param('model_id', selected_model_id.value) config['session'].start() ###Output _____no_output_____ ###Markdown Define utility function to preprocess input imagesBelow, we define a utility function to preprocess images for the model. This function takes a path as input, loads the image and preprocesses the images as required by the model. The steps below are shown as a reference (no user action required): 1. Load image 2. Convert BGR image to RGB 3. Scale image 4. Apply per-channel pixel scaling and mean subtraction 5. Convert RGB Image to BGR. 6. Convert the image to NCHW format- The input arguments of this utility function is selected automatically by this notebook based on the model selected in the drop-down ###Code def preprocess(image_path, size, mean, scale, layout, reverse_channels): # Step 1 img = cv2.imread(image_path) # Step 2 img = img[:,:,::-1] # Step 3 img = cv2.resize(img, (size[1], size[0]), interpolation=cv2.INTER_CUBIC) # Step 4 img = img.astype('float32') for mean, scale, ch in zip(mean, scale, range(img.shape[2])): img[:,:,ch] = ((img.astype('float32')[:,:,ch] - mean) * scale) # Step 5 if reverse_channels: img = img[:,:,::-1] # Step 6 if layout == 'NCHW': img = np.expand_dims(np.transpose(img, (2,0,1)),axis=0) else: img = np.expand_dims(img,axis=0) return img ###Output _____no_output_____ ###Markdown Create the model using the stored artifactsWarning: It is recommended to use the ONNX Runtime APIs in the cells below without any modifications. 
###Code import onnxruntime as rt onnx_model_path = config['session'].get_param('model_file') delegate_options = {} so = rt.SessionOptions() delegate_options['artifacts_folder'] = config['session'].get_param('artifacts_folder') EP_list = ['TIDLExecutionProvider','CPUExecutionProvider'] sess = rt.InferenceSession(onnx_model_path ,providers=EP_list, provider_options=[delegate_options, {}], sess_options=so) input_details = sess.get_inputs() output_details = sess.get_outputs() ###Output _____no_output_____ ###Markdown Run the model for inference Preprocessing and Inference - We perform inference on a set of images from the `/sample-images` directory. - We use a loop to preprocess the selected images, and provide them as the input to the network. Postprocessing and Visualization - Once the inference results are available, we postpocess the results and visualize the inferred classes for each of the input images. - Semantic segmentation models return results as a list (i.e. `numpy.ndarray`) with one element to represent the class ID. - We use the `seg_mask_overlay()` function to postprocess the results. - Then, in this notebook, we use *matplotlib* to plot the original images and the corresponding results. ###Code from scripts.utils import get_preproc_props # use results from the past inferences images = [('sample-images/ADE_val_00001801.jpg', 221), ('sample-images/ti_lindau_I00000.jpg', 222)] size, mean, scale, layout, reverse_channels = get_preproc_props(config) print(f'Image size: {size}') import tqdm import matplotlib.pyplot as plt from PIL import Image from scripts.utils import seg_mask_overlay plt.figure(figsize=(20,10)) for num in tqdm.trange(len(images)): image_file, grid = images[num] img = Image.open(image_file).convert('RGB') ax = plt.subplot(grid) img_in = preprocess(image_file , size, mean, scale, layout, reverse_channels) if not input_details[0].type == 'tensor(float)': img_in = np.uint8(img_in) res = sess.run(None, {input_details[0].name: img_in}) org_size = img.size img = seg_mask_overlay(res, img, layout).resize(org_size) ax.imshow(img) plt.show() ###Output _____no_output_____ ###Markdown Plot Inference benchmarking statistics - During model execution several benchmarking statistics such as timestamps at different checkpoints, DDR bandwidth are collected and stored. - The `get_TI_benchmark_data()` function can be used to collect these statistics. The statistics are collected as a dictionary of `annotations` and corresponding markers. - We provide the utility function plot_TI_benchmark_data to visualize these benchmark KPIs.Note: The values represented by Inferences Per Second and Inference Time Per Image uses the total time taken by the inference except the time taken for copying inputs and outputs. In a performance oriented system, these operations can be bypassed by writing the data directly into shared memory and performing on-the-fly input / output normalization. 
###Code from scripts.utils import plot_TI_performance_data, plot_TI_DDRBW_data, get_benchmark_output stats = sess.get_TI_benchmark_data() fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10,5)) plot_TI_performance_data(stats, axis=ax) plt.show() tt, st, rb, wb = get_benchmark_output(stats) print(f'SoC: J721E/DRA829/TDA4VM') print(f' OPP:') print(f' Cortex-A72 @2GHZ') print(f' DSP C7x-MMA @1GHZ') print(f' DDR @4266 MT/s\n') print(f'{selected_model_id.label} :') print(f' Inferences Per Second : {1000.0/tt :7.2f} fps') print(f' Inference Time Per Image : {tt :7.2f} ms') print(f' DDR usage Per Image : {rb+ wb : 7.2f} MB') ###Output _____no_output_____
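###Markdown A minimal sketch of the same session workflow on a plain CPU build of ONNX Runtime, without the TIDL delegate; 'model.onnx' is a placeholder path, and the random tensor only exercises the input/output plumbing rather than producing a meaningful segmentation. ###Code
import numpy as np
import onnxruntime as rt

# Placeholder path -- point this at any ONNX model available locally.
onnx_model_path = 'model.onnx'

# CPU-only session: no provider_options are needed off-target.
sess = rt.InferenceSession(onnx_model_path, providers=['CPUExecutionProvider'])

inp = sess.get_inputs()[0]
print('input:', inp.name, inp.shape, inp.type)

# Build a random tensor matching the declared shape (symbolic dims fall back to 1).
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dtype = np.float32 if inp.type == 'tensor(float)' else np.uint8
dummy = np.random.rand(*shape).astype(dtype)

outputs = sess.run(None, {inp.name: dummy})
print('output shapes:', [np.asarray(o).shape for o in outputs])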
Python_scripts/02_MAGICC_probab_run.ipynb
###Markdown Running MAGICC in ParallelThe code in this notebook is a work in progress so it is quite verbose. In future prettier wrappers can be written but for now it's helpful to have things in one place. ###Code import glob import logging import multiprocessing import os.path from concurrent.futures import ProcessPoolExecutor from subprocess import CalledProcessError import f90nml import matplotlib.pyplot as plt import numpy as np from openscm_runner.adapters.magicc7._parallel_process import _parallel_process from scmdata import df_append from _magicc_instances import _MagiccInstances from tqdm.autonotebook import tqdm from matplotlib.lines import Line2D import seaborn as sns logger = logging.getLogger() logger.setLevel(logging.INFO) stderr_info_handler = logging.StreamHandler() formatter = logging.Formatter("%(name)s - %(levelname)s: %(message)s") stderr_info_handler.setFormatter(formatter) logger.addHandler(stderr_info_handler) ###Output _____no_output_____ ###Markdown Config ###Code # how many MAGICC workers to use NWORKERS = 4 # where should MAGICC copies be made MAGICC_ROOT_DIR = os.path.expanduser(os.path.join( "")) MAGICC_ROOT_DIR data_path = '' plots_path = '' # where is the MAGICC executable to copy os.environ["MAGICC_EXECUTABLE_6"] = os.path.expanduser(os.path.join()) os.environ["MAGICC_EXECUTABLE_6"] ###Output _____no_output_____ ###Markdown Parallel setup ###Code shared_manager = multiprocessing.Manager() shared_dict = shared_manager.dict() if not os.path.isdir(MAGICC_ROOT_DIR): os.makedirs(MAGICC_ROOT_DIR) def init_magicc_worker(dict_shared_instances, root_dir): logger.debug("Initialising process %s", multiprocessing.current_process()) logger.debug("Existing instances %s", dict_shared_instances) def _run_func(magicc, cfg): try: scenario = cfg.pop("scenario") res = magicc.run(**cfg) res.set_meta(cfg["run_id"], "run_id") res.set_meta(scenario, "scenario") return res except CalledProcessError as e: # Swallow the exception, but return None logger.debug("magicc run failed: {} (cfg: {})".format(e.stderr, cfg)) return None instances = _MagiccInstances(existing_instances=shared_dict) def _execute_run(cfg, run_func, setup_func): magicc = instances.get(root_dir=MAGICC_ROOT_DIR, init_callback=setup_func) return run_func(magicc, cfg) def make_runs_list(cfgs): """ Turn the configs into a list which can be run in parallel. Assigns ``run_id`` for each run if it's not already there. """ out = [ { "cfg": {**{"run_id": i}, **cfg}, "run_func": _run_func, "setup_func": _setup_func, } for i, cfg in enumerate(cfgs) ] if not all(["scenario" in c["cfg"] for c in out]): raise KeyError("Please include a key 'scenario' in each config") return out ###Output _____no_output_____ ###Markdown Modify general MAGICC setup ###Code def _setup_func(magicc): logger.info( "Setting up MAGICC worker in %s", magicc.root_dir, ) magicc.set_config( # can set config to be used in all runs here e.g. # out_forcing=1 # OUT_CARBONCYCLE = 1, # OUT_FORCING = 1, RF_TOTAL_CONSTANTAFTERYR = 2500, RF_TROPOZ_CONSTANTAFTERYR = 2500, RF_STRATOZ_CONSTANTAFTERYR = 2500, # FILE_TUNINGMODEL_2 = '', #C4MIP_UVIC ) magicc.set_years( # modify start- and endyear endyear=2500 ) ###Output _____no_output_____ ###Markdown Runs First we need to get all our configs as a list of dictionaries, like the below. 
###Code # fetch the 600 probabilistic parameter sets from the MAGICC run directory rundir="" rundir_files = os.listdir(rundir) probabilistic_files = [x for x in rundir_files if "MAGTUNE_DRAWNSET_CDF_RogeljIPCCrepresent_" in x ] # one could also load the configs from the probabilistic sets using f90nml # choose scenario scenarios = [""] # load probabilistic sets cfgs = [] for scen in scenarios: for f in probabilistic_files: nml = f90nml.read(rundir+f)["nml_allcfgs"] # add scenario information nml["file_emissionscenario"]=scen nml["scenario"]=scen.replace(".SCEN", "") # append cfgs.append(nml) #cfgs = [ # { # "core_climatesensitivity": cs, # "rf_cloud_albedo_aer_wm2": rfcloud, # "file_emissionscenario": scen, # "scenario": scen.replace(".SCEN", "") # } # for cs, rfcloud in zip( # np.round(np.linspace(2, 6, 50), 2), # np.round(np.linspace(-0.2, -1.5, 50), 2) # ) # for scen in ["RCP26.SCEN", "RCP45.SCEN"] #] #cfgs[:1] runs = make_runs_list(cfgs) #runs[:1] try: pool = ProcessPoolExecutor( max_workers=NWORKERS, initializer=init_magicc_worker, initargs=(shared_dict, MAGICC_ROOT_DIR), ) res_raw = _parallel_process( func=_execute_run, configuration=runs, pool=pool, config_are_kwargs=True, front_serial=2, front_parallel=2, ) res = df_append([r for r in res_raw if r is not None]) finally: instances.cleanup() shared_manager.shutdown() pool.shutdown() temp_world = res.filter(variable="Surface Temperature", region = 'World').process_over("run_id", "median").T rf_world = res.filter(variable="Radiative Forcing", region = 'World').process_over("run_id", "median").T em_world = res.filter(variable="KYOTOGHGS_GWPEMIS", region = 'World').process_over("run_id", "median").T temp_world_17 = res.filter(variable="Surface Temperature", region = 'World').process_over("run_id", operation="quantile", q=0.17).T temp_world_83 = res.filter(variable="Surface Temperature", region = 'World').process_over("run_id", operation="quantile", q=0.83).T rf_world_17 = res.filter(variable="Radiative Forcing", region = 'World').process_over("run_id", operation="quantile", q=0.17).T rf_world_83 = res.filter(variable="Radiative Forcing", region = 'World').process_over("run_id", operation="quantile", q=0.83).T temp_world.to_csv(data_path + 'median_temperatures_csv/' + 'temp_med_NDC5_SCa.csv') temp_world_17.to_csv(data_path + 'quantile_temperatures_csv/' + 'temp_q17_NDC5_SCa.csv') temp_world_83.to_csv(data_path + 'quantile_temperatures_csv/' + 'temp_q83_NDC5_SCa.csv') rf_world.to_csv(data_path + 'median_rf_csv/' + 'rf_med_NDC5_SCa.csv') rf_world_17.to_csv(data_path + 'quantile_rf_csv/' + 'rf_q17_NDC5_SCa.csv') rf_world_83.to_csv(data_path + 'quantile_rf_csv/' + 'rf_q83_NDC5_SCa.csv') em_world.to_csv(data_path + 'em_world.csv') ###Output _____no_output_____
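###Markdown process_over above comes from scmdata; the following is a minimal sketch of the same median / 17-83% quantile summary with plain pandas, on a synthetic 600-member ensemble standing in for the MAGICC output. ###Code
import numpy as np
import pandas as pd

# Synthetic ensemble: 600 runs of a yearly global-mean temperature series.
rng = np.random.default_rng(0)
years = np.arange(2020, 2101)
ensemble = pd.concat(
    [pd.DataFrame({
        'year': years,
        'run_id': run_id,
        'temperature': np.linspace(1.2, 2.5, len(years)) + rng.normal(0, 0.15, len(years)),
    }) for run_id in range(600)],
    ignore_index=True,
)

# Median and 17/83% quantiles across run_id, per year -- the plain-pandas
# counterpart of process_over("run_id", ...) used above.
summary = ensemble.groupby('year')['temperature'].quantile([0.17, 0.5, 0.83]).unstack()
print(summary.head())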
Fundamentals_of_Phython.ipynb
###Markdown Assignment Operators ###Code k = 2 l=3 k+=3 #same as k=k+3 print(k) print(1>>l) ###Output 5 0 ###Markdown Boolean Operators ###Code k=2 l=3 print(k>>2)#shift right twice print(k<<2)#shift left twice ###Output 0 8 ###Markdown Relational Operators ###Code print(v>k) #v=4, k=5 print(v==k) ###Output True False ###Markdown Logical Operators ###Code print(v<k and k==k) print(v<k or k==v) print(not (v<k or k==v)) ###Output False False True ###Markdown Identity Operators ###Code print(v is k) print(v is not k) ###Output False True ###Markdown Fundamentals of Phython Phython Indentation ###Code if 5>2: print("Yes") ###Output Yes ###Markdown Phython Comment ###Code #This is a comment print("Hello, World") ###Output Hello, World ###Markdown Phython variable ###Code x = "sally" a=0 a,b,c=0,1,2 print(x) print(a) print(b) print(c) print(a,b,c) ###Output sally 0 1 2 0 1 2 ###Markdown Casting ###Code d = 4 d= int(4) print(d) ###Output 4 ###Markdown Type () Function ###Code d = 4 d= int(4) print(type(d)) ###Output <class 'int'> ###Markdown Double quotes and single quotes ###Code #y="Ana" #print(y) Y= 'Ana' y= 'Robert' print(y) print(Y) ###Output Robert Ana ###Markdown Multiple Variebles ###Code #x='Tony' k=l=m="four" print(k) print(l) print(m) print(k,l,m) ###Output four four four four four four ###Markdown Output Variable ###Code print('Phython programming is enjoying') h="enjoying" p="Phython Programming is " print("Phython programming is " +h) print(p+""+""+h) ###Output Phython programming is enjoying Phython programming is enjoying Phython Programming is enjoying ###Markdown Arithmetic Operations ###Code print(c+d) #c=2, d=4 print(d-c) print(d*c) print(int(d/c)) print(d%c) print(3/2) #1.50 print(d**c) ###Output 6 2 8 2 0 1.5 16 ###Markdown Assignment Operators ###Code q+=5#same as q=q+5 print(q) #same as q=q+5, q=10+5=15 ###Output 15 ###Markdown Boclean logic ###Code s=10 print(s^2) print(s|2) ###Output 8 10 ###Markdown Comparison Operators ###Code print(s>q) print(s==s) print(q==q) ###Output False True True ###Markdown Logical Operators ###Code s>q and s==s s>q or s==s ###Output _____no_output_____ ###Markdown Python Variables ###Code x = float(1) a, b = 0, -1 a, b, c = "Ccc", "Charry", "Mae" print('This a sample') print(a) print(c) ###Output This a sample Ccc Mae ###Markdown Casting ###Code print(x) ###Output 1.0 ###Markdown Type() Function ###Code y = "Charry" print(type(y)) print(type(x)) ###Output <class 'str'> <class 'float'> ###Markdown Double quotes and Single quotes ###Code h = "Cmmp" v = 4 v = 5 print(h) print(v) print(v+1) ###Output Cmmp 5 6 ###Markdown Multiple Variables ###Code x,y,z="one", "two", "five" print(x) print(y) print(z) print(x,y,z) ###Output one two five one two five ###Markdown One Value to Multiple Variables ###Code x = y = z ="Names" print(x,y,z) ###Output Names Names Names ###Markdown Output Variables ###Code x= "fun" print("Python is " + x) x = "Hello" y = "World" print(x+""+" "+y) ###Output Python is fun Hello World ###Markdown Arithmetic Operations ###Code a = 1 b = 2 c = 4 print(a+b) print(a-b) print(a*c) print(int(c/b)) print(3/b) print(3%b) print(3//b) print(3**4) ###Output 3 -1 4 2 1.5 1 1 81
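###Markdown The shift results above are easier to interpret in binary; here is a tiny illustrative sketch that prints the same operations together with their bit patterns. ###Code
# Print the shift examples above together with their bit patterns.
for value, shift in [(2, 2), (1, 3)]:
    print(f"{value:04b} >> {shift} = {value >> shift} ({value >> shift:04b})")
    print(f"{value:04b} << {shift} = {value << shift} ({value << shift:04b})")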
Post_Group/correlation_graphs.ipynb
###Markdown Import Database and Libraries ###Code import pandas as pd import numpy as np import statsmodels import statsmodels.api as sm import scipy.stats as stats import matplotlib.pyplot as plt # import the csv file with JUST the politicians post postDB = pd.read_csv(r"/content/postDB.csv", engine='python') df_post = pd.DataFrame(data=postDB) df_post ###Output /usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead. import pandas.util.testing as tm ###Markdown Trial and Error (To be ignored): ###Code hate_comments = df_post["c_Emo_Neg"] #hate_comments hate_posts = df_post["p_Emo_Neg"] #hate_posts corr = np.corrcoef(hate_posts, hate_comments) plt.plot(corr) print(corr) corr_pearson = hate_posts.corr(hate_comments) plt.plot(corr_pearson) print(corr_pearson) lin_reg = stats.linregress(hate_posts, hate_comments) plt.plot(lin_reg) ###Output _____no_output_____ ###Markdown Useful Part: Negative Emotions ###Code correlation_coeff = np.corrcoef(hate_posts, hate_comments) display(correlation_coeff) plt.style.use('ggplot') plt.scatter(hate_posts, hate_comments) plt.show() hate_corr = postDB.loc[postDB['c_rating']=='hate'] hate_corr correlation_coeff_2 = np.corrcoef(hate_corr["p_Emo_Neg"], hate_corr["c_Emo_Neg"]) display(correlation_coeff_2) plt.style.use('ggplot') plt.scatter(hate_corr["p_Emo_Neg"], hate_corr["c_Emo_Neg"]) plt.show() problematic_corr = postDB.loc[postDB['c_rating']=='problematico'] problematic_corr correlation_coeff_3 = np.corrcoef(problematic_corr["p_Emo_Neg"], problematic_corr["c_Emo_Neg"]) display(correlation_coeff_3) plt.style.use('ggplot') plt.scatter(problematic_corr["p_Emo_Neg"], problematic_corr["c_Emo_Neg"]) plt.show() neg_corr = postDB.loc[postDB['c_rating']=='negativo'] neg_corr correlation_coeff_4 = np.corrcoef(neg_corr["p_Emo_Neg"], neg_corr["c_Emo_Neg"]) display(correlation_coeff_4) plt.style.use('ggplot') plt.scatter(neg_corr["p_Emo_Neg"], neg_corr["c_Emo_Neg"]) plt.show() pos_corr = postDB.loc[postDB['c_rating']=='positivo'] pos_corr correlation_coeff_4 = np.corrcoef(pos_corr["p_Emo_Neg"], pos_corr["c_Emo_Neg"]) display(correlation_coeff_4) plt.style.use('ggplot') plt.scatter(pos_corr["p_Emo_Neg"], pos_corr["c_Emo_Neg"]) plt.show() ###Output _____no_output_____ ###Markdown Positive Emotions ###Code #No rating, general correlation_coeff_5 = np.corrcoef(df_post["c_Emo_Pos"] , df_post["c_Emo_Pos"] ) display(correlation_coeff) plt.style.use('ggplot') plt.scatter(hate_posts, hate_comments) plt.show() #Hate rating hate_corr = postDB.loc[postDB['c_rating']=='hate'] hate_corr correlation_coeff_6 = np.corrcoef(hate_corr["p_Emo_Pos"], hate_corr["c_Emo_Pos"]) display(correlation_coeff_6) plt.style.use('ggplot') plt.scatter(hate_corr["p_Emo_Pos"], hate_corr["c_Emo_Pos"]) plt.show() #Negative Rating neg_corr = postDB.loc[postDB['c_rating']=='negativo'] neg_corr correlation_coeff_7 = np.corrcoef(neg_corr["p_Emo_Pos"], neg_corr["c_Emo_Pos"]) display(correlation_coeff_7) plt.style.use('ggplot') plt.scatter(neg_corr["p_Emo_Pos"], neg_corr["c_Emo_Pos"]) plt.show() #Positive rating pos_corr = postDB.loc[postDB['c_rating']=='positivo'] pos_corr correlation_coeff_8 = np.corrcoef(pos_corr["p_Emo_Pos"], pos_corr["c_Emo_Pos"]) display(correlation_coeff_8) plt.style.use('ggplot') plt.scatter(pos_corr["p_Emo_Pos"], pos_corr["c_Emo_Pos"]) plt.show() ###Output _____no_output_____
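###Markdown np.corrcoef above gives only the coefficient; a minimal sketch of adding a significance test with scipy.stats.pearsonr, run on synthetic columns standing in for p_Emo_Neg and c_Emo_Neg. ###Code
import numpy as np
import scipy.stats as stats

# Synthetic stand-in for the post / comment negative-emotion columns.
rng = np.random.default_rng(7)
post_neg = rng.uniform(0, 10, size=200)
comment_neg = 0.4 * post_neg + rng.normal(0, 2, size=200)

# pearsonr returns the correlation coefficient together with a p-value.
r, p = stats.pearsonr(post_neg, comment_neg)
print(f"r = {r:.3f}, p = {p:.2e}")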
azure_comms.ipynb
###Markdown Send an SMS with Python in a Jupyter NotebookTODO: Finish up the header docs ###Code #We will install the dependencies delineated in requirements.txt !pip install -r requirements.txt ###Output _____no_output_____ ###Markdown Setup your Azure Communication ResourceIn the cell below we'll setup your ACS resource for you. Just specify your - Subscription ID Guid- Desired Azure Resource Group name- Desired Azure Communications Resource Name ###Code RESOURCE_GROUP="<RESOURCE_GROUP_NAME>" SUBSCRIPTION_ID="<SUBSCRIPTION_ID_GUID>" AZCOMMS_RESOURCE="<ACS_RESOURCE_NAME>" #Because `az communication` is an extension, we can avoid errors with dynamic install !az config set extension.use_dynamic_install=yes_without_prompt #Let's create some Azure resources !az group create --name {RESOURCE_GROUP} --location westus2 --subscription={SUBSCRIPTION_ID} !az communication create --name {AZCOMMS_RESOURCE} --location "Global" --data-location "United States" --resource-group {RESOURCE_GROUP} !az communication list --resource-group {RESOURCE_GROUP} !echo "Your connection string is: " !az communication list-key --name {AZCOMMS_RESOURCE} --resource-group {RESOURCE_GROUP} --subscription {SUBSCRIPTION_ID} --query "primaryConnectionString" --only-show-errors ###Output _____no_output_____ ###Markdown Find your Connection String and Phone NumberIn in order to leverage the ACS SMS service, you'll need to find your connection string. You can do that by going to the [Azure Portal](https://portal.azure.com) and navigating to the service that we created in the script above. ###Code #Grab the connection string from the Azure CLI. #TODO: Fix this wonky way of grabbing passing the string to the Jupyter Kernel CONNECTION_STRING_QUERY = !az communication list-key --name {AZCOMMS_RESOURCE} --resource-group {RESOURCE_GROUP} --subscription {SUBSCRIPTION_ID} --query "primaryConnectionString" --only-show-errors CONNECTION_STRING = "" if CONNECTION_STRING_QUERY: if CONNECTION_STRING_QUERY[0]: CONNECTION_STRING = CONNECTION_STRING_QUERY[0] import os import json from azure.communication.sms import SmsClient #Navigate to the Azure Portal and Look at the Keys section to grab your connection string CONNECTION_STRING="""<ACS_CONNECTION_STRING>""" try: sms_client = SmsClient.from_connection_string(CONNECTION_STRING) from_phone_number="<FROM_PHONE_NUMBER>" to_phone_number="<TO_PHONE_NUMBER>" # Call send() with sms values sms_responses = sms_client.send( from_=from_phone_number, to=to_phone_number, #Grab your ACS phone number in the Azure Portal > ACS Resource > Tools > Phone Number message="Hello world from Azure Communications Service ❤️", enable_delivery_report=True, # optional property tag="hello-acs") # optional property if sms_responses: # Print and make sms_responses serializable print(json.dumps( sms_responses, indent=4, default=lambda x: x.__dict__ )) except Exception as ex: print('Exception:') print(ex) ###Output _____no_output_____
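###Markdown A minimal sketch that reads the connection string from an environment variable instead of pasting it into the notebook, and checks each send result; the variable name ACS_CONNECTION_STRING is an arbitrary choice, and the result attributes assume the current azure-communication-sms SmsSendResult. ###Code
import os
from azure.communication.sms import SmsClient

# Arbitrary environment variable name -- export it before starting Jupyter.
connection_string = os.environ["ACS_CONNECTION_STRING"]
sms_client = SmsClient.from_connection_string(connection_string)

sms_responses = sms_client.send(
    from_="<FROM_PHONE_NUMBER>",
    to="<TO_PHONE_NUMBER>",
    message="Hello world from Azure Communications Service",
    enable_delivery_report=True,
    tag="hello-acs")

# Each SmsSendResult reports whether that recipient's message was accepted.
for result in sms_responses:
    print(result.to, result.successful, result.message_id)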
notebooks/003-chinatown-citation-analysis.ipynb
###Markdown An analysis example: How many citations were there of each type in Chinatown? It's all very easy with lovelyrita. ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt from shapely.geometry import Polygon import pandas as pd import geopandas as gpd import folium from lovelyrita.data import read_data, column_map, to_geodataframe, write_shapefile from lovelyrita.clean import get_datetime, clean, impute_missing_times column_map['[latitude]'] = 'latitude' column_map['[longitude]'] = 'longitude' plt.style.use('seaborn') ###Output _____no_output_____ ###Markdown Create neighborhood shape ###Code # hand draw Chinatown boundary geometry = Polygon([[-122.272536, 37.802353], [-122.274339, 37.799497], [-122.269414, 37.797573], [-122.267536, 37.800388]]) neighborhood = gpd.GeoDataFrame({'geometry': [geometry,], 'name': ['Chinatown',]}, crs={'init' :'epsg:4326'}) ###Output _____no_output_____ ###Markdown Load citations ###Code data_paths = ["/data/lovely-rita/new/2012complete-output.csv", "/data/lovely-rita/new/2013complete-output.csv", "/data/lovely-rita/new/2014complete-output.csv", "/data/lovely-rita/new/2015complete-output.csv", "/data/lovely-rita/new/2016complete-output-2.csv" ] citations = read_data(data_paths, column_map=column_map, clean=True) citations = to_geodataframe(citations) ###Output _____no_output_____ ###Markdown Select Chinatown citations ###Code neighb = neighborhood.geometry.iloc[0] selected_indices = [neighb.contains(c) for c in citations.geometry.values] selected_citations = citations.iloc[selected_indices] # show neighborhood boundary on map map = folium.Map([neighb.centroid.y, neighb.centroid.x], zoom_start=16) map.choropleth(neighborhood.to_crs({'init': 'epsg:4326'}).to_json(), fill_opacity=0.1, line_weight=3) map order = selected_citations.groupby('violation_desc_long').street.count().sort_values(ascending=False).index for year in [2012, 2013, 2014, 2015, 2016]: year_index = pd.to_datetime(selected_citations.ticket_issue_datetime).dt.year == year year_citations = selected_citations.loc[year_index] counts = year_citations.groupby('violation_desc_long').street.count() counts = counts[order] fig, ax = plt.subplots(figsize=(10, 6)) ax = counts.plot(kind='bar', title='Citations by type in Chinatown ({})'.format(year), ylim=[0, 900]) _ = ax.set_xlabel('Citation description') _ = ax.set_ylabel('Number of citations') fig.subplots_adjust(top=0.95, bottom=0.35) fig.savefig('/data/lovely-rita/figures/chinatown_citations_{}.png'.format(year)) ###Output _____no_output_____ ###Markdown Save to a shapefile ###Code write_shapefile(selected_citations, 'chinatown-citations.shp') ###Output _____no_output_____
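###Markdown Selecting the citations with a Python list comprehension over neighb.contains works, but is slow for large frames; a minimal sketch of the vectorized GeoSeries.within equivalent, reusing the Chinatown boundary coordinates with two toy points in place of the citation geometries. ###Code
import geopandas as gpd
from shapely.geometry import Point, Polygon

# The Chinatown boundary from above, with toy points standing in for the citations.
boundary = Polygon([[-122.272536, 37.802353], [-122.274339, 37.799497],
                    [-122.269414, 37.797573], [-122.267536, 37.800388]])
points = gpd.GeoSeries([Point(-122.271, 37.800), Point(-122.260, 37.810)], crs="EPSG:4326")

# GeoSeries.within is the vectorized form of [boundary.contains(p) for p in points].
mask = points.within(boundary)
print(mask)
print(points[mask])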
Project/project_MalJPEG.ipynb
###Markdown Task -Project MalJPEG Imports ###Code # Imports import numpy as np # Support for large arrays and matrices, along with high-level mathematical functions. import seaborn as sns # Graphing/Plotting module. import pandas as pd # CSV handling with operations on tabular data. import lightgbm as lgb from xgboost import XGBClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report,confusion_matrix, accuracy_score, roc_auc_score from sklearn.preprocessing import LabelBinarizer, StandardScaler ###Output _____no_output_____ ###Markdown Read Data and Preprocess it to fit into DeepMAL model ###Code # Read Data dataset_type = 'markers_image.csv' # other options: 'markers_image.csv' or 'metadata.csv' filepath = f'./datasets/{dataset_type}' df = pd.read_csv(filepath) ###Output _____no_output_____ ###Markdown Preprocess the data ###Code label_type = 'label' # options: 'label' X = np.stack([ df['Marker_EOI_content_after_num'], df['File_markers_num'], df['File_size'], df['Marker_APP1_size_max'], df['Marker_APP12_size_max'], df['Marker_COM_size_max'], df['Marker_DHT_num'], df['Marker_DHT_size_max'], df['Marker_DQT_num'], df['Marker_DQT_size_max'] ]).T y = np.stack(df[label_type]) scaler_mix = StandardScaler() scaler_mix.fit(X) X = scaler_mix.transform(X) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=1666, stratify=y) ###Output _____no_output_____ ###Markdown Train/Fit ###Code # LGBM classifier lgb_clf = lgb.LGBMClassifier(n_estimators=500, n_jobs=4, random_state=2021, boosting_type='gbdt', class_weight=None, colsample_bytree=1.0, importance_type='split', learning_rate=0.1, max_depth=-1, min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0, num_leaves=31, objective=None, reg_alpha=0.0, reg_lambda=0.0, subsample=1.0, subsample_for_bin=200000, subsample_freq=0) lgb_clf.fit(X_train, y_train) # Decision Tree classifier dtc_clf = DecisionTreeClassifier(max_depth=30, random_state=1914) dtc_clf.fit(X_train, y_train) # Random Forest classifier rfc_clf = RandomForestClassifier(n_estimators=500, max_depth=30, n_jobs=4, random_state=2021) rfc_clf.fit(X_train, y_train) # XGB classifier xgb_clf = XGBClassifier(n_estimators=500, max_depth=30, n_jobs=4, eval_metric='mlogloss', random_state=1941, use_label_encoder=False) xgb_clf.fit(X_train, y_train) ###Output _____no_output_____ ###Markdown Test/Predict ###Code lgb_predictions = lgb_clf.predict(X_test) dtc_predictions = dtc_clf.predict(X_test) rfc_predictions = rfc_clf.predict(X_test) xgb_predictions = xgb_clf.predict(X_test) true_labels = y_test cf_matrix = confusion_matrix(true_labels, lgb_predictions) accuracy = accuracy_score(true_labels, lgb_predictions) print("accuracy score: {0:.2f}%".format(accuracy*100)) print("TPR: {0:.3f}%".format(cf_matrix[1][1]/(cf_matrix[0][1]+cf_matrix[1][1]))) print("FPR: {0:.3f}%".format(cf_matrix[0][1]/(cf_matrix[0][0]+cf_matrix[1][0]))) print("AUC&ROC", roc_auc_score(true_labels, lgb_predictions)) print(classification_report(true_labels, lgb_predictions)) heatmap = sns.heatmap(cf_matrix, annot=True, cmap='Blues', fmt='g', xticklabels=np.unique(true_labels), yticklabels=np.unique(true_labels)) cf_matrix = confusion_matrix(true_labels, dtc_predictions) accuracy = accuracy_score(true_labels, dtc_predictions) print("accuracy score: {0:.2f}%".format(accuracy*100)) print("TPR: 
{0:.3f}%".format(cf_matrix[1][1]/(cf_matrix[0][1]+cf_matrix[1][1]))) print("FPR: {0:.3f}%".format(cf_matrix[0][1]/(cf_matrix[0][0]+cf_matrix[1][0]))) print("AUC&ROC", roc_auc_score(true_labels, dtc_predictions)) print(classification_report(true_labels, dtc_predictions)) heatmap = sns.heatmap(cf_matrix, annot=True, cmap='Blues', fmt='g', xticklabels=np.unique(true_labels), yticklabels=np.unique(true_labels)) cf_matrix = confusion_matrix(true_labels, rfc_predictions) accuracy = accuracy_score(true_labels, rfc_predictions) print("accuracy score: {0:.2f}%".format(accuracy*100)) print("TPR: {0:.3f}%".format(cf_matrix[1][1]/(cf_matrix[0][1]+cf_matrix[1][1]))) print("FPR: {0:.3f}%".format(cf_matrix[0][1]/(cf_matrix[0][0]+cf_matrix[1][0]))) print("AUC&ROC", roc_auc_score(true_labels, rfc_predictions)) print(classification_report(true_labels, rfc_predictions)) heatmap = sns.heatmap(cf_matrix, annot=True, cmap='Blues', fmt='g', xticklabels=np.unique(true_labels), yticklabels=np.unique(true_labels)) cf_matrix = confusion_matrix(true_labels, xgb_predictions) accuracy = accuracy_score(true_labels, xgb_predictions) print("accuracy score: {0:.2f}%".format(accuracy*100)) print("TPR: {0:.3f}%".format(cf_matrix[1][1]/(cf_matrix[0][1]+cf_matrix[1][1]))) print("FPR: {0:.3f}%".format(cf_matrix[0][1]/(cf_matrix[0][0]+cf_matrix[1][0]))) print("AUC&ROC", roc_auc_score(true_labels, xgb_predictions)) print(classification_report(true_labels, xgb_predictions)) heatmap = sns.heatmap(cf_matrix, annot=True, cmap='Blues', fmt='g', xticklabels=np.unique(true_labels), yticklabels=np.unique(true_labels)) print(lgb_clf.feature_importances_) print(dtc_clf.feature_importances_) print(rfc_clf.feature_importances_) print(xgb_clf.feature_importances_) ###Output [0.8606652 0.00681743 0.0038633 0.02452504 0.00676591 0.03024476 0.02275542 0.03322048 0.00527386 0.00586861]
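###Markdown For reference, a minimal sketch of the textbook TPR/FPR definitions using scikit-learn's confusion-matrix layout, where confusion_matrix(y_true, y_pred).ravel() unpacks as TN, FP, FN, TP for binary labels; the toy labels below are not the MalJPEG data. ###Code
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy binary labels only, to illustrate the index convention.
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])
y_pred = np.array([0, 1, 0, 1, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr = tp / (tp + fn)   # true positive rate (recall / sensitivity)
fpr = fp / (fp + tn)   # false positive rate
print(f"TPR = {tpr:.3f}, FPR = {fpr:.3f}")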
nbs/00_utils.logger.ipynb
###Markdown Logger> Setups `logger`: name, level, format etc. ###Code # export import functools import logging import sys from fastcore.all import ifnone from pytorch_lightning.utilities import rank_zero_only from termcolor import colored # export class _ColorfulFormatter(logging.Formatter): def __init__(self, *args, **kwargs): self._root_name = kwargs.pop("root_name") + "." self._abbrev_name = kwargs.pop("abbrev_name", "") if len(self._abbrev_name): self._abbrev_name = self._abbrev_name + "." super(_ColorfulFormatter, self).__init__(*args, **kwargs) def formatMessage(self, record): record.name = record.name.replace(self._root_name, self._abbrev_name) log = super(_ColorfulFormatter, self).formatMessage(record) if record.levelno == logging.WARNING: prefix = colored("WARNING", "red", attrs=["blink"]) elif record.levelno == logging.ERROR or record.levelno == logging.CRITICAL: prefix = colored("ERROR", "red", attrs=["blink", "underline"]) else: return log return prefix + " " + log # export @functools.lru_cache() # so that calling setup_logger multiple times won't add many handlers def setup_logger(distributed_rank=0, *, color=True, name="gale", level=logging.DEBUG): """ Initialize the gale logger and set its verbosity level to `level`. """ logger = logging.getLogger(name) logger.setLevel(level) logger.propagate = False abbrev_name = name plain_formatter = logging.Formatter( "[%(asctime)s] %(name)s %(levelname)s: %(message)s", datefmt="%m/%d %H:%M:%S" ) # stdout logging: master only if distributed_rank == 0: ch = logging.StreamHandler(stream=sys.stdout) ch.setLevel(logging.DEBUG) if color: formatter = _ColorfulFormatter( colored("[%(asctime)s %(name)s]: ", "green") + "%(message)s", datefmt="%m/%d %H:%M:%S", root_name=name, abbrev_name=str(abbrev_name), ) else: formatter = plain_formatter ch.setFormatter(formatter) logger.addHandler(ch) return logger setup_logger() logger = logging.getLogger("gale.utils.logger") logger.info("This is a INFO message") logger.debug("This is a DEBUG message") logger.warning("This is a WARNING message") logger.error("This is a ERROR message") # export @rank_zero_only def log_main_process(logger, lvl, msg): """ Logs `msg` using `logger` only on the main process """ logger.log(lvl, msg) log_main_process(logger, logging.INFO, "This logs only on the main process") log_main_process(logger, logging.ERROR, "This logs only on the main process") log_main_process(logger, logging.WARNING, "This logs only on the main process") ###Output [04/30 16:16:52 gale.utils.logger]: This logs only on the main process ERROR [04/30 16:16:52 gale.utils.logger]: This logs only on the main process WARNING [04/30 16:16:52 gale.utils.logger]: This logs only on the main process ###Markdown Export - ###Code # hide notebook2script("00_utils.logger.ipynb") ###Output Converted 00_utils.logger.ipynb.
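###Markdown A short usage sketch: once setup_logger() has attached the handler to the "gale" logger, downstream modules only request child loggers and their records propagate up to that single handler; "gale.models.resnet" is just an illustrative module name. ###Code
import logging

# setup_logger() (defined above) attaches the colored stream handler to "gale" once;
# lru_cache means calling it again is a no-op rather than adding duplicate handlers.
setup_logger()

# Child loggers need no handlers of their own -- records propagate up to "gale".
child = logging.getLogger("gale.models.resnet")
child.info("child loggers reuse the root handler")
log_main_process(child, logging.WARNING, "rank-zero logging works the same way")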
myexamples/pylab/order_of_mag.ipynb
###Markdown $$ t_{despin,p} = P_{orb} \left( \frac{a}{R_p} \right)^\frac{9}{2} \left( \frac{M_p }{M_s} \right)^\frac{3}{2}\left( \frac{\mu Q}{e_{g,p}}\right) \sqrt{\frac{M_p + M_s}{M_s}}$$$$ t_{despin,s} = P_{orb} \left( \frac{a}{R_s} \right)^\frac{9}{2} \left( \frac{M_s }{M_p} \right)^\frac{3}{2}\left( \frac{\mu Q}{e_{g,s}}\right) \sqrt{\frac{M_p + M_s}{M_p}}$$ ###Code #for primary, time to spin down by tides def t_despin_1(): aratio = (a_o/R_1)**4.5 mratio = (M_1/M_2)**1.5 z = np.sqrt((M_1+M_2)/M_2) *(muQ/eg_1)*aratio*mratio*P_orbit print('t_dspin_1 ={:.1e} yr'.format(z/year)) return z td_1 = t_despin_1() #for secondary, time to spin down by tides def t_despin_2(): aratio = (a_o/R_2)**4.5 mratio = (M_2/M_1)**1.5 z = np.sqrt((M_1+M_2)/M_1) *(muQ/eg_2)*aratio*mratio*P_orbit print('t_dspin_2 ={:.1e} yr'.format(z/year)) return z td_2 = t_despin_2() print('ratio {:.1f}'.format( td_1/td_2)) ###Output t_dspin_1 =2.4e+11 yr t_dspin_2 =5.3e+08 yr ratio 464.2 ###Markdown $$\dot a_{tides} = 0.1 \left( \frac{R_p}{a} \right)^5 \left( \frac{e_{g,p}}{\mu Q} \right) \left(\frac{M_s}{M_p} \right)n a $$$$t_{a,tides} = \frac{a}{\dot a_{tides}} $$ ###Code # orbital semi-major axis drift due to tides def da_dt_tides(): aratio = (R_1/a_o)**5 na = n_o*a_o z = 0.1* (eg_1/muQ) * (M_2/M_1)* aratio*na # this is da/dt print('da/dt tides = {:.1e} cm/s'.format(z*100)) print('da/dt tides = {:.1e} m/s'.format(z)) t_a = a_o/z print('t_a,tides = {:.1e} yr'.format(t_a/year)) da_dt_tides() ###Output da/dt tides = 5.6e-13 cm/s da/dt tides = 5.6e-15 m/s t_a,tides = 6.7e+09 yr ###Markdown $$ \dot a_{BYORP} = \frac{3}{ 2 \pi} \left( \frac{M_s}{M_p}\right)^{-\frac{1}{3}}\frac{H_0 B}{\omega_{breakup} \rho_p R_p^2} a^\frac{3}{2} $$ ###Code BB = 1e-3 #BYORP coefficient # BYORP semi-major axis drift rate def a_BY(): mratio = M_2/M_1 z = 3.0/(2*np.pi)* H0*BB*a_o**1.5/(w_d*rho_1*R_1**2) * mratio**(-1.0/3.0) print('da/dt BYORP {:.2f} m/s'.format(z)) return z da_dt_BY =a_BY() ###Output da/dt BYORP 0.50 m/s ###Markdown $$ \dot \omega_{YORP} = \frac{F_\odot}{a_\odot^2} \frac{Y}{2 \pi \rho_p R_p^2} $$ ###Code Y=0.01 # YORP coeff # YORP spin up rate def dom_YORP(): dom = (Fnot/anot**2)*Y/(2*np.pi*rho_1*R_1**2) print('domdt YORP {:.2e} rad/s'.format(dom) ) return dom # YORP spin up timescale def t_YORP(): dom= dom_YORP() t_Y = om_breakup/dom print('t_YORP {:.2e} yr'.format(t_Y/year)) t_YORP() #dom = dom_YORP() from pyquaternion import Quaternion ###Output _____no_output_____
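###Markdown The cells above rely on constants (a_o, R_1, M_1, muQ, eg_1, ...) defined elsewhere in the notebook; here is a self-contained sketch that evaluates the first despin-time formula with assumed placeholder values. Every number below, including the e_g ~ G M^2/R^4 scaling used for the gravitational energy density, is an illustrative assumption rather than the notebook's actual setup. ###Code
import numpy as np

# Assumed, illustrative parameters for a small binary asteroid.
G = 6.674e-11            # m^3 kg^-1 s^-2
year = 3.156e7           # s
rho = 2000.0             # kg m^-3, assumed bulk density of both bodies
R_1, R_2 = 500.0, 150.0  # primary / secondary radii, m
a_o = 2000.0             # orbital semi-major axis, m
muQ = 1.0e11             # rigidity times tidal Q, Pa

M_1 = 4.0 / 3.0 * np.pi * rho * R_1**3
M_2 = 4.0 / 3.0 * np.pi * rho * R_2**3
n_o = np.sqrt(G * (M_1 + M_2) / a_o**3)   # mean motion
P_orbit = 2.0 * np.pi / n_o
eg_1 = G * M_1**2 / R_1**4                # assumed gravitational energy-density scale

# Same structure as t_despin_1() above.
t_despin_1 = (P_orbit * (a_o / R_1)**4.5 * (M_1 / M_2)**1.5
              * (muQ / eg_1) * np.sqrt((M_1 + M_2) / M_2))
print(f"t_despin_1 = {t_despin_1 / year:.1e} yr")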
LearnPython.ipynb
###Markdown Learn Programming by Computational ThinkingCoding is always the final part!Computational thinking is problems solving progress based on concepts from Computer Science.And then we get to algorithms which are what we need to communicate to computer so that we can use it to solve the problems. Finally, we got an understanding of what the computer is capable of doing, and we use that to develop a more structured way of expressing the algorithm and then we called that pseudo code.Programming is just the end of the computational thinking process. It's the act of expressing an algorithm using a syntax that the computer can understand. No matter what programming language you use, everything you've seen up until now about computational thinking, algorithms, and computer hardware, will stay the same. ###Code # print(4 ** 0.5) # py_list = [ 1, 2, 3] # print(py_list.remove(4)) ###Output _____no_output_____
notebooks/PotableWater.ipynb
###Markdown Potable Water Volumes ###Code %load_ext autoreload %autoreload 2 %run relativepath.py %run commonimports.py %run displayoptions.py potable_water_dataset = StatscanZip('https://www150.statcan.gc.ca/n1/en/tbl/csv/38100092-eng.zip?st=yQYQGvmD') potable_water = potable_water_dataset.get_data() potable_water_dataset.get_data(wide=False) potable_water.head() potable_column_subset = potable_water[['GEO', 'All source water types']] ###Output _____no_output_____ ###Markdown By Province ###Code potable_latest = potable_column_subset[potable_water.index == 2015] potable_latest water_by_province = potable_latest[~(potable_latest.GEO.str.match('.* region|Canada'))]\ .rename(columns={'All source water types':'Volume'}).sort_values('Volume', ascending=True).set_index('GEO') water_by_province import matplotlib.style as style style.use('fivethirtyeight') %matplotlib inline ax = water_by_province.Volume.plot.barh( figsize=(8,7), grid=False) ax.grid(False, axis='y') ax.set_title(label='Processed Drinking Water by Province 2015', fontsize=18) ax.xaxis.set_tick_params(labelsize=14) ax.yaxis.set_tick_params(labelsize=14) ax.set_xlabel('Million Cubic Meters', fontsize=16, fontweight='bold') ax.set_ylabel('') ax.text(s='2.7', x=30, y=-.2, size=8,color='darkblue') ax.text(s='5.4', x=30, y=.8, size=8, color='darkblue') ax.text(s='7.1', x=30, y=1.8, size=8, color='darkblue') ax.text(s='11.3', x=30, y=2.8, size=8, color='darkblue') ax.text(s='Source: Statscan', x=1400, y=.2, size=8) ax.get_ylim() ###Output _____no_output_____ ###Markdown Population ###Code population = pd.read_csv('../data/Provinces.csv') population.plot() ###Output _____no_output_____
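###Markdown The str.match exclusion above drops the national total and the aggregate regions in one pass; a minimal sketch of the same filter on a small hand-made table, since StatscanZip and the live download are not reproduced here and the volumes below are made up. ###Code
import pandas as pd

# Hand-made stand-in for the 2015 potable-water table.
table = pd.DataFrame({
    "GEO": ["Canada", "Atlantic region", "Ontario", "Quebec", "British Columbia"],
    "Volume": [5000.0, 400.0, 1600.0, 1400.0, 600.0],
})

# Keep only provinces: drop "Canada" and anything ending in " region".
provinces = table[~table["GEO"].str.match(r".* region|Canada")]
print(provinces.sort_values("Volume", ascending=True).set_index("GEO"))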
Re/SLP-Ch2.ipynb
###Markdown Regular Expressions Basic Regular Expression Patterns ###Code # neither S nor s re.compile(r'[^Ss]+').findall("Some strings.") # not a period re.compile(r'[^\.]+').findall("work") # Either e or ^ re.compile(r'[e^]+').findall("egg, ^") # a^b re.compile(r'[a^b]+').findall("hello, a^b.") # optional elements re.compile(r'colou?r').findall("color, colour") # an integer re.compile(r'[0-9][0-9]*').findall("2") # an integer re.compile(r'[0-9]+').findall("2") # any single character re.compile(r'beg.n').findall("begin, beg'n, begun") # begin and end # \. means . is a period not athe wildcard re.compile(r'dog\.$').findall("the dog.") # boundary re.compile(r'\bthe\b').findall("other, the, $they") ###Output _____no_output_____ ###Markdown Disjunction, Grouping, and Precedence ###Code # disjunction re.compile(r'cat|dog').findall("there are a cat and a dog.") # precedence re.compile(r'gupp(y|ies)').findall("guppy and guppies.") re.compile(r'Column [0-9]+ *').findall("Column 1 Column 2 Column 3 Column 4.") # () as a whole re.compile(r'(Column [0-9]+ *)*').findall("Column 1 Column 2 Column 3 Column 4.") # counters have a higher precedence than sequences, cannot match "theny" re.compile(r'the|any').findall("the, any, theny") ###Output _____no_output_____ ###Markdown A Simple Example ###Code # mathch word "the" print(re.compile(r'the').findall("the, The, the_, the25")) print(re.compile(r'[tT]he').findall("the, their, the_, the25")) print(re.compile(r'\b[tT]he\b').findall("the, their, the_, the25")) print(re.compile(r'[^a-zA-Z][tT]he[^a-zA-Z]').findall("the, their, the_, the25")) print(re.compile(r'(^|[^a-zA-Z])[tT]he([^a-zA-Z]|$)').findall("the, their, the_, the25")) ###Output ['the', 'the', 'the'] ['the', 'the', 'the', 'the'] ['the'] [' the_', ' the2'] [('', ','), (' ', '_'), (' ', '2')] ###Markdown A More Complex Example ###Code p = re.compile(r'\$[0-9]{0,3}(\.[0-9]+)?\b') for item in p.finditer("$199.9, one is $199.99. and the other is $199. the last is $1999999.99 . a$199.1"): print(item) p = re.compile(r'(^|\W)\$[0-9]{0,3}(\.[0-9]+)?\b') for item in p.finditer("$199.9, one is $199.99. and the other is $199. the last is $1999999.99 . 
a$199.1"): print(item) ###Output <_sre.SRE_Match object; span=(0, 6), match='$199.9'> <_sre.SRE_Match object; span=(14, 22), match=' $199.99'> <_sre.SRE_Match object; span=(40, 45), match=' $199'> <_sre.SRE_Match object; span=(58, 60), match=' $'> ###Markdown Regular Expression Substitution, Capture Groups, and ELIZA ###Code for item in re.compile(r'the (.*)er they were, the \1er they will be').finditer( "the bigger they were, the bigger they will be but not the bigger they were, the faster they will be."): print(item) for item in re.compile(r'the (.*)er they (.*), the \1er we \2').finditer( "the faster they ran, the faster we ran but not the faster they ran, the faster we ate."): print(item) # non-capturing for item in re.compile(r'(?:some|a few) (people|cats) like some \1').finditer( "a few cats like some cats but not some cats like some a few."): print(item) print(re.match("([abc])+", "abc").group()) print(re.match("(?:[abc])+", "abc").group()) print(re.match("([abc])+", "abc").groups()) print(re.match("(?:[abc])+", "abc").groups()) ###Output ('c',) () ###Markdown Lookahead assertions ###Code # 前向 test = re.compile(r'^(?=Volcano)[a-zA-Z]+') print(test.findall("Volcano I")) test = re.compile(r'^(?!Volcano)[a-zA-Z]+') print(test.findall("Volcano I")) # 后向 test = re.compile(r'^(?<=Volcano)[a-zA-Z]+') print(test.findall("Volcano I")) test = re.compile(r'^(?<!Volcano)[a-zA-Z]+') print(test.findall("Volcano I")) ###Output [] ['Volcano'] ###Markdown Regular Expressions Basic Regular Expression Patterns ###Code # neither S nor s re.compile(r'[^Ss]+').findall("Some strings.") # not a period re.compile(r'[^\.]+').findall("work") # Either e or ^ re.compile(r'[e^]+').findall("egg, ^") # a^b re.compile(r'[a^b]+').findall("hello, a^b.") # optional elements re.compile(r'colou?r').findall("color, colour") # an integer re.compile(r'[0-9][0-9]*').findall("2") # an integer re.compile(r'[0-9]+').findall("2") # any single character re.compile(r'beg.n').findall("begin, beg'n, begun") # begin and end # \. means . is a period not athe wildcard re.compile(r'dog\.$').findall("the dog.") # boundary re.compile(r'\bthe\b').findall("other, the, $they") ###Output _____no_output_____ ###Markdown Disjunction, Grouping, and Precedence ###Code # disjunction re.compile(r'cat|dog').findall("there are a cat and a dog.") # precedence re.compile(r'gupp(y|ies)').findall("guppy and guppies.") re.compile(r'Column [0-9]+ *').findall("Column 1 Column 2 Column 3 Column 4.") # () as a whole re.compile(r'(Column [0-9]+ *)*').findall("Column 1 Column 2 Column 3 Column 4.") # counters have a higher precedence than sequences, cannot match "theny" re.compile(r'the|any').findall("the, any, theny") ###Output _____no_output_____ ###Markdown A Simple Example ###Code # mathch word "the" print(re.compile(r'the').findall("the, The, the_, the25")) print(re.compile(r'[tT]he').findall("the, their, the_, the25")) print(re.compile(r'\b[tT]he\b').findall("the, their, the_, the25")) print(re.compile(r'[^a-zA-Z][tT]he[^a-zA-Z]').findall("the, their, the_, the25")) print(re.compile(r'(^|[^a-zA-Z])[tT]he([^a-zA-Z]|$)').findall("the, their, the_, the25")) ###Output ['the', 'the', 'the'] ['the', 'the', 'the', 'the'] ['the'] [' the_', ' the2'] [('', ','), (' ', '_'), (' ', '2')] ###Markdown A More Complex Example ###Code p = re.compile(r'\$[0-9]{0,3}(\.[0-9]+)?\b') for item in p.finditer("$199.9, one is $199.99. and the other is $199. the last is $1999999.99 . 
a$199.1"): print(item) p = re.compile(r'(^|\W)\$[0-9]{0,3}(\.[0-9]+)?\b') for item in p.finditer("$199.9, one is $199.99. and the other is $199. the last is $1999999.99 . a$199.1"): print(item) ###Output <_sre.SRE_Match object; span=(0, 6), match='$199.9'> <_sre.SRE_Match object; span=(14, 22), match=' $199.99'> <_sre.SRE_Match object; span=(40, 45), match=' $199'> <_sre.SRE_Match object; span=(58, 60), match=' $'> ###Markdown Regular Expression Substitution, Capture Groups, and ELIZA ###Code for item in re.compile(r'the (.*)er they were, the \1er they will be').finditer( "the bigger they were, the bigger they will be but not the bigger they were, the faster they will be."): print(item) for item in re.compile(r'the (.*)er they (.*), the \1er we \2').finditer( "the faster they ran, the faster we ran but not the faster they ran, the faster we ate."): print(item) # non-capturing for item in re.compile(r'(?:some|a few) (people|cats) like some \1').finditer( "a few cats like some cats but not some cats like some a few."): print(item) print(re.match("([abc])+", "abc").group()) print(re.match("(?:[abc])+", "abc").group()) print(re.match("([abc])+", "abc").groups()) print(re.match("(?:[abc])+", "abc").groups()) ###Output ('c',) () ###Markdown Lookahead assertions ###Code # 前向 test = re.compile(r'^(?=Volcano)[a-zA-Z]+') print(test.findall("Volcano I")) test = re.compile(r'^(?!Volcano)[a-zA-Z]+') print(test.findall("Volcano I")) # 后向 test = re.compile(r'^(?<=Volcano)[a-zA-Z]+') print(test.findall("Volcano I")) test = re.compile(r'^(?<!Volcano)[a-zA-Z]+') print(test.findall("Volcano I")) ###Output [] ['Volcano']
alg-KNN.ipynb
###Markdown SINGLE EXECUTION Applying to baseScaled ###Code Y = basePre['target'] x_train, x_test, y_train, y_test = train_test_split(baseScaled, Y, test_size=0.30, random_state=0) clf = knn(n_neighbors=n) clf.fit(x_train, y_train) # cross-validated accuracy for this base: store [mean, 2*std] sc = cross_val_score(clf, baseScaled, Y, cv=cv) accArray = np.array([[sc.mean(), sc.std()*2]]) ###Output _____no_output_____ ###Markdown Applying to basePCAInversa ###Code x_train, x_test, y_train, y_test = train_test_split(basePCAInversa, Y, test_size=0.30, random_state=0) clf = knn(n_neighbors=n) clf.fit(x_train, y_train) sc = cross_val_score(clf, basePCAInversa, Y, cv=cv) accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0) ###Output _____no_output_____ ###Markdown Applying to basePCAProporcional ###Code x_train, x_test, y_train, y_test = train_test_split(basePCAProporcional, Y, test_size=0.30, random_state=0) clf = knn(n_neighbors=n) clf.fit(x_train, y_train) sc = cross_val_score(clf, basePCAProporcional, Y, cv=cv) accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0) ###Output _____no_output_____ ###Markdown PCA with 70% ###Code x_train, x_test, y_train, y_test = train_test_split(basePca70, Y, test_size=0.30, random_state=0) clf = knn(n_neighbors=n) clf.fit(x_train, y_train) sc = cross_val_score(clf, basePca70, Y, cv=cv) accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0) ###Output _____no_output_____ ###Markdown PCA with 50% ###Code x_train, x_test, y_train, y_test = train_test_split(basePca50, Y, test_size=0.30, random_state=0) clf = knn(n_neighbors=n) clf.fit(x_train, y_train) sc = cross_val_score(clf, basePca50, Y, cv=cv) accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0) dfAcc = pd.DataFrame(accArray, columns=['mean', 'std'], index=None) dfAcc = (dfAcc*100).apply(np.floor) dfAcc # single() and bagging() come from the local plt helper module from plt import * single(dfAcc, 'knnSingle.png', '#191970', '#F08080') ###Output _____no_output_____ ###Markdown BAGGING with the best single model ###Code # bagging ensembles of the same KNN on baseScaled, increasing n_estimators clf = knn(n_neighbors=n) model = BaggingClassifier(clf, n_estimators=5, random_state=0) sc = cross_val_score(model, baseScaled, Y, cv=cv) accArray = np.array([[sc.mean(), sc.std()*2]]) clf = knn(n_neighbors=n) model = BaggingClassifier(clf, n_estimators=10, random_state=0) sc = cross_val_score(model, baseScaled, Y, cv=cv) accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0) clf = knn(n_neighbors=n) model = BaggingClassifier(clf, n_estimators=20, random_state=0) sc = cross_val_score(model, baseScaled, Y, cv=cv) accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0) clf = knn(n_neighbors=n) model = BaggingClassifier(clf, n_estimators=30, random_state=0) sc = cross_val_score(model, baseScaled, Y, cv=cv) accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0) dfAcc = pd.DataFrame(accArray, columns=['mean', 'std'], index=None) dfAcc = (dfAcc*100).apply(np.floor) dfAcc bagging(dfAcc, 'knnBagging.png', '#191970', '#F08080') ###Output _____no_output_____
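###Markdown The five single-model blocks and the four bagging blocks above repeat one evaluation pattern, so it can be collapsed into loops. A self-contained sketch with synthetic stand-ins for the notebook's preprocessed bases and for its n and cv settings (none of these values come from the original data). ###Code
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier as knn
from sklearn.preprocessing import StandardScaler

# Stand-ins for baseScaled / basePca70 / basePca50 built from synthetic data
X_raw, Y = make_classification(n_samples=300, n_features=20, random_state=0)
baseScaled = StandardScaler().fit_transform(X_raw)
bases = {
    'baseScaled': baseScaled,
    'basePca70': PCA(n_components=0.70).fit_transform(baseScaled),
    'basePca50': PCA(n_components=0.50).fit_transform(baseScaled),
}
n, cv = 5, 10  # arbitrary stand-ins for the notebook's n and cv

# Single KNN per base: cross-validated [mean, 2*std], as in the blocks above
rows = []
for X in bases.values():
    sc = cross_val_score(knn(n_neighbors=n), X, Y, cv=cv)
    rows.append([sc.mean(), sc.std() * 2])
dfAcc = (pd.DataFrame(rows, columns=['mean', 'std'], index=list(bases)) * 100).apply(np.floor)
print(dfAcc)

# Bagging the same KNN on the best base, sweeping the ensemble size
best = bases[dfAcc['mean'].idxmax()]
for n_estimators in (5, 10, 20, 30):
    model = BaggingClassifier(knn(n_neighbors=n), n_estimators=n_estimators, random_state=0)
    print(n_estimators, cross_val_score(model, best, Y, cv=cv).mean())
###Output _____no_output_____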
site/en-snapshot/probability/examples/Fitting_DPMM_Using_pSGLD.ipynb
###Markdown Copyright 2018 The TensorFlow Probability Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Fitting Dirichlet Process Mixture Model Using Preconditioned Stochastic Gradient Langevin Dynamics View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook In this notebook, we will demonstrate how to cluster a large number of samples and infer the number of clusters simultaneously by fitting a Dirichlet Process Mixture of Gaussian distribution. We use Preconditioned Stochastic Gradient Langevin Dynamics (pSGLD) for inference. Table of contents 1. Samples1. Model1. Optimization1. Visualize the result 4.1. Clustered result 4.2. Visualize uncertainty 4.3. Mean and scale of selected mixture component 4.4. Mixture weight of each mixture component 4.5. Convergence of $\alpha$ 4.6. Inferred number of clusters over iterations 4.7. Fitting the model using RMSProp1. Conclusion --- 1. Samples First, we set up a toy dataset. We generate 50,000 random samples from three bivariate Gaussian distributions. ###Code import time import numpy as np import matplotlib.pyplot as plt import tensorflow.compat.v1 as tf import tensorflow_probability as tfp plt.style.use('ggplot') tfd = tfp.distributions def session_options(enable_gpu_ram_resizing=True): """Convenience function which sets common `tf.Session` options.""" config = tf.ConfigProto() config.log_device_placement = True if enable_gpu_ram_resizing: # `allow_growth=True` makes it possible to connect multiple colabs to your # GPU. Otherwise the colab malloc's all GPU ram. config.gpu_options.allow_growth = True return config def reset_sess(config=None): """Convenience function to create the TF graph and session, or reset them.""" if config is None: config = session_options() tf.reset_default_graph() global sess try: sess.close() except: pass sess = tf.InteractiveSession(config=config) # For reproducibility rng = np.random.RandomState(seed=45) tf.set_random_seed(76) # Precision dtype = np.float64 # Number of training samples num_samples = 50000 # Ground truth loc values which we will infer later on. The scale is 1. true_loc = np.array([[-4, -4], [0, 0], [4, 4]], dtype) true_components_num, dims = true_loc.shape # Generate training samples from ground truth loc true_hidden_component = rng.randint(0, true_components_num, num_samples) observations = (true_loc[true_hidden_component] + rng.randn(num_samples, dims).astype(dtype)) # Visualize samples plt.scatter(observations[:, 0], observations[:, 1], 1) plt.axis([-10, 10, -10, 10]) plt.show() ###Output _____no_output_____ ###Markdown 2. Model Here, we define a Dirichlet Process Mixture of Gaussian distribution with Symmetric Dirichlet Prior. Throughout the notebook, vector quantities are written in bold. 
Over $i\in\{1,\ldots,N\}$ samples, the model with a mixture of $j \in\{1,\ldots,K\}$ Gaussian distributions is formulated as follow:$$\begin{align*}p(\boldsymbol{x}_1,\cdots, \boldsymbol{x}_N) &=\prod_{i=1}^N \text{GMM}(x_i), \\&\,\quad \text{with}\;\text{GMM}(x_i)=\sum_{j=1}^K\pi_j\text{Normal}(x_i\,|\,\text{loc}=\boldsymbol{\mu_{j}},\,\text{scale}=\boldsymbol{\sigma_{j}})\\ \end{align*}$$where:$$\begin{align*}x_i&\sim \text{Normal}(\text{loc}=\boldsymbol{\mu}_{z_i},\,\text{scale}=\boldsymbol{\sigma}_{z_i}) \\z_i &= \text{Categorical}(\text{prob}=\boldsymbol{\pi}),\\&\,\quad \text{with}\;\boldsymbol{\pi}=\{\pi_1,\cdots,\pi_K\}\\\boldsymbol{\pi}&\sim\text{Dirichlet}(\text{concentration}=\{\frac{\alpha}{K},\cdots,\frac{\alpha}{K}\})\\\alpha&\sim \text{InverseGamma}(\text{concentration}=1,\,\text{rate}=1)\\\boldsymbol{\mu_j} &\sim \text{Normal}(\text{loc}=\boldsymbol{0}, \,\text{scale}=\boldsymbol{1})\\\boldsymbol{\sigma_j} &\sim \text{InverseGamma}(\text{concentration}=\boldsymbol{1},\,\text{rate}=\boldsymbol{1})\\\end{align*}$$Our goal is to assign each $x_i$ to the $j$th cluster through $z_i$ which represents the inferred index of a cluster.For an ideal Dirichlet Mixture Model, $K$ is set to $\infty$. However, it is known that one can approximate a Dirichlet Mixture Model with a sufficiently large $K$. Note that although we arbitrarily set an initial value of $K$, an optimal number of clusters is also inferred through optimization, unlike a simple Gaussian Mixture Model.In this notebook, we use a bivariate Gaussian distribution as a mixture component and set $K$ to 30. ###Code reset_sess() # Upperbound on K max_cluster_num = 30 # Define trainable variables. mix_probs = tf.nn.softmax( tf.Variable( name='mix_probs', initial_value=np.ones([max_cluster_num], dtype) / max_cluster_num)) loc = tf.Variable( name='loc', initial_value=np.random.uniform( low=-9, #set around minimum value of sample value high=9, #set around maximum value of sample value size=[max_cluster_num, dims])) precision = tf.nn.softplus(tf.Variable( name='precision', initial_value= np.ones([max_cluster_num, dims], dtype=dtype))) alpha = tf.nn.softplus(tf.Variable( name='alpha', initial_value= np.ones([1], dtype=dtype))) training_vals = [mix_probs, alpha, loc, precision] # Prior distributions of the training variables #Use symmetric Dirichlet prior as finite approximation of Dirichlet process. rv_symmetric_dirichlet_process = tfd.Dirichlet( concentration=np.ones(max_cluster_num, dtype) * alpha / max_cluster_num, name='rv_sdp') rv_loc = tfd.Independent( tfd.Normal( loc=tf.zeros([max_cluster_num, dims], dtype=dtype), scale=tf.ones([max_cluster_num, dims], dtype=dtype)), reinterpreted_batch_ndims=1, name='rv_loc') rv_precision = tfd.Independent( tfd.InverseGamma( concentration=np.ones([max_cluster_num, dims], dtype), rate=np.ones([max_cluster_num, dims], dtype)), reinterpreted_batch_ndims=1, name='rv_precision') rv_alpha = tfd.InverseGamma( concentration=np.ones([1], dtype=dtype), rate=np.ones([1]), name='rv_alpha') # Define mixture model rv_observations = tfd.MixtureSameFamily( mixture_distribution=tfd.Categorical(probs=mix_probs), components_distribution=tfd.MultivariateNormalDiag( loc=loc, scale_diag=precision)) ###Output _____no_output_____ ###Markdown 3. Optimization We optimize the model with Preconditioned Stochastic Gradient Langevin Dynamics (pSGLD), which enables us to optimize a model over a large number of samples in a mini-batch gradient descent manner. 
To update parameters $\boldsymbol{\theta}\equiv\{\boldsymbol{\pi},\,\alpha,\, \boldsymbol{\mu_j},\,\boldsymbol{\sigma_j}\}$ in $t\,$th iteration with mini-batch size $M$, the update is sampled as:$$\begin{align*}\Delta \boldsymbol { \theta } _ { t } & \sim \frac { \epsilon _ { t } } { 2 } \bigl[ G \left( \boldsymbol { \theta } _ { t } \right) \bigl( \nabla _ { \boldsymbol { \theta } } \log p \left( \boldsymbol { \theta } _ { t } \right) + \frac { N } { M } \sum _ { k = 1 } ^ { M } \nabla _ \boldsymbol { \theta } \log \text{GMM}(x_{t_k})\bigr) + \sum_\boldsymbol{\theta}\nabla_\theta G \left( \boldsymbol { \theta } _ { t } \right) \bigr]\\&+ G ^ { \frac { 1 } { 2 } } \left( \boldsymbol { \theta } _ { t } \right) \text { Normal } \left( \text{loc}=\boldsymbol{0} ,\, \text{scale}=\epsilon _ { t }\boldsymbol{1} \right)\\\end{align*}$$In the above equation, $\epsilon _ { t }$ is learning rate at $t\,$th iteration and $\log p(\theta_t)$ is a sum of log prior distributions of $\theta$. $G ( \boldsymbol { \theta } _ { t })$ is a preconditioner which adjusts the scale of the gradient of each parameter. ###Code # Learning rates and decay starter_learning_rate = 1e-6 end_learning_rate = 1e-10 decay_steps = 1e4 # Number of training steps training_steps = 10000 # Mini-batch size batch_size = 20 # Sample size for parameter posteriors sample_size = 100 ###Output _____no_output_____ ###Markdown We will use the joint log probability of the likelihood $\text{GMM}(x_{t_k})$ and the prior probabilities $p(\theta_t)$ as the loss function for pSGLD.Note that as specified in the [API of pSGLD](https://www.tensorflow.org/probability/api_docs/python/tfp/optimizer/StochasticGradientLangevinDynamics), we need to divide the sum of the prior probabilities by sample size $N$. ###Code # Placeholder for mini-batch observations_tensor = tf.compat.v1.placeholder(dtype, shape=[batch_size, dims]) # Define joint log probabilities # Notice that each prior probability should be divided by num_samples and # likelihood is divided by batch_size for pSGLD optimization. log_prob_parts = [ rv_loc.log_prob(loc) / num_samples, rv_precision.log_prob(precision) / num_samples, rv_alpha.log_prob(alpha) / num_samples, rv_symmetric_dirichlet_process.log_prob(mix_probs)[..., tf.newaxis] / num_samples, rv_observations.log_prob(observations_tensor) / batch_size ] joint_log_prob = tf.reduce_sum(tf.concat(log_prob_parts, axis=-1), axis=-1) # Make mini-batch generator dx = tf.compat.v1.data.Dataset.from_tensor_slices(observations)\ .shuffle(500).repeat().batch(batch_size) iterator = tf.compat.v1.data.make_one_shot_iterator(dx) next_batch = iterator.get_next() # Define learning rate scheduling global_step = tf.Variable(0, trainable=False) learning_rate = tf.train.polynomial_decay( starter_learning_rate, global_step, decay_steps, end_learning_rate, power=1.) # Set up the optimizer. Don't forget to set data_size=num_samples. 
optimizer_kernel = tfp.optimizer.StochasticGradientLangevinDynamics( learning_rate=learning_rate, preconditioner_decay_rate=0.99, burnin=1500, data_size=num_samples) train_op = optimizer_kernel.minimize(-joint_log_prob) # Arrays to store samples mean_mix_probs_mtx = np.zeros([training_steps, max_cluster_num]) mean_alpha_mtx = np.zeros([training_steps, 1]) mean_loc_mtx = np.zeros([training_steps, max_cluster_num, dims]) mean_precision_mtx = np.zeros([training_steps, max_cluster_num, dims]) init = tf.global_variables_initializer() sess.run(init) start = time.time() for it in range(training_steps): [ mean_mix_probs_mtx[it, :], mean_alpha_mtx[it, 0], mean_loc_mtx[it, :, :], mean_precision_mtx[it, :, :], _ ] = sess.run([ *training_vals, train_op ], feed_dict={ observations_tensor: sess.run(next_batch)}) elapsed_time_psgld = time.time() - start print("Elapsed time: {} seconds".format(elapsed_time_psgld)) # Take mean over the last sample_size iterations mean_mix_probs_ = mean_mix_probs_mtx[-sample_size:, :].mean(axis=0) mean_alpha_ = mean_alpha_mtx[-sample_size:, :].mean(axis=0) mean_loc_ = mean_loc_mtx[-sample_size:, :].mean(axis=0) mean_precision_ = mean_precision_mtx[-sample_size:, :].mean(axis=0) ###Output Elapsed time: 309.8013095855713 seconds ###Markdown 4. Visualize the result 4.1. Clustered result First, we visualize the result of clustering. For assigning each sample $x_i$ to a cluster $j$, we calculate the posterior of $z_i$ as:$$\begin{align*}j = \underset{z_i}{\arg\max}\,p(z_i\,|\,x_i,\,\boldsymbol{\theta})\end{align*}$$ ###Code loc_for_posterior = tf.compat.v1.placeholder( dtype, [None, max_cluster_num, dims], name='loc_for_posterior') precision_for_posterior = tf.compat.v1.placeholder( dtype, [None, max_cluster_num, dims], name='precision_for_posterior') mix_probs_for_posterior = tf.compat.v1.placeholder( dtype, [None, max_cluster_num], name='mix_probs_for_posterior') # Posterior of z (unnormalized) unnormalized_posterior = tfd.MultivariateNormalDiag( loc=loc_for_posterior, scale_diag=precision_for_posterior)\ .log_prob(tf.expand_dims(tf.expand_dims(observations, axis=1), axis=1))\ + tf.log(mix_probs_for_posterior[tf.newaxis, ...]) # Posterior of z (normalized over latent states) posterior = unnormalized_posterior\ - tf.reduce_logsumexp(unnormalized_posterior, axis=-1)[..., tf.newaxis] cluster_asgmt = sess.run(tf.argmax( tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={ loc_for_posterior: mean_loc_mtx[-sample_size:, :], precision_for_posterior: mean_precision_mtx[-sample_size:, :], mix_probs_for_posterior: mean_mix_probs_mtx[-sample_size:, :]}) idxs, count = np.unique(cluster_asgmt, return_counts=True) print('Number of inferred clusters = {}\n'.format(len(count))) np.set_printoptions(formatter={'float': '{: 0.3f}'.format}) print('Number of elements in each cluster = {}\n'.format(count)) def convert_int_elements_to_consecutive_numbers_in(array): unique_int_elements = np.unique(array) for consecutive_number, unique_int_element in enumerate(unique_int_elements): array[array == unique_int_element] = consecutive_number return array cmap = plt.get_cmap('tab10') plt.scatter( observations[:, 0], observations[:, 1], 1, c=cmap(convert_int_elements_to_consecutive_numbers_in(cluster_asgmt))) plt.axis([-10, 10, -10, 10]) plt.show() ###Output Number of inferred clusters = 3 Number of elements in each cluster = [16911 16645 16444] ###Markdown We can see an almost equal number of samples are assigned to appropriate clusters and the model has successfully inferred the correct number of
clusters as well. 4.2. Visualize uncertainty Here, we look at the uncertainty of the clustering result by visualizing it for each sample. We calculate uncertainty by using entropy:$$\begin{align*}\text{Uncertainty}_\text{entropy} = -\frac{1}{K}\sum^{K}_{z_i=1}\sum^{O}_{l=1}p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l)\log p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l)\end{align*}$$In pSGLD, we treat the value of a training parameter at each iteration as a sample from its posterior distribution. Thus, we calculate entropy over values from $O$ iterations for each parameter. The final entropy value is calculated by averaging entropies of all the cluster assignments. ###Code # Calculate entropy posterior_in_exponential = tf.exp(posterior) uncertainty_in_entropy = tf.reduce_mean(-tf.reduce_sum( posterior_in_exponential * posterior, axis=1), axis=1) uncertainty_in_entropy_ = sess.run(uncertainty_in_entropy, feed_dict={ loc_for_posterior: mean_loc_mtx[-sample_size:, :], precision_for_posterior: mean_precision_mtx[-sample_size:, :], mix_probs_for_posterior: mean_mix_probs_mtx[-sample_size:, :] }) plt.title('Entropy') sc = plt.scatter(observations[:, 0], observations[:, 1], 1, c=uncertainty_in_entropy_, cmap=plt.cm.viridis_r) cbar = plt.colorbar(sc, fraction=0.046, pad=0.04, ticks=[uncertainty_in_entropy_.min(), uncertainty_in_entropy_.max()]) cbar.ax.set_yticklabels(['low', 'high']) cbar.set_label('Uncertainty', rotation=270) plt.show() ###Output _____no_output_____ ###Markdown In the above graph, less luminance represents more uncertainty. We can see the samples near the boundaries of the clusters have especially higher uncertainty. This is intuitively true, as those samples are difficult to cluster. 4.3. Mean and scale of selected mixture component Next, we look at selected clusters' $\mu_j$ and $\sigma_j$. ###Code for idx, number_of_samples in zip(idxs, count): print( 'Component id = {}, Number of elements = {}' .format(idx, number_of_samples)) print( 'Mean loc = {}, Mean scale = {}\n' .format(mean_loc_[idx, :], mean_precision_[idx, :])) ###Output Component id = 0, Number of elements = 16911 Mean loc = [-4.030 -4.113], Mean scale = [ 0.994 0.972] Component id = 4, Number of elements = 16645 Mean loc = [ 3.999 4.069], Mean scale = [ 1.038 1.046] Component id = 5, Number of elements = 16444 Mean loc = [-0.005 -0.023], Mean scale = [ 0.967 1.025] ###Markdown Again, the $\boldsymbol{\mu_j}$ and $\boldsymbol{\sigma_j}$ are close to the ground truth. 4.4 Mixture weight of each mixture component We also look at the inferred mixture weights. ###Code plt.ylabel('Mean posterior of mixture weight') plt.xlabel('Component') plt.bar(range(0, max_cluster_num), mean_mix_probs_) plt.show() ###Output _____no_output_____ ###Markdown We see only a few (three) mixture components have significant weights and the rest of the weights have values close to zero. This also shows the model successfully inferred the correct number of mixture components which constitute the distribution of the samples. 4.5. Convergence of $\alpha$ We look at the convergence of the Dirichlet distribution's concentration parameter $\alpha$. ###Code print('Value of inferred alpha = {0:.3f}\n'.format(mean_alpha_[0])) plt.ylabel('Sample value of alpha') plt.xlabel('Iteration') plt.plot(mean_alpha_mtx) plt.show() ###Output Value of inferred alpha = 0.679 ###Markdown Considering the fact that a smaller $\alpha$ results in a smaller expected number of clusters in a Dirichlet mixture model, the model seems to be learning the optimal number of clusters over iterations. 4.6.
Inferred number of clusters over iterations We visualize how the inferred number of clusters changes over iterations.To do so, we infer the number of clusters over the iterations. ###Code step = sample_size num_of_iterations = 50 estimated_num_of_clusters = [] interval = (training_steps - step) // (num_of_iterations - 1) iterations = np.asarray(range(step, training_steps+1, interval)) for iteration in iterations: start_position = iteration-step end_position = iteration result = sess.run(tf.argmax( tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={ loc_for_posterior: mean_loc_mtx[start_position:end_position, :], precision_for_posterior: mean_precision_mtx[start_position:end_position, :], mix_probs_for_posterior: mean_mix_probs_mtx[start_position:end_position, :]}) idxs, count = np.unique(result, return_counts=True) estimated_num_of_clusters.append(len(count)) plt.ylabel('Number of inferred clusters') plt.xlabel('Iteration') plt.yticks(np.arange(1, max(estimated_num_of_clusters) + 1, 1)) plt.plot(iterations - 1, estimated_num_of_clusters) plt.show() ###Output _____no_output_____ ###Markdown Over the iterations, the number of clusters is getting closer to three. With the result of convergence of $\alpha$ to smaller value over iterations, we can see the model is successfully learning the parameters to infer an optimal number of clusters.Interestingly, we can see the inference has already converged to the correct number of clusters in the early iterations, unlike $\alpha$ converged in much later iterations. 4.7. Fitting the model using RMSProp In this section, to see the effectiveness of Monte Carlo sampling scheme of pSGLD, we use RMSProp to fit the model. We choose RMSProp for comparison because it comes without the sampling scheme and pSGLD is based on RMSProp. ###Code # Learning rates and decay starter_learning_rate_rmsprop = 1e-2 end_learning_rate_rmsprop = 1e-4 decay_steps_rmsprop = 1e4 # Number of training steps training_steps_rmsprop = 50000 # Mini-batch size batch_size_rmsprop = 20 # Define trainable variables. mix_probs_rmsprop = tf.nn.softmax( tf.Variable( name='mix_probs_rmsprop', initial_value=np.ones([max_cluster_num], dtype) / max_cluster_num)) loc_rmsprop = tf.Variable( name='loc_rmsprop', initial_value=np.zeros([max_cluster_num, dims], dtype) + np.random.uniform( low=-9, #set around minimum value of sample value high=9, #set around maximum value of sample value size=[max_cluster_num, dims])) precision_rmsprop = tf.nn.softplus(tf.Variable( name='precision_rmsprop', initial_value= np.ones([max_cluster_num, dims], dtype=dtype))) alpha_rmsprop = tf.nn.softplus(tf.Variable( name='alpha_rmsprop', initial_value= np.ones([1], dtype=dtype))) training_vals_rmsprop =\ [mix_probs_rmsprop, alpha_rmsprop, loc_rmsprop, precision_rmsprop] # Prior distributions of the training variables #Use symmetric Dirichlet prior as finite approximation of Dirichlet process. 
rv_symmetric_dirichlet_process_rmsprop = tfd.Dirichlet( concentration=np.ones(max_cluster_num, dtype) * alpha_rmsprop / max_cluster_num, name='rv_sdp_rmsprop') rv_loc_rmsprop = tfd.Independent( tfd.Normal( loc=tf.zeros([max_cluster_num, dims], dtype=dtype), scale=tf.ones([max_cluster_num, dims], dtype=dtype)), reinterpreted_batch_ndims=1, name='rv_loc_rmsprop') rv_precision_rmsprop = tfd.Independent( tfd.InverseGamma( concentration=np.ones([max_cluster_num, dims], dtype), rate=np.ones([max_cluster_num, dims], dtype)), reinterpreted_batch_ndims=1, name='rv_precision_rmsprop') rv_alpha_rmsprop = tfd.InverseGamma( concentration=np.ones([1], dtype=dtype), rate=np.ones([1]), name='rv_alpha_rmsprop') # Define mixture model rv_observations_rmsprop = tfd.MixtureSameFamily( mixture_distribution=tfd.Categorical(probs=mix_probs_rmsprop), components_distribution=tfd.MultivariateNormalDiag( loc=loc_rmsprop, scale_diag=precision_rmsprop)) log_prob_parts_rmsprop = [ rv_loc_rmsprop.log_prob(loc_rmsprop), rv_precision_rmsprop.log_prob(precision_rmsprop), rv_alpha_rmsprop.log_prob(alpha_rmsprop), rv_symmetric_dirichlet_process_rmsprop .log_prob(mix_probs_rmsprop)[..., tf.newaxis], rv_observations_rmsprop.log_prob(observations_tensor) * num_samples / batch_size ] joint_log_prob_rmsprop = tf.reduce_sum( tf.concat(log_prob_parts_rmsprop, axis=-1), axis=-1) # Define learning rate scheduling global_step_rmsprop = tf.Variable(0, trainable=False) learning_rate = tf.train.polynomial_decay( starter_learning_rate_rmsprop, global_step_rmsprop, decay_steps_rmsprop, end_learning_rate_rmsprop, power=1.) # Set up the RMSProp optimizer. optimizer_kernel_rmsprop = tf.train.RMSPropOptimizer( learning_rate=learning_rate, decay=0.99) train_op_rmsprop = optimizer_kernel_rmsprop.minimize(-joint_log_prob_rmsprop) init_rmsprop = tf.global_variables_initializer() sess.run(init_rmsprop) start = time.time() for it in range(training_steps_rmsprop): [ _ ] = sess.run([ train_op_rmsprop ], feed_dict={ observations_tensor: sess.run(next_batch)}) elapsed_time_rmsprop = time.time() - start print("RMSProp elapsed_time: {} seconds ({} iterations)" .format(elapsed_time_rmsprop, training_steps_rmsprop)) print("pSGLD elapsed_time: {} seconds ({} iterations)" .format(elapsed_time_psgld, training_steps)) mix_probs_rmsprop_, alpha_rmsprop_, loc_rmsprop_, precision_rmsprop_ =\ sess.run(training_vals_rmsprop) ###Output RMSProp elapsed_time: 53.7574200630188 seconds (50000 iterations) pSGLD elapsed_time: 309.8013095855713 seconds (10000 iterations) ###Markdown Compared to pSGLD, although the number of iterations for RMSProp is larger, optimization by RMSProp is much faster. Next, we look at the clustering result.
###Code cluster_asgmt_rmsprop = sess.run(tf.argmax( tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={ loc_for_posterior: loc_rmsprop_[tf.newaxis, :], precision_for_posterior: precision_rmsprop_[tf.newaxis, :], mix_probs_for_posterior: mix_probs_rmsprop_[tf.newaxis, :]}) idxs, count = np.unique(cluster_asgmt_rmsprop, return_counts=True) print('Number of inferred clusters = {}\n'.format(len(count))) np.set_printoptions(formatter={'float': '{: 0.3f}'.format}) print('Number of elements in each cluster = {}\n'.format(count)) cmap = plt.get_cmap('tab10') plt.scatter( observations[:, 0], observations[:, 1], 1, c=cmap(convert_int_elements_to_consecutive_numbers_in( cluster_asgmt_rmsprop))) plt.axis([-10, 10, -10, 10]) plt.show() ###Output Number of inferred clusters = 4 Number of elements in each cluster = [ 1644 15267 16647 16442] ###Markdown The number of clusters was not correctly inferred by RMSProp optimization in our experiment. We also look at the mixture weight. ###Code plt.ylabel('MAP inference of mixture weight') plt.xlabel('Component') plt.bar(range(0, max_cluster_num), mix_probs_rmsprop_) plt.show() ###Output _____no_output_____
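###Markdown A compact numeric companion to the weight plots: counting components whose weight exceeds a small threshold gives the same at-a-glance cluster count. The weights and the 1e-2 cut-off below are illustrative, not values from this run. ###Code
import numpy as np

# Illustrative mixture weights: three dominant components, the rest near zero
weights = np.array([0.331, 0.002, 0.334, 0.001, 0.329] + [0.0003] * 10)
threshold = 1e-2  # arbitrary cut-off for a "significant" component

significant = np.flatnonzero(weights > threshold)
print('Number of significant components =', significant.size)
print('Component ids =', significant)
###Output _____no_output_____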
###Markdown Copyright 2018 The TensorFlow Probability Authors.Licensed under the Apache License, Version 2.0 (the "License"); ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Fitting Dirichlet Process Mixture Model Using Preconditioned Stochastic Gradient Langevin Dynamics View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook In this notebook, we will demonstrate how to cluster a large number of samples and infer the number of clusters simultaneously by fitting a Dirichlet Process Mixture of Gaussian distribution. We use Preconditioned Stochastic Gradient Langevin Dynamics (pSGLD) for inference. Table of contents 1. Samples1. Model1. Optimization1. Visualize the result 4.1. Clustered result 4.2. Visualize uncertainty 4.3. Mean and scale of selected mixture component 4.4. Mixture weight of each mixture component 4.5. Convergence of $\alpha$ 4.6. Inferred number of clusters over iterations 4.7. Fitting the model using RMSProp1. Conclusion --- 1. Samples First, we set up a toy dataset. We generate 50,000 random samples from three bivariate Gaussian distributions. ###Code import time import numpy as np import matplotlib.pyplot as plt import tensorflow.compat.v1 as tf import tensorflow_probability as tfp plt.style.use('ggplot') tfd = tfp.distributions def session_options(enable_gpu_ram_resizing=True): """Convenience function which sets common `tf.Session` options.""" config = tf.ConfigProto() config.log_device_placement = True if enable_gpu_ram_resizing: # `allow_growth=True` makes it possible to connect multiple colabs to your # GPU. Otherwise the colab malloc's all GPU ram.
config.gpu_options.allow_growth = True return config def reset_sess(config=None): """Convenience function to create the TF graph and session, or reset them.""" if config is None: config = session_options() tf.reset_default_graph() global sess try: sess.close() except: pass sess = tf.InteractiveSession(config=config) # For reproducibility rng = np.random.RandomState(seed=45) tf.set_random_seed(76) # Precision dtype = np.float64 # Number of training samples num_samples = 50000 # Ground truth loc values which we will infer later on. The scale is 1. true_loc = np.array([[-4, -4], [0, 0], [4, 4]], dtype) true_components_num, dims = true_loc.shape # Generate training samples from ground truth loc true_hidden_component = rng.randint(0, true_components_num, num_samples) observations = (true_loc[true_hidden_component] + rng.randn(num_samples, dims).astype(dtype)) # Visualize samples plt.scatter(observations[:, 0], observations[:, 1], 1) plt.axis([-10, 10, -10, 10]) plt.show() ###Output _____no_output_____ ###Markdown 2. Model Here, we define a Dirichlet Process Mixture of Gaussian distribution with Symmetric Dirichlet Prior. Throughout the notebook, vector quantities are written in bold. Over $i\in\{1,\ldots,N\}$ samples, the model with a mixture of $j \in\{1,\ldots,K\}$ Gaussian distributions is formulated as follow:$$\begin{align*}p(\boldsymbol{x}_1,\cdots, \boldsymbol{x}_N) &=\prod_{i=1}^N \text{GMM}(x_i), \\&\,\quad \text{with}\;\text{GMM}(x_i)=\sum_{j=1}^K\pi_j\text{Normal}(x_i\,|\,\text{loc}=\boldsymbol{\mu_{j}},\,\text{scale}=\boldsymbol{\sigma_{j}})\\ \end{align*}$$where:$$\begin{align*}x_i&\sim \text{Normal}(\text{loc}=\boldsymbol{\mu}_{z_i},\,\text{scale}=\boldsymbol{\sigma}_{z_i}) \\z_i &= \text{Categorical}(\text{prob}=\boldsymbol{\pi}),\\&\,\quad \text{with}\;\boldsymbol{\pi}=\{\pi_1,\cdots,\pi_K\}\\\boldsymbol{\pi}&\sim\text{Dirichlet}(\text{concentration}=\{\frac{\alpha}{K},\cdots,\frac{\alpha}{K}\})\\\alpha&\sim \text{InverseGamma}(\text{concentration}=1,\,\text{rate}=1)\\\boldsymbol{\mu_j} &\sim \text{Normal}(\text{loc}=\boldsymbol{0}, \,\text{scale}=\boldsymbol{1})\\\boldsymbol{\sigma_j} &\sim \text{InverseGamma}(\text{concentration}=\boldsymbol{1},\,\text{rate}=\boldsymbol{1})\\\end{align*}$$Our goal is to assign each $x_i$ to the $j$th cluster through $z_i$ which represents the inferred index of a cluster.For an ideal Dirichlet Mixture Model, $K$ is set to $\infty$. However, it is known that one can approximate a Dirichlet Mixture Model with a sufficiently large $K$. Note that although we arbitrarily set an initial value of $K$, an optimal number of clusters is also inferred through optimization, unlike a simple Gaussian Mixture Model.In this notebook, we use a bivariate Gaussian distribution as a mixture component and set $K$ to 30. ###Code reset_sess() # Upperbound on K max_cluster_num = 30 # Define trainable variables. 
mix_probs = tf.nn.softmax( tf.Variable( name='mix_probs', initial_value=np.ones([max_cluster_num], dtype) / max_cluster_num)) loc = tf.Variable( name='loc', initial_value=np.random.uniform( low=-9, #set around minimum value of sample value high=9, #set around maximum value of sample value size=[max_cluster_num, dims])) precision = tf.nn.softplus(tf.Variable( name='precision', initial_value= np.ones([max_cluster_num, dims], dtype=dtype))) alpha = tf.nn.softplus(tf.Variable( name='alpha', initial_value= np.ones([1], dtype=dtype))) training_vals = [mix_probs, alpha, loc, precision] # Prior distributions of the training variables #Use symmetric Dirichlet prior as finite approximation of Dirichlet process. rv_symmetric_dirichlet_process = tfd.Dirichlet( concentration=np.ones(max_cluster_num, dtype) * alpha / max_cluster_num, name='rv_sdp') rv_loc = tfd.Independent( tfd.Normal( loc=tf.zeros([max_cluster_num, dims], dtype=dtype), scale=tf.ones([max_cluster_num, dims], dtype=dtype)), reinterpreted_batch_ndims=1, name='rv_loc') rv_precision = tfd.Independent( tfd.InverseGamma( concentration=np.ones([max_cluster_num, dims], dtype), rate=np.ones([max_cluster_num, dims], dtype)), reinterpreted_batch_ndims=1, name='rv_precision') rv_alpha = tfd.InverseGamma( concentration=np.ones([1], dtype=dtype), rate=np.ones([1]), name='rv_alpha') # Define mixture model rv_observations = tfd.MixtureSameFamily( mixture_distribution=tfd.Categorical(probs=mix_probs), components_distribution=tfd.MultivariateNormalDiag( loc=loc, scale_diag=precision)) ###Output _____no_output_____ ###Markdown 3. Optimization We optimize the model with Preconditioned Stochastic Gradient Langevin Dynamics (pSGLD), which enables us to optimize a model over a large number of samples in a mini-batch gradient descent manner. To update parameters $\boldsymbol{\theta}\equiv\{\boldsymbol{\pi},\,\alpha,\, \boldsymbol{\mu_j},\,\boldsymbol{\sigma_j}\}$ in $t\,$th iteration with mini-batch size $M$, the update is sampled as:$$\begin{align*}\Delta \boldsymbol { \theta } _ { t } & \sim \frac { \epsilon _ { t } } { 2 } \bigl[ G \left( \boldsymbol { \theta } _ { t } \right) \bigl( \nabla _ { \boldsymbol { \theta } } \log p \left( \boldsymbol { \theta } _ { t } \right) + \frac { N } { M } \sum _ { k = 1 } ^ { M } \nabla _ \boldsymbol { \theta } \log \text{GMM}(x_{t_k})\bigr) + \sum_\boldsymbol{\theta}\nabla_\theta G \left( \boldsymbol { \theta } _ { t } \right) \bigr]\\&+ G ^ { \frac { 1 } { 2 } } \left( \boldsymbol { \theta } _ { t } \right) \text { Normal } \left( \text{loc}=\boldsymbol{0} ,\, \text{scale}=\epsilon _ { t }\boldsymbol{1} \right)\\\end{align*}$$In the above equation, $\epsilon _ { t }$ is learning rate at $t\,$th iteration and $\log p(\theta_t)$ is a sum of log prior distributions of $\theta$. $G ( \boldsymbol { \theta } _ { t })$ is a preconditioner which adjusts the scale of the gradient of each parameter. 
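To make this update rule more concrete, the following is a minimal, illustrative NumPy sketch of a single preconditioned update for one parameter vector. It is only a sketch: the variable names are placeholders, the $\sum_{\boldsymbol{\theta}}\nabla_\theta G$ correction term is omitted, and the noise is drawn with covariance $\epsilon_t G$ (conventions for this scaling differ slightly between write-ups). The actual optimization below is done by `tfp.optimizer.StochasticGradientLangevinDynamics`. ###Code
# Illustrative-only sketch of one pSGLD-style update; not used elsewhere in this notebook.
import numpy as np

def psgld_toy_step(theta, stoch_grad_log_post, v, step_size, decay=0.99, eps=1e-5, rng=np.random):
    """One preconditioned SGLD update (the gradient-of-preconditioner term is omitted)."""
    # RMSProp-style running average of squared gradients defines a diagonal preconditioner G.
    v = decay * v + (1.0 - decay) * stoch_grad_log_post ** 2
    G = 1.0 / (np.sqrt(v) + eps)
    # Preconditioned gradient step on the (stochastic) log posterior ...
    drift = 0.5 * step_size * G * stoch_grad_log_post
    # ... plus Gaussian noise scaled elementwise by sqrt(step_size * G),
    # which turns the optimizer into an approximate posterior sampler.
    noise = np.sqrt(step_size * G) * rng.standard_normal(theta.shape)
    return theta + drift + noise, v
###Output _____no_output_____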
###Code # Learning rates and decay starter_learning_rate = 1e-6 end_learning_rate = 1e-10 decay_steps = 1e4 # Number of training steps training_steps = 10000 # Mini-batch size batch_size = 20 # Sample size for parameter posteriors sample_size = 100 ###Output _____no_output_____ ###Markdown We will use the joint log probability of the likelihood $\text{GMM}(x_{t_k})$ and the prior probabilities $p(\theta_t)$ as the loss function for pSGLD.Note that as specified in the [API of pSGLD](https://www.tensorflow.org/probability/api_docs/python/tfp/optimizer/StochasticGradientLangevinDynamics), we need to divide the sum of the prior probabilities by sample size $N$. ###Code # Placeholder for mini-batch observations_tensor = tf.compat.v1.placeholder(dtype, shape=[batch_size, dims]) # Define joint log probabilities # Notice that each prior probability should be divided by num_samples and # likelihood is divided by batch_size for pSGLD optimization. log_prob_parts = [ rv_loc.log_prob(loc) / num_samples, rv_precision.log_prob(precision) / num_samples, rv_alpha.log_prob(alpha) / num_samples, rv_symmetric_dirichlet_process.log_prob(mix_probs)[..., tf.newaxis] / num_samples, rv_observations.log_prob(observations_tensor) / batch_size ] joint_log_prob = tf.reduce_sum(tf.concat(log_prob_parts, axis=-1), axis=-1) # Make mini-batch generator dx = tf.compat.v1.data.Dataset.from_tensor_slices(observations)\ .shuffle(500).repeat().batch(batch_size) iterator = tf.compat.v1.data.make_one_shot_iterator(dx) next_batch = iterator.get_next() # Define learning rate scheduling global_step = tf.Variable(0, trainable=False) learning_rate = tf.train.polynomial_decay( starter_learning_rate, global_step, decay_steps, end_learning_rate, power=1.) # Set up the optimizer. Don't forget to set data_size=num_samples. optimizer_kernel = tfp.optimizer.StochasticGradientLangevinDynamics( learning_rate=learning_rate, preconditioner_decay_rate=0.99, burnin=1500, data_size=num_samples) train_op = optimizer_kernel.minimize(-joint_log_prob) # Arrays to store samples mean_mix_probs_mtx = np.zeros([training_steps, max_cluster_num]) mean_alpha_mtx = np.zeros([training_steps, 1]) mean_loc_mtx = np.zeros([training_steps, max_cluster_num, dims]) mean_precision_mtx = np.zeros([training_steps, max_cluster_num, dims]) init = tf.global_variables_initializer() sess.run(init) start = time.time() for it in range(training_steps): [ mean_mix_probs_mtx[it, :], mean_alpha_mtx[it, 0], mean_loc_mtx[it, :, :], mean_precision_mtx[it, :, :], _ ] = sess.run([ *training_vals, train_op ], feed_dict={ observations_tensor: sess.run(next_batch)}) elapsed_time_psgld = time.time() - start print("Elapsed time: {} seconds".format(elapsed_time_psgld)) # Take mean over the last sample_size iterations mean_mix_probs_ = mean_mix_probs_mtx[-sample_size:, :].mean(axis=0) mean_alpha_ = mean_alpha_mtx[-sample_size:, :].mean(axis=0) mean_loc_ = mean_loc_mtx[-sample_size:, :].mean(axis=0) mean_precision_ = mean_precision_mtx[-sample_size:, :].mean(axis=0) ###Output Elapsed time: 309.8013095855713 seconds ###Markdown 4. Visualize the result 4.1. 
Clustered result First, we visualize the result of clustering.For assigning each sample $x_i$ to a cluster $j$, we calculate the posterior of $z_i$ as:$$\begin{align*}j = \underset{z_i}{\arg\max}\,p(z_i\,|\,x_i,\,\boldsymbol{\theta})\end{align*}$$ ###Code loc_for_posterior = tf.compat.v1.placeholder( dtype, [None, max_cluster_num, dims], name='loc_for_posterior') precision_for_posterior = tf.compat.v1.placeholder( dtype, [None, max_cluster_num, dims], name='precision_for_posterior') mix_probs_for_posterior = tf.compat.v1.placeholder( dtype, [None, max_cluster_num], name='mix_probs_for_posterior') # Posterior of z (unnormalized) unnomarlized_posterior = tfd.MultivariateNormalDiag( loc=loc_for_posterior, scale_diag=precision_for_posterior)\ .log_prob(tf.expand_dims(tf.expand_dims(observations, axis=1), axis=1))\ + tf.log(mix_probs_for_posterior[tf.newaxis, ...]) # Posterior of z (normarizad over latent states) posterior = unnomarlized_posterior\ - tf.reduce_logsumexp(unnomarlized_posterior, axis=-1)[..., tf.newaxis] cluster_asgmt = sess.run(tf.argmax( tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={ loc_for_posterior: mean_loc_mtx[-sample_size:, :], precision_for_posterior: mean_precision_mtx[-sample_size:, :], mix_probs_for_posterior: mean_mix_probs_mtx[-sample_size:, :]}) idxs, count = np.unique(cluster_asgmt, return_counts=True) print('Number of inferred clusters = {}\n'.format(len(count))) np.set_printoptions(formatter={'float': '{: 0.3f}'.format}) print('Number of elements in each cluster = {}\n'.format(count)) def convert_int_elements_to_consecutive_numbers_in(array): unique_int_elements = np.unique(array) for consecutive_number, unique_int_element in enumerate(unique_int_elements): array[array == unique_int_element] = consecutive_number return array cmap = plt.get_cmap('tab10') plt.scatter( observations[:, 0], observations[:, 1], 1, c=cmap(convert_int_elements_to_consecutive_numbers_in(cluster_asgmt))) plt.axis([-10, 10, -10, 10]) plt.show() ###Output Number of inferred clusters = 3 Number of elements in each cluster = [16911 16645 16444] ###Markdown We can see an almost equal number of samples are assigned to appropriate clusters and the model has successfully inferred the correct number of clusters as well. 4.2. Visualize uncertainty Here, we look at the uncertainty of the clustering result by visualizing it for each sample.We calculate uncertainty by using entropy:$$\begin{align*}\text{Uncertainty}_\text{entropy} = -\frac{1}{K}\sum^{K}_{z_i=1}\sum^{O}_{l=1}p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l)\log p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l)\end{align*}$$In pSGLD, we treat the value of a training parameter at each iteration as a sample from its posterior distribution. Thus, we calculate entropy over values from $O$ iterations for each parameter. The final entropy value is calculated by averaging entropies of all the cluster assignments. 
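As a quick numerical intuition for this score (a small aside, not part of the original analysis): a point assigned confidently to one cluster has a low-entropy posterior, while a point near a cluster boundary has a high-entropy one. ###Code
# Tiny NumPy illustration of the entropy score for two hypothetical cluster posteriors.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=np.float64)
    return -np.sum(p * np.log(p))

confident = [0.98, 0.01, 0.01]  # point deep inside one cluster
ambiguous = [0.40, 0.35, 0.25]  # point near a cluster boundary
print(entropy(confident))  # ~0.11 nats -> low uncertainty
print(entropy(ambiguous))  # ~1.08 nats -> high uncertainty
###Output _____no_output_____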
###Code # Calculate entropy posterior_in_exponential = tf.exp(posterior) uncertainty_in_entropy = tf.reduce_mean(-tf.reduce_sum( posterior_in_exponential * posterior, axis=1), axis=1) uncertainty_in_entropy_ = sess.run(uncertainty_in_entropy, feed_dict={ loc_for_posterior: mean_loc_mtx[-sample_size:, :], precision_for_posterior: mean_precision_mtx[-sample_size:, :], mix_probs_for_posterior: mean_mix_probs_mtx[-sample_size:, :] }) plt.title('Entropy') sc = plt.scatter(observations[:, 0], observations[:, 1], 1, c=uncertainty_in_entropy_, cmap=plt.cm.viridis_r) cbar = plt.colorbar(sc, fraction=0.046, pad=0.04, ticks=[uncertainty_in_entropy_.min(), uncertainty_in_entropy_.max()]) cbar.ax.set_yticklabels(['low', 'high']) cbar.set_label('Uncertainty', rotation=270) plt.show() ###Output _____no_output_____ ###Markdown In the above graph, less luminance represents more uncertainty. We can see the samples near the boundaries of the clusters have especially higher uncertainty. This is intuitively true, that those samples are difficult to cluster. 4.3. Mean and scale of selected mixture component Next, we look at selected clusters' $\mu_j$ and $\sigma_j$. ###Code for idx, numbe_of_samples in zip(idxs, count): print( 'Component id = {}, Number of elements = {}' .format(idx, numbe_of_samples)) print( 'Mean loc = {}, Mean scale = {}\n' .format(mean_loc_[idx, :], mean_precision_[idx, :])) ###Output Component id = 0, Number of elements = 16911 Mean loc = [-4.030 -4.113], Mean scale = [ 0.994 0.972] Component id = 4, Number of elements = 16645 Mean loc = [ 3.999 4.069], Mean scale = [ 1.038 1.046] Component id = 5, Number of elements = 16444 Mean loc = [-0.005 -0.023], Mean scale = [ 0.967 1.025] ###Markdown Again, the $\boldsymbol{\mu_j}$ and $\boldsymbol{\sigma_j}$ close to the ground truth. 4.4 Mixture weight of each mixture component We also look at inferred mixture weights. ###Code plt.ylabel('Mean posterior of mixture weight') plt.xlabel('Component') plt.bar(range(0, max_cluster_num), mean_mix_probs_) plt.show() ###Output _____no_output_____ ###Markdown We see only a few (three) mixture component have significant weights and the rest of the weights have values close to zero. This also shows the model successfully inferred the correct number of mixture components which constitutes the distribution of the samples. 4.5. Convergence of $\alpha$ We look at convergence of Dirichlet distribution's concentration parameter $\alpha$. ###Code print('Value of inferred alpha = {0:.3f}\n'.format(mean_alpha_[0])) plt.ylabel('Sample value of alpha') plt.xlabel('Iteration') plt.plot(mean_alpha_mtx) plt.show() ###Output Value of inferred alpha = 0.679 ###Markdown Considering the fact that smaller $\alpha$ results in less expected number of clusters in a Dirichlet mixture model, the model seems to be learning the optimal number of clusters over iterations. 4.6. Inferred number of clusters over iterations We visualize how the inferred number of clusters changes over iterations.To do so, we infer the number of clusters over the iterations. 
###Code step = sample_size num_of_iterations = 50 estimated_num_of_clusters = [] interval = (training_steps - step) // (num_of_iterations - 1) iterations = np.asarray(range(step, training_steps+1, interval)) for iteration in iterations: start_position = iteration-step end_position = iteration result = sess.run(tf.argmax( tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={ loc_for_posterior: mean_loc_mtx[start_position:end_position, :], precision_for_posterior: mean_precision_mtx[start_position:end_position, :], mix_probs_for_posterior: mean_mix_probs_mtx[start_position:end_position, :]}) idxs, count = np.unique(result, return_counts=True) estimated_num_of_clusters.append(len(count)) plt.ylabel('Number of inferred clusters') plt.xlabel('Iteration') plt.yticks(np.arange(1, max(estimated_num_of_clusters) + 1, 1)) plt.plot(iterations - 1, estimated_num_of_clusters) plt.show() ###Output _____no_output_____ ###Markdown Over the iterations, the number of clusters is getting closer to three. With the result of convergence of $\alpha$ to smaller value over iterations, we can see the model is successfully learning the parameters to infer an optimal number of clusters.Interestingly, we can see the inference has already converged to the correct number of clusters in the early iterations, unlike $\alpha$ converged in much later iterations. 4.7. Fitting the model using RMSProp In this section, to see the effectiveness of Monte Carlo sampling scheme of pSGLD, we use RMSProp to fit the model. We choose RMSProp for comparison because it comes without the sampling scheme and pSGLD is based on RMSProp. ###Code # Learning rates and decay starter_learning_rate_rmsprop = 1e-2 end_learning_rate_rmsprop = 1e-4 decay_steps_rmsprop = 1e4 # Number of training steps training_steps_rmsprop = 50000 # Mini-batch size batch_size_rmsprop = 20 # Define trainable variables. mix_probs_rmsprop = tf.nn.softmax( tf.Variable( name='mix_probs_rmsprop', initial_value=np.ones([max_cluster_num], dtype) / max_cluster_num)) loc_rmsprop = tf.Variable( name='loc_rmsprop', initial_value=np.zeros([max_cluster_num, dims], dtype) + np.random.uniform( low=-9, #set around minimum value of sample value high=9, #set around maximum value of sample value size=[max_cluster_num, dims])) precision_rmsprop = tf.nn.softplus(tf.Variable( name='precision_rmsprop', initial_value= np.ones([max_cluster_num, dims], dtype=dtype))) alpha_rmsprop = tf.nn.softplus(tf.Variable( name='alpha_rmsprop', initial_value= np.ones([1], dtype=dtype))) training_vals_rmsprop =\ [mix_probs_rmsprop, alpha_rmsprop, loc_rmsprop, precision_rmsprop] # Prior distributions of the training variables #Use symmetric Dirichlet prior as finite approximation of Dirichlet process. 
rv_symmetric_dirichlet_process_rmsprop = tfd.Dirichlet( concentration=np.ones(max_cluster_num, dtype) * alpha_rmsprop / max_cluster_num, name='rv_sdp_rmsprop') rv_loc_rmsprop = tfd.Independent( tfd.Normal( loc=tf.zeros([max_cluster_num, dims], dtype=dtype), scale=tf.ones([max_cluster_num, dims], dtype=dtype)), reinterpreted_batch_ndims=1, name='rv_loc_rmsprop') rv_precision_rmsprop = tfd.Independent( tfd.InverseGamma( concentration=np.ones([max_cluster_num, dims], dtype), rate=np.ones([max_cluster_num, dims], dtype)), reinterpreted_batch_ndims=1, name='rv_precision_rmsprop') rv_alpha_rmsprop = tfd.InverseGamma( concentration=np.ones([1], dtype=dtype), rate=np.ones([1]), name='rv_alpha_rmsprop') # Define mixture model rv_observations_rmsprop = tfd.MixtureSameFamily( mixture_distribution=tfd.Categorical(probs=mix_probs_rmsprop), components_distribution=tfd.MultivariateNormalDiag( loc=loc_rmsprop, scale_diag=precision_rmsprop)) log_prob_parts_rmsprop = [ rv_loc_rmsprop.log_prob(loc_rmsprop), rv_precision_rmsprop.log_prob(precision_rmsprop), rv_alpha_rmsprop.log_prob(alpha_rmsprop), rv_symmetric_dirichlet_process_rmsprop .log_prob(mix_probs_rmsprop)[..., tf.newaxis], rv_observations_rmsprop.log_prob(observations_tensor) * num_samples / batch_size ] joint_log_prob_rmsprop = tf.reduce_sum( tf.concat(log_prob_parts_rmsprop, axis=-1), axis=-1) # Define learning rate scheduling global_step_rmsprop = tf.Variable(0, trainable=False) learning_rate = tf.train.polynomial_decay( starter_learning_rate_rmsprop, global_step_rmsprop, decay_steps_rmsprop, end_learning_rate_rmsprop, power=1.) # Set up the RMSProp optimizer. optimizer_kernel_rmsprop = tf.train.RMSPropOptimizer( learning_rate=learning_rate, decay=0.99) train_op_rmsprop = optimizer_kernel_rmsprop.minimize(-joint_log_prob_rmsprop) init_rmsprop = tf.global_variables_initializer() sess.run(init_rmsprop) start = time.time() for it in range(training_steps_rmsprop): [ _ ] = sess.run([ train_op_rmsprop ], feed_dict={ observations_tensor: sess.run(next_batch)}) elapsed_time_rmsprop = time.time() - start print("RMSProp elapsed_time: {} seconds ({} iterations)" .format(elapsed_time_rmsprop, training_steps_rmsprop)) print("pSGLD elapsed_time: {} seconds ({} iterations)" .format(elapsed_time_psgld, training_steps)) mix_probs_rmsprop_, alpha_rmsprop_, loc_rmsprop_, precision_rmsprop_ =\ sess.run(training_vals_rmsprop) ###Output RMSProp elapsed_time: 53.7574200630188 seconds (50000 iterations) pSGLD elapsed_time: 309.8013095855713 seconds (10000 iterations) ###Markdown Compared to pSGLD, RMSProp runs for many more iterations here yet finishes much sooner: the timings above work out to roughly 1.1 ms per RMSProp step versus about 31 ms per pSGLD step. Next, we look at the clustering result.
###Code cluster_asgmt_rmsprop = sess.run(tf.argmax( tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={ loc_for_posterior: loc_rmsprop_[tf.newaxis, :], precision_for_posterior: precision_rmsprop_[tf.newaxis, :], mix_probs_for_posterior: mix_probs_rmsprop_[tf.newaxis, :]}) idxs, count = np.unique(cluster_asgmt_rmsprop, return_counts=True) print('Number of inferred clusters = {}\n'.format(len(count))) np.set_printoptions(formatter={'float': '{: 0.3f}'.format}) print('Number of elements in each cluster = {}\n'.format(count)) cmap = plt.get_cmap('tab10') plt.scatter( observations[:, 0], observations[:, 1], 1, c=cmap(convert_int_elements_to_consecutive_numbers_in( cluster_asgmt_rmsprop))) plt.axis([-10, 10, -10, 10]) plt.show() ###Output Number of inferred clusters = 4 Number of elements in each cluster = [ 1644 15267 16647 16442] ###Markdown In our experiment, RMSProp optimization did not infer the correct number of clusters. We also look at the mixture weights. ###Code plt.ylabel('MAP inference of mixture weight') plt.xlabel('Component') plt.bar(range(0, max_cluster_num), mix_probs_rmsprop_) plt.show() ###Output _____no_output_____
p1-lane-lines/P1.ipynb
###Markdown Self-Driving Car Engineer Nanodegree Project: **Finding Lane Lines on the Road** ***In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/322/view) for this project.---Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**--- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**--- Your output should look something like this (above) after detecting line segments using the helper functions below Your goal is to connect/average/extrapolate line segments to get output like this **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. 
Also, consult the forums for more troubleshooting tips.** Import Packages ###Code #importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 %matplotlib inline ###Output _____no_output_____ ###Markdown Read in an Image ###Code #reading in an image image = mpimg.imread('test_images/solidWhiteRight.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimensions:', image.shape) plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray') ###Output _____no_output_____ ###Markdown Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**`cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images`cv2.cvtColor()` to grayscale or change color`cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson! ###Code import math def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies a Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def draw_lines(img, lines, color=[255, 0, 0], thickness=2): """ NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the line segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4). Think about things like separating line segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). 
If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ for line in lines: for x1,y1,x2,y2 in line: cv2.line(img, (x1, y1), (x2, y2), color, thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. """ lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines) return line_img, lines # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., λ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + λ NOTE: initial_img and img must be the same shape! """ return cv2.addWeighted(initial_img, α, img, β, λ) ###Output _____no_output_____ ###Markdown Test ImagesBuild your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.** ###Code import os image = mpimg.imread('test_images/solidWhiteRight.jpg') os.listdir("test_images/") ###Output _____no_output_____ ###Markdown Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. ###Code # TODO: Build your pipeline that will draw lane lines on the test_images # then save them to the test_images directory. def create_grayscale_image(image): """ Convert a color image to grayscale. """ gray_img = grayscale(image) return gray_img gray_img = create_grayscale_image(image) plt.imshow(gray_img, cmap='gray') def create_blur_image(image, kernel_size): """ Apply Gaussian blur to an image. """ blur_img = gaussian_blur(image, kernel_size) return blur_img blur_img = create_blur_image(gray_img, 7) plt.imshow(blur_img, cmap='gray') def create_canny_image(image, low, high): """ Perform Canny edge detection on image. """ canny_img = canny(image, low, high) return canny_img canny_img = create_canny_image(blur_img, 100, 250) plt.imshow(canny_img) def find_lines_roi(image): """ Find all lines within a region of interest. """ xs = image.shape[1] ys = image.shape[0] verts = np.array([[(xs / 10, ys), (9 * xs / 10, ys), (21 * xs / 40, ys / 2), (19 * xs / 40, ys / 2)]], dtype=np.int32) roi_img = region_of_interest(image, verts) cv2.polylines(roi_img, [verts], True, (0,255,255), 3) return roi_img roi_img = find_lines_roi(canny_img) plt.imshow(roi_img) def create_hough_image(image): """ Perform Hough line detection on image. 
""" rho = 2 # distance resolution in pixels of the Hough grid theta = np.pi/180 # angular resolution in radians of the Hough grid threshold = 25 # minimum number of votes (intersections in Hough grid cell) min_line_len = 40 # minimum number of pixels making up a line max_line_gap = 50 # maximum gap in pixels between connectable line segments hough_img, lines = hough_lines(image, rho, theta, threshold, min_line_len, max_line_gap) return hough_img, lines hough_img, lines = create_hough_image(roi_img) plt.imshow(hough_img) ###Output _____no_output_____ ###Markdown Things to try: * filter based on color (HSV?) to get white and yellow * try to distinguish solid and dashed lines * for final image overlay use different colored lines for different lane line types * calculate curvature and reject lines if curvature changes too rapidly * use a moving average so that we do not reject sharp turns ###Code from sys import maxsize from math import sqrt def intersection_point(x1, x2, y1, y2, m1, m2): """ Calculate intersection point between two lines. The lines should be defined by: y = mx + b. """ b1 = y1 - m1 * x1 b2 = y2 - m2 * x2 x = (b2 - b1) / (m1 - m2) y = m1 * x + b1 return int(x), int(y) def extract_lines(lines_img, orig_img, lines): """ Extract road lane lines from an image. """ # Get slope of each line and find the longest line with positive slope # and the longest line with negative slope where each of the lines must # have one of its end points in the bottom fraction (1/4?) of the image. pos_line = [] neg_line = [] pos_x_int = -maxsize neg_x_int = maxsize xs = lines_img.shape[1] ys = lines_img.shape[0] max_pos_len = 0 max_neg_len = 0 pos_m = 0 neg_m = 0 pos_lines = {} neg_lines = {} pos_lines['lines'] = [] pos_lines['m'] = [] neg_lines['lines'] = [] neg_lines['m'] = [] for line in lines: # Make sure all points are int, not float. i = 0 for pt in line[0]: new_pt = int(round(pt)) lines[0][0][i] = new_pt i = i + 1 for x1, y1, x2, y2 in line: dx = x2 - x1 if dx == 0: # Protect against divide by zero. if not pos_line.any(): pos_line = line pos_lines.append(line) elif not neg_line.any(): neg_line = line neg_lines.append(line) else: continue dy = y2 - y1 m = dy / dx line_len = dx * dx + dy * dy # Not necessary to take sqrt() so we can save cycles. 
if m > 0: if line_len > max_pos_len: pos_slope_max = m pos_line = line pos_x_int = int(round((ys - y1) / m + x1)) max_pos_len = line_len pos_m = m pos_lines['lines'].append(line) pos_lines['m'].append(m) elif m < 0: if line_len > max_neg_len: neg_slope_max = m neg_line = line neg_x_int = int(round((ys - y1) / m + x1)) max_neg_len = line_len neg_m = m neg_lines['lines'].append(line) neg_lines['m'].append(m) lane_lines = [] lane_lines.append(pos_line) lane_lines.append(neg_line) pos_line_ext = [[pos_line[0][0], pos_line[0][1], pos_x_int, ys]] neg_line_ext = [[neg_line[0][0], neg_line[0][1], neg_x_int, ys]] lane_lines.append(pos_line_ext) lane_lines.append(neg_line_ext) pos_m = np.mean(pos_lines['m']) neg_m = np.mean(neg_lines['m']) x_int, y_int = intersection_point(pos_line[0][0], neg_line[0][0], pos_line[0][1], neg_line[0][1], pos_m, neg_m) pos_line_int = [[pos_line[0][0], pos_line[0][1], x_int, y_int]] neg_line_int = [[neg_line[0][0], neg_line[0][1], x_int, y_int]] lane_lines.append(pos_line_int) lane_lines.append(neg_line_int) draw_img = orig_img.copy() draw_lines(draw_img, lane_lines, color=(0, 0, 255), thickness=5) #print('num_pos_lines: ', num_pos_lines, 'num_neg_lines', num_neg_lines) #print('pos_line: ', pos_line, 'pos_x_int: ', pos_x_int) #print('neg_line: ', neg_line, 'neg_x_int: ', neg_x_int) #print('pos_line_ext', pos_line_ext) #print('neg_line_ext', neg_line_ext) return draw_img lane_lines_img = extract_lines(hough_img, image, lines) plt.imshow(lane_lines_img) def detect_lane_lines(image): """ Detect lane lines in an image and annotate the image. """ orig_img = image.copy() gray_img = create_grayscale_image(orig_img) blur_img = create_blur_image(gray_img, 7) canny_img = create_canny_image(blur_img, 100, 250) roi_img = find_lines_roi(canny_img) hough_img, lines = create_hough_image(roi_img) lane_lines_img = extract_lines(hough_img, image, lines) return lane_lines_img line_detect_img = detect_lane_lines(image) plt.imshow(line_detect_img) ###Output _____no_output_____ ###Markdown Test on VideosYou know what's cooler than drawing lanes over images? Drawing lanes over video!We can test our solution on two provided videos:`solidWhiteRight.mp4``solidYellowLeft.mp4`**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.****If you get an error that looks like this:**```NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download()```**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.** ###Code # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # NOTE: The output you return should be a color image (3 channel) for processing video below # TODO: put your pipeline here, # you should return the final output (image where lines are drawn on lanes) result = detect_lane_lines(image) return result test_img = process_image(image) plt.imshow(test_img) ###Output _____no_output_____ ###Markdown Let's try the one with the solid white lane on the right first ... 
###Code white_output = 'test_videos_output/solidWhiteRight.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5) clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! %time white_clip.write_videofile(white_output, audio=False) ###Output _____no_output_____ ###Markdown Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. ###Code HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output)) ###Output _____no_output_____ ###Markdown Improve the draw_lines() function**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".****Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky! ###Code yellow_output = 'test_videos_output/solidYellowLeft.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5) #clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4') #yellow_clip = clip2.fl_image(process_image) #%time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output)) ###Output _____no_output_____ ###Markdown Writeup and SubmissionIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. Optional ChallengeTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? 
If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! ###Code challenge_output = 'test_videos_output/challenge.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5) #clip3 = VideoFileClip('test_videos/challenge.mp4') #challenge_clip = clip3.fl_image(process_image) #%time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output)) ###Output _____no_output_____
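###Markdown One concrete way to attack the "filter based on color (HSV?)" idea from the list above, which also tends to help on the challenge video, is to mask white and yellow regions before edge detection. The sketch below is an untested illustration; the threshold values are rough assumptions that would need tuning. ###Code
# Hedged sketch: HSV color masking for white/yellow lane markings (thresholds are guesses).
def mask_lane_colors(rgb_image):
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
    # White markings: any hue, low saturation, high value.
    white_mask = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([180, 40, 255]))
    # Yellow markings: hue roughly 15-35 on OpenCV's 0-180 hue scale.
    yellow_mask = cv2.inRange(hsv, np.array([15, 80, 120]), np.array([35, 255, 255]))
    color_mask = cv2.bitwise_or(white_mask, yellow_mask)
    return cv2.bitwise_and(rgb_image, rgb_image, mask=color_mask)

# Possible use: call mask_lane_colors(image) at the start of detect_lane_lines(),
# before grayscaling, so that Canny mostly sees lane-colored pixels.
###Output _____no_output_____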
2-BuildingTrainingNN/TFData/TFData.ipynb
###Markdown How to reduce the training time of a deep learning model using tf.data Importing the required libraries ###Code import tensorflow as tf config = tf.compat.v1.ConfigProto() config.gpu_options.allow_growth = True sess = tf.compat.v1.Session(config=config) import numpy as np import pandas as pd import pathlib import os from os import getcwd from glob import glob import multiprocessing from tensorflow.keras.applications.mobilenet_v2 import preprocess_input ###Output _____no_output_____
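###Markdown As a minimal, hedged sketch of the kind of input pipeline the title refers to (parallel preprocessing, caching and prefetching are the main tf.data levers for reducing training time), consider the following. The file pattern, image size and batch size are placeholders, not values taken from this project. ###Code
# Illustrative tf.data input pipeline (file pattern, image size and batch size are placeholders).
AUTOTUNE = tf.data.experimental.AUTOTUNE

def load_and_preprocess(path):
    image = tf.io.read_file(path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [224, 224])
    return preprocess_input(image)  # MobileNetV2 preprocessing imported above

files = tf.data.Dataset.list_files('images/*.jpg')  # placeholder pattern
dataset = (files
           .map(load_and_preprocess, num_parallel_calls=AUTOTUNE)  # parallel CPU work
           .cache()                                                # decode/resize only once
           .shuffle(buffer_size=1000)
           .batch(32)
           .prefetch(AUTOTUNE))                                    # overlap input with training
###Output _____no_output_____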
S11/EVA4P2_S11_annotated_encoder_decoder_deployment_v1a.ipynb
###Markdown ###Code !wget https://github.com/EVA4-RS-Group/Phase2/blob/master/S11/de_eng_translation_model.pt !wget https://github.com/EVA4-RS-Group/Phase2/blob/master/S11/SRC_vocab.pickle !wget https://github.com/EVA4-RS-Group/Phase2/blob/master/S11/SRC_vocab_stoi.pickle !wget https://github.com/EVA4-RS-Group/Phase2/blob/master/S11/TRG_vocab.pickle !wget https://github.com/EVA4-RS-Group/Phase2/blob/master/S11/TRG_vocab_stoi.pickle !pip install torch==1.5.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html #!pip install sacrebleu %matplotlib inline import numpy as np import torch import torch.nn as nn import torch.nn.functional as F import math, copy, time from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence DEVICE=torch.device('cpu') class EncoderDecoder(nn.Module): """ A standard Encoder-Decoder architecture. Base for this and many other models. """ def __init__(self, encoder, decoder, src_embed, trg_embed, generator): super(EncoderDecoder, self).__init__() self.encoder = encoder self.decoder = decoder self.src_embed = src_embed self.trg_embed = trg_embed self.generator = generator def forward(self, src, trg, src_mask, trg_mask, src_lengths, trg_lengths): """Take in and process masked src and target sequences.""" encoder_hidden, encoder_final = self.encode(src, src_mask, src_lengths) return self.decode(encoder_hidden, encoder_final, src_mask, trg, trg_mask) def encode(self, src, src_mask, src_lengths): return self.encoder(self.src_embed(src), src_mask, src_lengths) def decode(self, encoder_hidden, encoder_final, src_mask, trg, trg_mask, decoder_hidden=None): return self.decoder(self.trg_embed(trg), encoder_hidden, encoder_final, src_mask, trg_mask, hidden=decoder_hidden) class Generator(nn.Module): """Define standard linear + softmax generation step.""" def __init__(self, hidden_size, vocab_size): super(Generator, self).__init__() self.proj = nn.Linear(hidden_size, vocab_size, bias=False) def forward(self, x): return F.log_softmax(self.proj(x), dim=-1) class Encoder(nn.Module): """Encodes a sequence of word embeddings""" def __init__(self, input_size, hidden_size, num_layers=1, dropout=0.): super(Encoder, self).__init__() self.num_layers = num_layers self.rnn = nn.GRU(input_size, hidden_size, num_layers, batch_first=True, bidirectional=True, dropout=dropout) def forward(self, x, mask, lengths): """ Applies a bidirectional GRU to sequence of embeddings x. The input mini-batch x needs to be sorted by length. x should have dimensions [batch, time, dim]. 
""" packed = pack_padded_sequence(x, lengths, batch_first=True) output, final = self.rnn(packed) output, _ = pad_packed_sequence(output, batch_first=True) # we need to manually concatenate the final states for both directions fwd_final = final[0:final.size(0):2] bwd_final = final[1:final.size(0):2] final = torch.cat([fwd_final, bwd_final], dim=2) # [num_layers, batch, 2*dim] return output, final class Decoder(nn.Module): """A conditional RNN decoder with attention.""" def __init__(self, emb_size, hidden_size, attention, num_layers=1, dropout=0.5, bridge=True): super(Decoder, self).__init__() self.hidden_size = hidden_size self.num_layers = num_layers self.attention = attention self.dropout = dropout self.rnn = nn.GRU(emb_size + 2*hidden_size, hidden_size, num_layers, batch_first=True, dropout=dropout) # to initialize from the final encoder state self.bridge = nn.Linear(2*hidden_size, hidden_size, bias=True) if bridge else None self.dropout_layer = nn.Dropout(p=dropout) self.pre_output_layer = nn.Linear(hidden_size + 2*hidden_size + emb_size, hidden_size, bias=False) def forward_step(self, prev_embed, encoder_hidden, src_mask, proj_key, hidden): """Perform a single decoder step (1 word)""" # compute context vector using attention mechanism query = hidden[-1].unsqueeze(1) # [#layers, B, D] -> [B, 1, D] context, attn_probs = self.attention( query=query, proj_key=proj_key, value=encoder_hidden, mask=src_mask) # update rnn hidden state rnn_input = torch.cat([prev_embed, context], dim=2) output, hidden = self.rnn(rnn_input, hidden) pre_output = torch.cat([prev_embed, output, context], dim=2) pre_output = self.dropout_layer(pre_output) pre_output = self.pre_output_layer(pre_output) return output, hidden, pre_output def forward(self, trg_embed, encoder_hidden, encoder_final, src_mask, trg_mask, hidden=None, max_len=None): """Unroll the decoder one step at a time.""" # the maximum number of steps to unroll the RNN if max_len is None: max_len = trg_mask.size(-1) # initialize decoder hidden state if hidden is None: hidden = self.init_hidden(encoder_final) # pre-compute projected encoder hidden states # (the "keys" for the attention mechanism) # this is only done for efficiency proj_key = self.attention.key_layer(encoder_hidden) # here we store all intermediate hidden states and pre-output vectors decoder_states = [] pre_output_vectors = [] # unroll the decoder RNN for max_len steps for i in range(max_len): prev_embed = trg_embed[:, i].unsqueeze(1) output, hidden, pre_output = self.forward_step( prev_embed, encoder_hidden, src_mask, proj_key, hidden) decoder_states.append(output) pre_output_vectors.append(pre_output) decoder_states = torch.cat(decoder_states, dim=1) pre_output_vectors = torch.cat(pre_output_vectors, dim=1) return decoder_states, hidden, pre_output_vectors # [B, N, D] def init_hidden(self, encoder_final): """Returns the initial decoder state, conditioned on the final encoder state.""" if encoder_final is None: return None # start with zeros return torch.tanh(self.bridge(encoder_final)) class BahdanauAttention(nn.Module): """Implements Bahdanau (MLP) attention""" def __init__(self, hidden_size, key_size=None, query_size=None): super(BahdanauAttention, self).__init__() # We assume a bi-directional encoder so key_size is 2*hidden_size key_size = 2 * hidden_size if key_size is None else key_size query_size = hidden_size if query_size is None else query_size self.key_layer = nn.Linear(key_size, hidden_size, bias=False) self.query_layer = nn.Linear(query_size, hidden_size, bias=False) 
self.energy_layer = nn.Linear(hidden_size, 1, bias=False) # to store attention scores self.alphas = None def forward(self, query=None, proj_key=None, value=None, mask=None): assert mask is not None, "mask is required" # We first project the query (the decoder state). # The projected keys (the encoder states) were already pre-computated. query = self.query_layer(query) # Calculate scores. scores = self.energy_layer(torch.tanh(query + proj_key)) scores = scores.squeeze(2).unsqueeze(1) # Mask out invalid positions. # The mask marks valid positions so we invert it using `mask & 0`. scores.data.masked_fill_(mask == 0, -float('inf')) # Turn scores to probabilities. alphas = F.softmax(scores, dim=-1) self.alphas = alphas # The context vector is the weighted sum of the values. context = torch.bmm(alphas, value) # context shape: [B, 1, 2D], alphas shape: [B, 1, M] return context, alphas def make_model(src_vocab, tgt_vocab, emb_size=256, hidden_size=512, num_layers=1, dropout=0.1): "Helper: Construct a model from hyperparameters." attention = BahdanauAttention(hidden_size) model = EncoderDecoder( Encoder(emb_size, hidden_size, num_layers=num_layers, dropout=dropout), Decoder(emb_size, hidden_size, attention, num_layers=num_layers, dropout=dropout), nn.Embedding(src_vocab, emb_size), nn.Embedding(tgt_vocab, emb_size), Generator(hidden_size, tgt_vocab)) return model import pickle with open('/content/SRC_vocab.pickle', 'rb') as handle: SRC_vocab = pickle.load(handle) with open('/content/TRG_vocab.pickle', 'rb') as handle: TRG_vocab = pickle.load(handle) with open('/content/SRC_vocab_stoi.pickle', 'rb') as handle2: SRC_vocab_stoi = pickle.load(handle2) with open('/content/TRG_vocab_stoi.pickle', 'rb') as handle3: TRG_vocab_stoi = pickle.load(handle3) model = make_model(len(SRC_vocab), len(TRG_vocab), emb_size=256, hidden_size=256, num_layers=1, dropout=0.2) model = model.to(DEVICE) model.load_state_dict(torch.load('/content/de_eng_translation_model.pt', map_location=DEVICE)) def greedy_decode(model, src, src_mask, src_lengths, max_len=100, sos_index=1, eos_index=None): """Greedily decode a sentence.""" with torch.no_grad(): encoder_hidden, encoder_final = model.encode(src, src_mask, src_lengths) prev_y = torch.ones(1, 1).fill_(sos_index).type_as(src) trg_mask = torch.ones_like(prev_y) output = [] attention_scores = [] hidden = None for i in range(max_len): with torch.no_grad(): out, hidden, pre_output = model.decode( encoder_hidden, encoder_final, src_mask, prev_y, trg_mask, hidden) # we predict from the pre-output layer, which is # a combination of Decoder state, prev emb, and context prob = model.generator(pre_output[:, -1]) _, next_word = torch.max(prob, dim=1) next_word = next_word.data.item() output.append(next_word) prev_y = torch.ones(1, 1).type_as(src).fill_(next_word) attention_scores.append(model.decoder.attention.alphas.cpu().numpy()) output = np.array(output) # cut off everything starting from </s> # (only when eos_index provided) if eos_index is not None: first_eos = np.where(output==eos_index)[0] if len(first_eos) > 0: output = output[:first_eos[0]] return output, np.concatenate(attention_scores, axis=1) def lookup_words(x, vocab=None): if vocab is not None: x = [vocab.itos[i] for i in x] return [str(t) for t in x] def tokenize(string): lst = ["?","!",".","'s","'t", ",", "'nt"] for i in lst: string = string.replace(i," "+i) return string.split(" ") def translate(src): tokenized = tokenize(src)#[tok.text for tok in spacy_de.tokenizer(src)] tokenized.append("</s>") # print(tokenized) indexed 
= [[SRC_vocab_stoi[t] for t in tokenized]] srcpr = [lookup_words(x, SRC_vocab) for x in indexed] # print("German : ",[" ".join(y) for y in srcpr][0]) srcs = torch.LongTensor(indexed).to(DEVICE) length = torch.LongTensor([len(indexed[0])]).to(DEVICE) mask = (srcs != 0).unsqueeze(-2).to(DEVICE) # print(srcs) # print(mask) # print(length) pred, attention = greedy_decode( model, srcs, mask, length, max_len=25, sos_index=TRG_vocab_stoi[SOS_TOKEN], eos_index=TRG_vocab_stoi[EOS_TOKEN]) # print(pred) english = lookup_words(pred, TRG_vocab) return " ".join(english) #src = input("Enter the german text: ") src = "mein vater hörte sich auf seinem kleinen ," print(translate(src)) ###Output Src : mein vater hörte sich auf seinem kleinen , grauen radio die <unk> der bbc an . Trg : my father was listening to bbc news on his small , gray radio . Pred: my father stopped on the little bit of radio , radio radio <unk> the bbc .
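###Markdown Note that the decoding cell above assumes `SOS_TOKEN` and `EOS_TOKEN` are already defined in the session. Definitions consistent with the `"</s>"` marker appended in `translate` would look like the sketch below; the start-of-sequence string is an assumption and must match the vocabulary used at training time. ###Code
# Assumed special-token definitions: EOS matches the "</s>" appended in translate();
# the SOS string is an assumption and must match the trained vocabulary.
SOS_TOKEN = "<s>"
EOS_TOKEN = "</s>"
###Output _____no_output_____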
docs/source/example_notebooks/fonts.ipynb
###Markdown Choosing the correct fonts ###Code import matplotlib.pyplot as plt from tueplots import figsizes, fonts # Increase the resolution of all the plots below plt.rcParams.update({"figure.dpi": 150}) # "Better" figure size to display the font-changes plt.rcParams.update(figsizes.icml2022_half()) ###Output _____no_output_____ ###Markdown Fonts in `tueplots` follow the same interface as the other settings.There are some pre-defined font recipes for a few journals, and they return dictionaries that are compatible with `matplotlib.pyplot.rcParams.update()`. ###Code fonts.neurips2021() ###Output _____no_output_____ ###Markdown Compare the following default font to some of the alternatives that we provide: ###Code fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.jmlr2001_tex(family="serif")) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.jmlr2001_tex(family="sans-serif")) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.neurips2021()) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.neurips2021(family="sans-serif")) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.neurips2021_tex(family="sans-serif")) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.neurips2021_tex(family="serif")) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.beamer_moml()) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() with plt.rc_context(fonts.icml2022()): fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() with plt.rc_context(fonts.icml2022_tex(family="sans-serif")): fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() with plt.rc_context(fonts.icml2022_tex(family="serif")): fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.aistats2022_tex(family="serif")) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.aistats2022_tex(family="sans-serif")) 
fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() ###Output _____no_output_____ ###Markdown Choosing the correct fonts ###Code from tueplots import fonts, figsizes import matplotlib.pyplot as plt # Increase the resolution of all the plots below plt.rcParams.update({"figure.dpi": 150}) # "Better" figure size to display the font-changes plt.rcParams.update(figsizes.icml2022_half()) ###Output _____no_output_____ ###Markdown Fonts in `tueplots` follow the same interface as the other settings.There are some pre-defined font recipes for a few journals, and they return dictionaries that are compatible with `matplotlib.pyplot.rcParams.update()`. ###Code fonts.neurips2021() ###Output _____no_output_____ ###Markdown Compare the following default font to some of the alternatives that we provide: ###Code fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.jmlr2001_tex(family="serif")) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.jmlr2001_tex(family="sans-serif")) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.neurips2021()) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.neurips2021(family="sans-serif")) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.neurips2021_tex(family="sans-serif")) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.neurips2021_tex(family="serif")) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.beamer_moml()) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() with plt.rc_context(fonts.icml2022()): fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() with plt.rc_context(fonts.icml2022_tex(family="sans-serif")): fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() with plt.rc_context(fonts.icml2022_tex(family="serif")): fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel $\int_a^b f(x) dx$") ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$") plt.show() plt.rcParams.update(fonts.aistats2022_tex(family="serif")) fig, ax = plt.subplots() 
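###Markdown Note (added example, not part of the original notebook): the recipes above mutate the global `rcParams`, so one journal's fonts stay active for every later figure unless you reset them. A minimal sketch of switching recipes cleanly is shown below; it only uses `matplotlib.pyplot.rcdefaults()` and the `fonts.neurips2021(family="sans-serif")` recipe already demonstrated above. `plt.rc_context(...)`, also shown above, remains the non-mutating alternative. ###Code
# Hedged sketch: reset the global style before applying a different recipe
import matplotlib.pyplot as plt
from tueplots import fonts

plt.rcdefaults()                                             # back to matplotlib's defaults
plt.rcParams.update({"figure.dpi": 150})                     # re-apply the resolution used in this notebook
plt.rcParams.update(fonts.neurips2021(family="sans-serif"))  # then apply the next font recipe

fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Fresh defaults + NeurIPS sans-serif")
plt.show()
###Output _____no_output_____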
end_to_end/4-deploy-run-inference-e2e.ipynb
###Markdown Part 4 : Deploy, Run Inference, Interpret Inference [Overview](./0-AutoClaimFraudDetection.ipynb)* [Notebook 0 : Overview, Architecture and Data Exploration](./0-AutoClaimFraudDetection.ipynb)* [Notebook 1: Data Prep, Process, Store Features](./1-data-prep-e2e.ipynb)* [Notebook 2: Train, Check Bias, Tune, Record Lineage, and Register a Model](./2-lineage-train-assess-bias-tune-registry-e2e.ipynb)* [Notebook 3: Mitigate Bias, Train New Model, Store in Registry](./3-mitigate-bias-train-model2-registry-e2e.ipynb)* **[Notebook 4: Deploy Model, Run Predictions](./4-deploy-run-inference-e2e.ipynb)** * **[Architecture](deploy)** * **[Deploy an approved model and Run Inference via Feature Store](deploy-model)** * **[Create a Predictor](predictor)** * **[Run Predictions from Online FeatureStore](run-predictions)*** [Notebook 5 : Create and Run an End-to-End Pipeline to Deploy the Model](./5-pipeline-e2e.ipynb) In this section of the end to end use case, we will deploy the mitigated model that is the end-product of this fraud detection use-case. We will show how to run inference and also how to use Clarify to interpret or "explain" the model. Install required and/or update third-party libraries ###Code !python -m pip install -Uq pip !python -m pip install -q awswrangler==2.2.0 imbalanced-learn==0.7.0 sagemaker==2.41.0 boto3==1.17.70 ###Output _____no_output_____ ###Markdown Load stored variablesRun the cell below to load any prevously created variables. You should see a print-out of the existing variables. If you don't see anything you may need to create them again or it may be your first time running this notebook. ###Code %store -r %store ###Output _____no_output_____ ###Markdown **Important: You must have run the previous sequential notebooks to retrieve variables using the StoreMagic command.** Import libraries ###Code import json import time import boto3 import sagemaker import numpy as np import pandas as pd import awswrangler as wr ###Output _____no_output_____ ###Markdown Set region, boto3 and SageMaker SDK variables ###Code # You can change this to a region of your choice import sagemaker region = sagemaker.Session().boto_region_name print("Using AWS Region: {}".format(region)) boto3.setup_default_session(region_name=region) boto_session = boto3.Session(region_name=region) s3_client = boto3.client("s3", region_name=region) sagemaker_boto_client = boto_session.client("sagemaker") sagemaker_session = sagemaker.session.Session( boto_session=boto_session, sagemaker_client=sagemaker_boto_client ) sagemaker_role = sagemaker.get_execution_role() account_id = boto3.client("sts").get_caller_identity()["Account"] # variables used for parameterizing the notebook run endpoint_name = f"{model_2_name}-endpoint" endpoint_instance_count = 1 endpoint_instance_type = "ml.m4.xlarge" predictor_instance_count = 1 predictor_instance_type = "ml.c5.xlarge" batch_transform_instance_count = 1 batch_transform_instance_type = "ml.c5.xlarge" ###Output _____no_output_____ ###Markdown Architecture for this ML Lifecycle Stage : Train, Check Bias, Tune, Record Lineage, Register Model[overview](overview-4)![train-assess-tune-register](./images/e2e-3-pipeline-v3b.png) Deploy an approved model and make prediction via Feature Store[overview](overview-4) Approve the second modelIn the real-life MLOps lifecycle, a model package gets approved after evaluation by data scientists, subject matter experts and auditors. 
###Code second_model_package = sagemaker_boto_client.list_model_packages(ModelPackageGroupName=mpg_name)[ "ModelPackageSummaryList" ][0] model_package_update = { "ModelPackageArn": second_model_package["ModelPackageArn"], "ModelApprovalStatus": "Approved", } update_response = sagemaker_boto_client.update_model_package(**model_package_update) ###Output _____no_output_____ ###Markdown Create an endpoint config and an endpointDeploy the endpoint. This might take about 8minutes. ###Code primary_container = {'ModelPackageName': second_model_package['ModelPackageArn']} endpoint_config_name=f'{model_2_name}-endpoint-config' existing_configs = len(sagemaker_boto_client.list_endpoint_configs(NameContains=endpoint_config_name, MaxResults = 30)['EndpointConfigs']) if existing_configs == 0: create_ep_config_response = sagemaker_boto_client.create_endpoint_config( EndpointConfigName=endpoint_config_name, ProductionVariants=[{ 'InstanceType': endpoint_instance_type, 'InitialVariantWeight': 1, 'InitialInstanceCount': endpoint_instance_count, 'ModelName': model_2_name, 'VariantName': 'AllTraffic' }] ) %store endpoint_config_name existing_endpoints = sagemaker_boto_client.list_endpoints(NameContains=endpoint_name, MaxResults = 30)['Endpoints'] if not existing_endpoints: create_endpoint_response = sagemaker_boto_client.create_endpoint( EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name) %store endpoint_name endpoint_info = sagemaker_boto_client.describe_endpoint(EndpointName=endpoint_name) endpoint_status = endpoint_info['EndpointStatus'] while endpoint_status == 'Creating': endpoint_info = sagemaker_boto_client.describe_endpoint(EndpointName=endpoint_name) endpoint_status = endpoint_info['EndpointStatus'] print('Endpoint status:', endpoint_status) if endpoint_status == 'Creating': time.sleep(60) ###Output _____no_output_____ ###Markdown Create a predictor ###Code predictor = sagemaker.predictor.Predictor( endpoint_name=endpoint_name, sagemaker_session=sagemaker_session ) ###Output _____no_output_____ ###Markdown Sample a claim from the test data ###Code dataset = pd.read_csv("data/dataset.csv") train = dataset.sample(frac=0.8, random_state=0) test = dataset.drop(train.index) sample_policy_id = int(test.sample(1)["policy_id"]) test.info() ###Output _____no_output_____ ###Markdown Get sample's claim data from online feature storeThis will simulate getting data in real-time from a customer's insurance claim submission. 
###Code featurestore_runtime = boto_session.client( service_name="sagemaker-featurestore-runtime", region_name=region ) feature_store_session = sagemaker.Session( boto_session=boto_session, sagemaker_client=sagemaker_boto_client, sagemaker_featurestore_runtime_client=featurestore_runtime, ) ###Output _____no_output_____ ###Markdown Run Predictions on Multiple Claims[overview](overview-4) ###Code import datetime as datetime timer = [] MAXRECS = 100 def barrage_of_inference(): sample_policy_id = int(test.sample(1)["policy_id"]) temp_fg_name = "fraud-detect-demo-claims" claims_response = featurestore_runtime.get_record( FeatureGroupName=temp_fg_name, RecordIdentifierValueAsString=str(sample_policy_id) ) if claims_response.get("Record"): claims_record = claims_response["Record"] claims_df = pd.DataFrame(claims_record).set_index("FeatureName") else: print("No Record returned / Record Key \n") t0 = datetime.datetime.now() customers_response = featurestore_runtime.get_record( FeatureGroupName=customers_fg_name, RecordIdentifierValueAsString=str(sample_policy_id) ) t1 = datetime.datetime.now() customer_record = customers_response["Record"] customer_df = pd.DataFrame(customer_record).set_index("FeatureName") blended_df = pd.concat([claims_df, customer_df]).loc[col_order].drop("fraud") data_input = ",".join(blended_df["ValueAsString"]) results = predictor.predict(data_input, initial_args={"ContentType": "text/csv"}) prediction = json.loads(results) # print (f'Probablitity the claim from policy {int(sample_policy_id)} is fraudulent:', prediction) arr = t1 - t0 minutes, seconds = divmod(arr.total_seconds(), 60) timer.append(seconds) # print (prediction, " done in {} ".format(seconds)) return sample_policy_id, prediction for i in range(MAXRECS): sample_policy_id, prediction = barrage_of_inference() print(f"Probablitity the claim from policy {int(sample_policy_id)} is fraudulent:", prediction) timer ###Output _____no_output_____ ###Markdown Note: the above "timer" records the first call and then subsequent calls to the online Feature Store ###Code import statistics import numpy as np statistics.mean(timer) arr = np.array(timer) print( "p95: {}, p99: {}, mean: {} for {} distinct feature store gets".format( np.percentile(arr, 95), np.percentile(arr, 99), np.mean(arr), MAXRECS ) ) ###Output _____no_output_____ ###Markdown Pull customer data from Customers feature groupWhen a customer submits an insurance claim online for instant approval, the insurance company will need to pull customer-specific data from the online feature store to add to the claim data as input for a model prediction. ###Code customers_response = featurestore_runtime.get_record( FeatureGroupName=customers_fg_name, RecordIdentifierValueAsString=str(sample_policy_id) ) customer_record = customers_response["Record"] customer_df = pd.DataFrame(customer_record).set_index("FeatureName") claims_response = featurestore_runtime.get_record( FeatureGroupName=claims_fg_name, RecordIdentifierValueAsString=str(sample_policy_id) ) claims_record = claims_response["Record"] claims_df = pd.DataFrame(claims_record).set_index("FeatureName") ###Output _____no_output_____ ###Markdown Format the datapointThe datapoint must match the exact input format as the model was trained--with all features in the correct order. In this example, the `col_order` variable was saved when you created the train and test datasets earlier in the guide. 
###Code blended_df = pd.concat([claims_df, customer_df]).loc[col_order].drop("fraud") data_input = ",".join(blended_df["ValueAsString"]) ###Output _____no_output_____ ###Markdown Make prediction ###Code results = predictor.predict(data_input, initial_args={"ContentType": "text/csv"}) prediction = json.loads(results) print(f"Probability the claim from policy {int(sample_policy_id)} is fraudulent:", prediction) ###Output _____no_output_____
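###Markdown Clean up (added sketch, not part of the original notebook): the real-time endpoint created above keeps accruing hosting charges until it is deleted. Assuming the variables `endpoint_name`, `endpoint_config_name`, and `model_2_name` from the earlier cells are still defined, the standard SageMaker `boto3` calls below remove the hosting resources once you are done experimenting. ###Code
# Hedged cleanup sketch -- delete the real-time endpoint, its config, and the model
# so this demo stops incurring hosting charges.
sagemaker_boto_client.delete_endpoint(EndpointName=endpoint_name)
sagemaker_boto_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
sagemaker_boto_client.delete_model(ModelName=model_2_name)
###Output _____no_output_____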
code/4_basic_3d_GAN/2_analysis/3_inspect_images.ipynb
###Markdown Inspect imagesJan 27, 2021 ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt import subprocess as sp import sys import os import glob from matplotlib.colors import LogNorm, PowerNorm, Normalize import seaborn as sns import itertools %matplotlib widget ### Transformation functions for image pixel values def f_transform(x): return 2.*x/(x + 4.) - 1. def f_invtransform(s): return 4.*(1. + s)/(1. - s) ## Grid plot for 2D images def f_plot_grid(arr,cols=16,fig_size=(15,5)): ''' Plot a grid of images ''' size=arr.shape[0] rows=int(np.ceil(size/cols)) print(rows,cols) fig,axarr=plt.subplots(rows,cols,figsize=fig_size, gridspec_kw = {'wspace':0, 'hspace':0}) if rows==1: axarr=np.reshape(axarr,(rows,cols)) if cols==1: axarr=np.reshape(axarr,(rows,cols)) for i in range(min(rows*cols,size)): row,col=int(i/cols),i%cols try: axarr[row,col].imshow(arr[i],origin='lower', cmap='YlGn', extent = [0, 128, 0, 128], norm=Normalize(vmin=-1., vmax=1.)) # Drop axis label except Exception as e: print('Exception:',e) pass temp=plt.setp([a.get_xticklabels() for a in axarr[:-1,:].flatten()], visible=False) temp=plt.setp([a.get_yticklabels() for a in axarr[:,1:].flatten()], visible=False) ## Pixel intensity functions def f_batch_histogram(img_arr,bins,norm,hist_range): ''' Compute histogram statistics for a batch of images''' ## Extracting the range. This is important to ensure that the different histograms are compared correctly if hist_range==None : ulim,llim=np.max(img_arr),np.min(img_arr) else: ulim,llim=hist_range[1],hist_range[0] # print(ulim,llim) ### array of histogram of each image hist_arr=np.array([np.histogram(arr.flatten(), bins=bins, range=(llim,ulim), density=norm) for arr in img_arr],dtype=object) ## range is important hist=np.stack(hist_arr[:,0]) # First element is histogram array # print(hist.shape) bin_list=np.stack(hist_arr[:,1]) # Second element is bin value ### Compute statistics over histograms of individual images mean,err=np.mean(hist,axis=0),np.std(hist,axis=0)/np.sqrt(hist.shape[0]) bin_edges=bin_list[0] centers = (bin_edges[:-1] + bin_edges[1:]) / 2 return mean,err,centers def f_pixel_intensity(img_arr,bins=25,label='validation',mode='avg',normalize=False,log_scale=True,plot=True, hist_range=None): ''' Module to compute and plot histogram for pixel intensity of images Has 2 modes : simple and avg simple mode: No errors. Just flatten the input image array and compute histogram of full data avg mode(Default) : - Compute histogram for each image in the image array - Compute errors across each histogram ''' norm=normalize # Whether to normalize the histogram if plot: plt.figure() plt.xlabel('Pixel value') plt.ylabel('Counts') plt.title('Pixel Intensity Histogram') if log_scale: plt.yscale('log') if mode=='simple': hist, bin_edges = np.histogram(img_arr.flatten(), bins=bins, density=norm, range=hist_range) centers = (bin_edges[:-1] + bin_edges[1:]) / 2 if plot: plt.errorbar(centers, hist, fmt='o-', label=label) return hist,None elif mode=='avg': ### Compute histogram for each image. mean,err,centers=f_batch_histogram(img_arr,bins,norm,hist_range) if plot: plt.errorbar(centers,mean,yerr=err,fmt='o-',label=label) return mean,err def f_compare_pixel_intensity(img_lst,label_lst=['img1','img2'],bkgnd_arr=[],log_scale=True, normalize=True, mode='avg',bins=25, hist_range=None): ''' Module to compute and plot histogram for pixel intensity of images Has 2 modes : simple and avg simple mode: No errors. 
Just flatten the input image array and compute histogram of full data avg mode(Default) : - Compute histogram for each image in the image array - Compute errors across each histogram bkgnd_arr : histogram of this array is plotting with +/- sigma band ''' norm=normalize # Whether to normalize the histogram plt.figure() ## Plot background distribution if len(bkgnd_arr): if mode=='simple': hist, bin_edges = np.histogram(bkgnd_arr.flatten(), bins=bins, density=norm, range=hist_range) centers = (bin_edges[:-1] + bin_edges[1:]) / 2 plt.errorbar(centers, hist, color='k',marker='*',linestyle=':', label='bkgnd') elif mode=='avg': ### Compute histogram for each image. mean,err,centers=f_batch_histogram(bkgnd_arr,bins,norm,hist_range) plt.plot(centers,mean,linestyle=':',color='k',label='bkgnd') plt.fill_between(centers, mean - err, mean + err, color='k', alpha=0.4) ### Plot the rest of the datasets for img,label,mrkr in zip(img_lst,label_lst,itertools.cycle('>^*sDHPdpx_')): if mode=='simple': hist, bin_edges = np.histogram(img.flatten(), bins=bins, density=norm, range=hist_range) centers = (bin_edges[:-1] + bin_edges[1:]) / 2 plt.errorbar(centers, hist, fmt=mrkr+'-', label=label) elif mode=='avg': ### Compute histogram for each image. mean,err,centers=f_batch_histogram(img,bins,norm,hist_range) # print('Centers',centers) plt.errorbar(centers,mean,yerr=err,fmt=mrkr+'-',label=label) if log_scale: plt.yscale('log') # plt.xscale('symlog',linthreshx=50) plt.legend() plt.xlabel('Pixel value') plt.ylabel('Counts') plt.title('Pixel Intensity Histogram') ### Spectrum plot functions ## numpy code def f_radial_profile_3d(data, center=(None,None)): ''' Module to compute radial profile of a 2D image ''' z, y, x = np.indices((data.shape)) # Get a grid of x and y values center=[] if not center: center = np.array([(x.max()-x.min())/2.0, (y.max()-y.min())/2.0, (z.max()-z.min())/2.0]) # compute centers # get radial values of every pair of points r = np.sqrt((x - center[0])**2 + (y - center[1])**2+ + (z - center[2])**2) r = r.astype(np.int) # Compute histogram of r values tbin = np.bincount(r.ravel(), data.ravel()) nr = np.bincount(r.ravel()) radialprofile = tbin / nr return radialprofile[1:-1] def f_compute_spectrum_3d(arr): ''' compute spectrum for a 3D image ''' # GLOBAL_MEAN=1.0 # arr=((arr - GLOBAL_MEAN)/GLOBAL_MEAN) y1=np.fft.fftn(arr) y1=np.fft.fftshift(y1) # print(y1.shape) y2=abs(y1)**2 z1=f_radial_profile_3d(y2) return(z1) def f_batch_spectrum_3d(arr): batch_pk=np.array([f_compute_spectrum_3d(i) for i in arr]) return batch_pk ### Code ### def f_image_spectrum_3d(x,num_channels): ''' Compute spectrum when image has a channel index Data has to be in the form (batch,channel,x,y) ''' mean=[[] for i in range(num_channels)] sdev=[[] for i in range(num_channels)] for i in range(num_channels): arr=x[:,i,:,:,:] # print(i,arr.shape) batch_pk=f_batch_spectrum_3d(arr) # print(batch_pk) mean[i]=np.mean(batch_pk,axis=0) sdev[i]=np.var(batch_pk,axis=0) mean=np.array(mean) sdev=np.array(sdev) return mean,sdev def f_plot_spectrum_3d(img_arr,plot=False,label='input',log_scale=True): ''' Module to compute Average of the 1D spectrum for a batch of 3d images ''' num = img_arr.shape[0] Pk = f_batch_spectrum_3d(img_arr) #mean,std = np.mean(Pk, axis=0),np.std(Pk, axis=0)/np.sqrt(Pk.shape[0]) mean,std = np.mean(Pk, axis=0),np.std(Pk, axis=0) k=np.arange(len(mean)) if plot: plt.figure() plt.plot(k, mean, 'k:') plt.plot(k, mean + std, 'k-',label=label) plt.plot(k, mean - std, 'k-') # plt.xscale('log') if log_scale: plt.yscale('log') 
plt.ylabel(r'$P(k)$') plt.xlabel(r'$k$') plt.title('Power Spectrum') plt.legend() return mean,std def f_compare_spectrum_3d(img_lst,label_lst=['img1','img2'],bkgnd_arr=[],log_scale=True): ''' Compare the spectrum of 2 sets s: img_lst contains the set of images arrays, Each is of the form (num_images,height,width) label_lst contains the labels used in the plot ''' plt.figure() ## Plot background distribution if len(bkgnd_arr): Pk= f_batch_spectrum_3d(bkgnd_arr) mean,err = np.mean(Pk, axis=0),np.std(Pk, axis=0)/np.sqrt(Pk.shape[0]) k=np.arange(len(mean)) plt.plot(k, mean,color='k',linestyle='-',label='bkgnd') plt.fill_between(k, mean - err, mean + err, color='k',alpha=0.8) for img_arr,label,mrkr in zip(img_lst,label_lst,itertools.cycle('>^*sDHPdpx_')): Pk= f_batch_spectrum_3d(img_arr) mean,err = np.mean(Pk, axis=0),np.std(Pk, axis=0)/np.sqrt(Pk.shape[0]) k=np.arange(len(mean)) # print(mean.shape,std.shape) plt.fill_between(k, mean - err, mean + err, alpha=0.4) plt.plot(k, mean, marker=mrkr, linestyle=':',label=label) if log_scale: plt.yscale('log') plt.ylabel(r'$P(k)$') plt.xlabel(r'$k$') plt.title('Power Spectrum') plt.legend() ###Output _____no_output_____ ###Markdown Main code Read data ###Code fname='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/raw_data/3d_data/dataset4_smoothing_const_params_128cube/full_with_smoothing_1.npy' a1=np.load(fname,mmap_mode='r')[:100] ## Array a1 should have dimensions: (num_batches, num_channels, xsize,ysize,zsize) print(a1.shape) ### Create two smaller arrays for comparison. Take only 1st channel arr1=a1[:50,0,:,:];arr2=a1[50:100,0,:,:] print(arr1.shape,arr2.shape) ###Output (100, 1, 128, 128, 128) (50, 128, 128, 128) (50, 128, 128, 128) ###Markdown Plot 2D grid ###Code img=arr1[:16,:,:,0] f_plot_grid(img,cols=8,fig_size=(12,3)) ###Output 2 8 ###Markdown Pixel intensity histogram ###Code ## Creating lists to make comparison plots. img_lst=[arr1,arr2] label_lst=['1','2'] f_compare_pixel_intensity(img_lst,label_lst=label_lst,bkgnd_arr=[],log_scale=True, normalize=True, mode='avg',bins=25, hist_range=None) ###Output _____no_output_____ ###Markdown Plot spectrum ###Code img_lst=[arr1[:5],arr2[20:45]] label_lst=['1','2'] f_compare_spectrum_3d(img_lst) ! jupyter nbconvert --to script 3_inspect_images.ipynb ###Output [NbConvertApp] Converting notebook 3_inspect_images.ipynb to script [NbConvertApp] Writing 10646 bytes to 3_inspect_images.py
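###Markdown Added sanity check (not in the original notebook): the pixel transforms `f_transform` and `f_invtransform` defined at the top of this notebook should be exact inverses for non-negative pixel values, and `f_transform` should map them into the `[-1, 1)` range assumed by `Normalize(vmin=-1., vmax=1.)` in `f_plot_grid`. A quick check using only the functions defined above: ###Code
# Verify that f_invtransform undoes f_transform on a sample of non-negative values
import numpy as np

x = np.linspace(0.0, 2000.0, 7)
s = f_transform(x)        # compressed into [-1, 1)
x_back = f_invtransform(s)  # should recover x exactly (up to floating point)

print("range of s:", s.min(), s.max())
assert np.allclose(x, x_back), "transform pair is not mutually consistent"
###Output _____no_output_____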
FeatureCollection/creating_feature.ipynb
###Markdown View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time. ###Code # %%capture # !pip install earthengine-api # !pip install geehydro ###Output _____no_output_____ ###Markdown Import libraries ###Code import ee import folium import geehydro ###Output _____no_output_____ ###Markdown Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error. ###Code # ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`. ###Code Map = folium.Map(location=[40, -100], zoom_start=4) Map.setOptions('HYBRID') ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code # Create an ee.Geometry. polygon = ee.Geometry.Polygon([ [[-35, -10], [35, -10], [35, 10], [-35, 10], [-35, -10]] ]) # Create a Feature from the Geometry. polyFeature = ee.Feature(polygon, {'foo': 42, 'bar': 'tart'}) print(polyFeature.getInfo()) Map.addLayer(polyFeature, {}, 'feature') ###Output {'type': 'Feature', 'geometry': {'type': 'Polygon', 'coordinates': [[[-35, -10], [35, -10], [35, 10], [-35, 10], [-35, -10]]]}, 'properties': {'bar': 'tart', 'foo': 42}} ###Markdown Display Earth Engine data layers ###Code Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True) Map ###Output _____no_output_____ ###Markdown View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. 
If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving). ###Code # Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import geemap.eefolium as geemap except: import geemap # Authenticates and initializes Earth Engine import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function. ###Code Map = geemap.Map(center=[40,-100], zoom=4) Map ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code # Add Earth Engine dataset # Create an ee.Geometry. polygon = ee.Geometry.Polygon([ [[-35, -10], [35, -10], [35, 10], [-35, 10], [-35, -10]] ]) # Create a Feature from the Geometry. polyFeature = ee.Feature(polygon, {'foo': 42, 'bar': 'tart'}) print(polyFeature.getInfo()) Map.addLayer(polyFeature, {}, 'feature') ###Output _____no_output_____ ###Markdown Display Earth Engine data layers ###Code Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map ###Output _____no_output_____ ###Markdown Pydeck Earth Engine IntroductionThis is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:```conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -ysource activate pydeck-eejupyter nbextension install --sys-prefix --symlink --overwrite --py pydeckjupyter nbextension enable --sys-prefix --py pydeck```then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages: ###Code from pydeck_earthengine_layers import EarthEngineLayer import pydeck as pdk import requests import ee ###Output _____no_output_____ ###Markdown AuthenticationUsing Earth Engine requires authentication. 
If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication. ###Code try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create MapNext it's time to create a map. Here we create an `ee.Image` object ###Code # Initialize objects ee_layers = [] view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45) # %% # Add Earth Engine dataset # Create an ee.Geometry. polygon = ee.Geometry.Polygon([ [[-35, -10], [35, -10], [35, 10], [-35, 10], [-35, -10]] ]) # Create a Feature from the Geometry. polyFeature = ee.Feature(polygon, {'foo': 42, 'bar': 'tart'}) print(polyFeature.getInfo()) ee_layers.append(EarthEngineLayer(ee_object=polyFeature, vis_params={})) ###Output _____no_output_____ ###Markdown Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map: ###Code r = pdk.Deck(layers=ee_layers, initial_view_state=view_state) r.show() ###Output _____no_output_____ ###Markdown View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. ###Code # %%capture # !pip install earthengine-api # !pip install geehydro ###Output _____no_output_____ ###Markdown Import libraries ###Code import ee import folium import geehydro ###Output _____no_output_____ ###Markdown Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for this first time or if you are getting an authentication error. ###Code # ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`. ###Code Map = folium.Map(location=[40, -100], zoom_start=4) Map.setOptions('HYBRID') ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code # Create an ee.Geometry. polygon = ee.Geometry.Polygon([ [[-35, -10], [35, -10], [35, 10], [-35, 10], [-35, -10]] ]) # Create a Feature from the Geometry. 
polyFeature = ee.Feature(polygon, {'foo': 42, 'bar': 'tart'}) print(polyFeature.getInfo()) Map.addLayer(polyFeature, {}, 'feature') ###Output {'type': 'Feature', 'geometry': {'type': 'Polygon', 'coordinates': [[[-35, -10], [35, -10], [35, 10], [-35, 10], [-35, -10]]]}, 'properties': {'bar': 'tart', 'foo': 42}} ###Markdown Display Earth Engine data layers ###Code Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True) Map ###Output _____no_output_____ ###Markdown View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically install its dependencies, including earthengine-api and folium. ###Code import subprocess try: import geehydro except ImportError: print('geehydro package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro']) ###Output _____no_output_____ ###Markdown Import libraries ###Code import ee import folium import geehydro ###Output _____no_output_____ ###Markdown Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. ###Code try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`. ###Code Map = folium.Map(location=[40, -100], zoom_start=4) Map.setOptions('HYBRID') ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code # Create an ee.Geometry. polygon = ee.Geometry.Polygon([ [[-35, -10], [35, -10], [35, 10], [-35, 10], [-35, -10]] ]) # Create a Feature from the Geometry. polyFeature = ee.Feature(polygon, {'foo': 42, 'bar': 'tart'}) print(polyFeature.getInfo()) Map.addLayer(polyFeature, {}, 'feature') ###Output {'type': 'Feature', 'geometry': {'type': 'Polygon', 'coordinates': [[[-35, -10], [35, -10], [35, 10], [-35, 10], [-35, -10]]]}, 'properties': {'bar': 'tart', 'foo': 42}} ###Markdown Display Earth Engine data layers ###Code Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True) Map ###Output _____no_output_____ ###Markdown View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). 
The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving). ###Code # Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import geemap.eefolium as emap except: import geemap as emap # Authenticates and initializes Earth Engine import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function. ###Code Map = emap.Map(center=[40,-100], zoom=4) Map.add_basemap('ROADMAP') # Add Google Map Map ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code # Add Earth Engine dataset # Create an ee.Geometry. polygon = ee.Geometry.Polygon([ [[-35, -10], [35, -10], [35, 10], [-35, 10], [-35, -10]] ]) # Create a Feature from the Geometry. polyFeature = ee.Feature(polygon, {'foo': 42, 'bar': 'tart'}) print(polyFeature.getInfo()) Map.addLayer(polyFeature, {}, 'feature') ###Output _____no_output_____ ###Markdown Display Earth Engine data layers ###Code Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map ###Output _____no_output_____ ###Markdown View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). 
The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet. ###Code # Installs geemap package import subprocess try: import geemap except ImportError: print('Installing geemap ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) import ee import geemap ###Output _____no_output_____ ###Markdown Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function. ###Code Map = geemap.Map(center=[40,-100], zoom=4) Map ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code # Add Earth Engine dataset # Create an ee.Geometry. polygon = ee.Geometry.Polygon([ [[-35, -10], [35, -10], [35, 10], [-35, 10], [-35, -10]] ]) # Create a Feature from the Geometry. polyFeature = ee.Feature(polygon, {'foo': 42, 'bar': 'tart'}) print(polyFeature.getInfo()) Map.addLayer(polyFeature, {}, 'feature') ###Output _____no_output_____ ###Markdown Display Earth Engine data layers ###Code Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map ###Output _____no_output_____ ###Markdown View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving). 
###Code # Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import geemap.eefolium as emap except: import geemap as emap # Authenticates and initializes Earth Engine import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function. ###Code Map = emap.Map(center=[40,-100], zoom=4) Map.add_basemap('ROADMAP') # Add Google Map Map ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code # Add Earth Engine dataset # Create an ee.Geometry. polygon = ee.Geometry.Polygon([ [[-35, -10], [35, -10], [35, 10], [-35, 10], [-35, -10]] ]) # Create a Feature from the Geometry. polyFeature = ee.Feature(polygon, {'foo': 42, 'bar': 'tart'}) print(polyFeature.getInfo()) Map.addLayer(polyFeature, {}, 'feature') ###Output _____no_output_____ ###Markdown Display Earth Engine data layers ###Code Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map ###Output _____no_output_____
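###Markdown A natural follow-on (a sketch, not part of the tutorial above): several features that share property keys can be grouped into an `ee.FeatureCollection`, which can then be filtered on those properties and added to the map like any other layer. The point coordinates and property values below are arbitrary illustrations. ###Code
# Sketch: build a FeatureCollection from a few hand-made features and filter it.
features = [
    ee.Feature(ee.Geometry.Point([-62.54, -27.32]), {'name': 'A', 'size': 100}),
    ee.Feature(ee.Geometry.Point([-69.18, -10.64]), {'name': 'B', 'size': 200}),
    ee.Feature(ee.Geometry.Point([-45.98, -18.09]), {'name': 'C', 'size': 300}),
]
fc = ee.FeatureCollection(features)
print(fc.size().getInfo())                                     # 3
print(fc.filter(ee.Filter.eq('name', 'B')).first().getInfo())  # only feature B
Map.addLayer(fc, {}, 'point collection')
###Output _____no_output_____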
example/WallStreetLectures/ipython/lecture2_DemoStrategy.ipynb
###Markdown DemoStrategy 视频中介绍了针对四只股票的等权重投资策略 本段代码展示了利用quantOS系统进行策略回测及仿真交易的具体步骤。 1. 环境设置 ###Code # -*- encoding: utf-8 -*- import time import pandas as pd import numpy as np from jaqs.data import DataApi from jaqs.data import RemoteDataService from jaqs.trade import AlphaBacktestInstance from jaqs.trade import PortfolioManager #from jaqs.trade import RealTimeTradeApi import jaqs.util as jutil import jaqs.trade.analyze as ana from jaqs.trade import AlphaStrategy from jaqs.trade import AlphaTradeApi from jaqs.trade import model from jaqs.data import DataView # 设置文件存储路径 dataview_dir_path = 'demoStrategy/dataview' backtest_result_dir_path = 'demoStrategy' # 设置服务器地址、用户名密码 # 例如: # data_config = { # "remote.data.address": "tcp://data.tushare.org:8910", # "remote.data.username": '18688888888', # "remote.data.password": '23sdjfk209d0fs9dejkl2j3k4j9d0fsdf'} # 如果没有使用quantos金融终端,请自行替换phone,token内容 import os phone = os.environ.get("QUANTOS_USER") token = os.environ.get("QUANTOS_TOKEN") data_config = { "remote.data.address": "tcp://data.quantos.org:8910", "remote.data.username": phone, "remote.data.password": token} trade_config = { "remote.trade.address": "tcp://gw.quantos.org:8901", "remote.trade.username": phone, "remote.trade.password": token} ###Output _____no_output_____ ###Markdown 修改自己的策略号(仿真交易使用) ###Code # 设置Strategy number, 根据自己的实际情况设置 # 例如:StrategyNo = 1043 StrategyNo = 1008 ###Output _____no_output_____ ###Markdown 2. 参数设置 ###Code # ------------------------------------------------------------------------------- # 设置目标股票、业绩基准、权重、时间 # ------------------------------------------------------------------------------- symbol_weights = {'600519.SH': 0.25, '600036.SH': 0.25, '601318.SH': 0.25, '000651.SZ': 0.25} benchmark = '000300.SH' my_symbols = ','.join(symbol_weights.keys()) start_date = 20170201 end_date = 20171001 # 定义权重函数 def stockWeight(context, user_options=None): return pd.Series(symbol_weights) ###Output _____no_output_____ ###Markdown 3. 
回测 ###Code # ------------------------------------------------------------------------------- # Main code 这个代码框不需要修改 # ------------------------------------------------------------------------------- def test_save_dataview(): ds = RemoteDataService() ds.init_from_config(data_config) dv = DataView() props = {'start_date': start_date, 'end_date': end_date, 'fields': 'sw1', 'symbol': my_symbols, 'freq': 1} dv.init_from_config(props, ds) dv.prepare_data() # set the benchmark res, _ = ds.daily(benchmark, start_date=dv.start_date, end_date=dv.end_date) dv._data_benchmark = res.set_index('trade_date').loc[:, ['close']] dv.save_dataview(folder_path=dataview_dir_path) def test_alpha_strategy_dataview(): dv = DataView() dv.load_dataview(folder_path=dataview_dir_path) props = { "symbol": dv.symbol, "universe": ','.join(dv.symbol), "start_date": dv.start_date, "end_date": dv.end_date, "period": "week", "days_delay": 0, "init_balance": 1e7, "position_ratio": 1.0, "commission_rate": 2E-4 # 手续费万2 } props.update(data_config) props.update(trade_config) trade_api = AlphaTradeApi() signal_model = model.FactorSignalModel() signal_model.add_signal('stockWeight', stockWeight) strategy = AlphaStrategy(signal_model=signal_model, pc_method='factor_value_weight') pm = PortfolioManager() bt = AlphaBacktestInstance() context = model.Context(dataview=dv, instance=bt, strategy=strategy, trade_api=trade_api, pm=pm) signal_model.register_context(context) bt.init_from_config(props) bt.run_alpha() bt.save_results(folder_path=backtest_result_dir_path) def test_backtest_analyze(): ta = ana.AlphaAnalyzer() dv = DataView() dv.load_dataview(folder_path=dataview_dir_path) ta.initialize(dataview=dv, file_folder=backtest_result_dir_path) ta.do_analyze(result_dir=backtest_result_dir_path, selected_sec=ta.universe, brinson_group=None) # 运行这里跑回测 test_save_dataview() test_alpha_strategy_dataview() test_backtest_analyze() ###Output _____no_output_____ ###Markdown 回测显示运行完成后,报告可从上面对话框最后一行的地址中找到 `HTML report: ...\demoStrategy\report.html` 4. 
仿真交易 4.1 初始化交易API ###Code from jaqs.trade.tradeapi import TradeApi tapi = TradeApi(trade_config['remote.trade.address']) user_info, msg = tapi.login(trade_config['remote.trade.username'], trade_config['remote.trade.password']) tapi.use_strategy(StrategyNo) #改成用户自己的 strategy号 res, msg = tapi.query_account() money = res.loc[0, 'enable_balance'] print("Balance we have: {}".format(money)) ###Output _____no_output_____ ###Markdown 4.2 构造目标订单 ###Code data_api = DataApi(data_config['remote.data.address']) user_info, msg = data_api.login(phone, token) quotes, msg = data_api.quote('600519.SH,600036.SH,601318.SH,000651.SZ') dic_price = {'600519.SH': quotes['last']['600519.SH'], '600036.SH': quotes['last']['600036.SH'], '601318.SH': quotes['last']['601318.SH'], '000651.SZ': quotes['last']['000651.SZ']} # 每只股票等金额投资,各买入30万元 dic_shares = {k: 100 * np.floor(money * 0.012 * symbol_weights[k] / dic_price[k]/100) for k, _ in symbol_weights.items()} dic_shares orders = [] for symbol in symbol_weights.keys(): o = {'security': symbol, 'price': dic_price[symbol], 'size': dic_shares[symbol], 'action': 'Buy'} orders.append(o) orders ###Output _____no_output_____ ###Markdown 4.3 发送订单买入股票 ###Code # 买入股票 task_id, msg = tapi.place_batch_order(orders) print(task_id) print(msg) ###Output _____no_output_____ ###Markdown 4.4 查询订单信息 ###Code orders, msg = tapi.query_order(task_id) orders # 做空一手沪深300股指期货做对冲 # task_id, msg = tapi.place_order("IF1712.CFE", "Short", 4003.6, 1) # print(task_id) # print(msg) ###Output _____no_output_____ ###Markdown DemoStrategy 视频中介绍了针对四只股票的等权重投资策略 本段代码展示了利用quantOS系统进行策略回测及仿真交易的具体步骤。 1. 环境设置 ###Code # -*- encoding: utf-8 -*- import time import pandas as pd import numpy as np from jaqs.data import DataApi from jaqs.data import RemoteDataService from jaqs.trade import AlphaBacktestInstance from jaqs.trade import PortfolioManager #from jaqs.trade import RealTimeTradeApi import jaqs.util as jutil import jaqs.trade.analyze as ana from jaqs.trade import AlphaStrategy from jaqs.trade import AlphaTradeApi from jaqs.trade import model from jaqs.data import DataView # 设置文件存储路径 dataview_dir_path = 'demoStrategy/dataview' backtest_result_dir_path = 'demoStrategy' # 设置服务器地址、用户名密码 # 例如: # data_config = { # "remote.data.address": "tcp://data.quantos.org:8910", # "remote.data.username": '18688888888', # "remote.data.password": '23sdjfk209d0fs9dejkl2j3k4j9d0fsdf'} # 如果没有使用quantos金融终端,请自行替换phone,token内容 import os phone = os.environ.get("QUANTOS_USER") token = os.environ.get("QUANTOS_TOKEN") data_config = { "remote.data.address": "tcp://data.quantos.org:8910", "remote.data.username": phone, "remote.data.password": token} trade_config = { "remote.trade.address": "tcp://gw.quantos.org:8901", "remote.trade.username": phone, "remote.trade.password": token} ###Output _____no_output_____ ###Markdown 修改自己的策略号(仿真交易使用) ###Code # 设置Strategy number, 根据自己的实际情况设置 # 例如:StrategyNo = 1043 StrategyNo = 1008 ###Output _____no_output_____ ###Markdown 2. 参数设置 ###Code # ------------------------------------------------------------------------------- # 设置目标股票、业绩基准、权重、时间 # ------------------------------------------------------------------------------- symbol_weights = {'600519.SH': 0.25, '600036.SH': 0.25, '601318.SH': 0.25, '000651.SZ': 0.25} benchmark = '000300.SH' my_symbols = ','.join(symbol_weights.keys()) start_date = 20170201 end_date = 20171001 # 定义权重函数 def stockWeight(context, user_options=None): return pd.Series(symbol_weights) ###Output _____no_output_____ ###Markdown 3. 
回测 ###Code # ------------------------------------------------------------------------------- # Main code 这个代码框不需要修改 # ------------------------------------------------------------------------------- def test_save_dataview(): ds = RemoteDataService() ds.init_from_config(data_config) dv = DataView() props = {'start_date': start_date, 'end_date': end_date, 'fields': 'sw1', 'symbol': my_symbols, 'freq': 1} dv.init_from_config(props, ds) dv.prepare_data() # set the benchmark res, _ = ds.daily(benchmark, start_date=dv.start_date, end_date=dv.end_date) dv._data_benchmark = res.set_index('trade_date').loc[:, ['close']] dv.save_dataview(folder_path=dataview_dir_path) def test_alpha_strategy_dataview(): dv = DataView() dv.load_dataview(folder_path=dataview_dir_path) props = { "symbol": dv.symbol, "universe": ','.join(dv.symbol), "start_date": dv.start_date, "end_date": dv.end_date, "period": "week", "days_delay": 0, "init_balance": 1e7, "position_ratio": 1.0, "commission_rate": 2E-4 # 手续费万2 } props.update(data_config) props.update(trade_config) trade_api = AlphaTradeApi() signal_model = model.FactorSignalModel() signal_model.add_signal('stockWeight', stockWeight) strategy = AlphaStrategy(signal_model=signal_model, pc_method='factor_value_weight') pm = PortfolioManager() bt = AlphaBacktestInstance() context = model.Context(dataview=dv, instance=bt, strategy=strategy, trade_api=trade_api, pm=pm) signal_model.register_context(context) bt.init_from_config(props) bt.run_alpha() bt.save_results(folder_path=backtest_result_dir_path) def test_backtest_analyze(): ta = ana.AlphaAnalyzer() dv = DataView() dv.load_dataview(folder_path=dataview_dir_path) ta.initialize(dataview=dv, file_folder=backtest_result_dir_path) ta.do_analyze(result_dir=backtest_result_dir_path, selected_sec=ta.universe, brinson_group=None) # 运行这里跑回测 test_save_dataview() test_alpha_strategy_dataview() test_backtest_analyze() ###Output _____no_output_____ ###Markdown 回测显示运行完成后,报告可从上面对话框最后一行的地址中找到 `HTML report: ...\demoStrategy\report.html` 4. 
仿真交易 4.1 初始化交易API ###Code from jaqs.trade.tradeapi import TradeApi tapi = TradeApi(trade_config['remote.trade.address']) user_info, msg = tapi.login(trade_config['remote.trade.username'], trade_config['remote.trade.password']) tapi.use_strategy(StrategyNo) #改成用户自己的 strategy号 res, msg = tapi.query_account() money = res.loc[0, 'enable_balance'] print("Balance we have: {}".format(money)) ###Output _____no_output_____ ###Markdown 4.2 构造目标订单 ###Code data_api = DataApi(data_config['remote.data.address']) user_info, msg = data_api.login(phone, token) quotes, msg = data_api.quote('600519.SH,600036.SH,601318.SH,000651.SZ') dic_price = {'600519.SH': quotes['last']['600519.SH'], '600036.SH': quotes['last']['600036.SH'], '601318.SH': quotes['last']['601318.SH'], '000651.SZ': quotes['last']['000651.SZ']} # 每只股票等金额投资,各买入30万元 dic_shares = {k: 100 * np.floor(money * 0.012 * symbol_weights[k] / dic_price[k]/100) for k, _ in symbol_weights.items()} dic_shares orders = [] for symbol in symbol_weights.keys(): o = {'security': symbol, 'price': dic_price[symbol], 'size': dic_shares[symbol], 'action': 'Buy'} orders.append(o) orders ###Output _____no_output_____ ###Markdown 4.3 发送订单买入股票 ###Code # 买入股票 task_id, msg = tapi.place_batch_order(orders) print(task_id) print(msg) ###Output _____no_output_____ ###Markdown 4.4 查询订单信息 ###Code orders, msg = tapi.query_order(task_id) orders # 做空一手沪深300股指期货做对冲 # task_id, msg = tapi.place_order("IF1712.CFE", "Short", 4003.6, 1) # print(task_id) # print(msg) ###Output _____no_output_____ ###Markdown DemoStrategy 视频中介绍了针对四只股票的等权重投资策略,本段代码展示了利用quantOS系统进行策略回测及仿真交易的具体步骤。 在程序运行之前,需要您在环境设置中更改以下参数: 1. 将YourPhoneNo.改为您在quantOS网站注册的手机号; 2. 将YourToken改为您的token; 3. 将YourStrategyNo.改为您的策略号。 1. 环境设置 ###Code # -*- encoding: utf-8 -*- import time import pandas as pd import numpy as np from jaqs.data import RemoteDataService from jaqs.trade import AlphaBacktestInstance from jaqs.trade import PortfolioManager #from jaqs.trade import RealTimeTradeApi import jaqs.util as jutil import jaqs.trade.analyze as ana from jaqs.trade import AlphaStrategy from jaqs.trade import AlphaTradeApi from jaqs.trade import model from jaqs.data import DataView # 设置文件存储路径 dataview_dir_path = 'demoStrategy/dataview' backtest_result_dir_path = 'demoStrategy' # 设置服务器地址、用户名密码 # 例如: # data_config = { # "remote.data.address": "tcp://data.tushare.org:8910", # "remote.data.username": '18688888888', # "remote.data.password": '23sdjfk209d0fs9dejkl2j3k4j9d0fsdf'} phone = "xxxx" token = "xxxxxxxxxx" data_config = { "remote.data.address": "tcp://data.tushare.org:8910", "remote.data.username": phone, "remote.data.password": token} trade_config = { "remote.trade.address": "tcp://gw.quantos.org:8901", "remote.trade.username": phone, "remote.trade.password": token} # 设置Strategy number # 例如:StrategyNo = 1043 StrategyNo = 'YourStrategyNo.' ###Output _____no_output_____ ###Markdown 2. 参数设置 ###Code # ------------------------------------------------------------------------------- # 设置目标股票、业绩基准、权重、时间 # ------------------------------------------------------------------------------- symbol_weights = {'600519.SH': 0.25, '600036.SH': 0.25, '601318.SH': 0.25, '000651.SZ': 0.25} benchmark = '000300.SH' my_symbols = ','.join(symbol_weights.keys()) start_date = 20170201 end_date = 20171001 # 定义权重函数 def stockWeight(context, user_options=None): return pd.Series(symbol_weights) ###Output _____no_output_____ ###Markdown 3. 
回测 ###Code # ------------------------------------------------------------------------------- # Main code 这个代码框不需要修改 # ------------------------------------------------------------------------------- def test_save_dataview(): ds = RemoteDataService() ds.init_from_config(data_config) dv = DataView() props = {'start_date': start_date, 'end_date': end_date, 'fields': 'sw1', 'symbol': my_symbols, 'freq': 1} dv.init_from_config(props, ds) dv.prepare_data() # set the benchmark res, _ = ds.daily(benchmark, start_date=dv.start_date, end_date=dv.end_date) dv._data_benchmark = res.set_index('trade_date').loc[:, ['close']] dv.save_dataview(folder_path=dataview_dir_path) def test_alpha_strategy_dataview(): dv = DataView() dv.load_dataview(folder_path=dataview_dir_path) props = { "symbol": dv.symbol, "universe": ','.join(dv.symbol), "start_date": dv.start_date, "end_date": dv.end_date, "period": "week", "days_delay": 0, "init_balance": 1e7, "position_ratio": 1.0, "commission_rate": 2E-4 # 手续费万2 } props.update(data_config) props.update(trade_config) trade_api = AlphaTradeApi() signal_model = model.FactorSignalModel() signal_model.add_signal('stockWeight', stockWeight) strategy = AlphaStrategy(signal_model=signal_model, pc_method='factor_value_weight') pm = PortfolioManager() bt = AlphaBacktestInstance() context = model.Context(dataview=dv, instance=bt, strategy=strategy, trade_api=trade_api, pm=pm) signal_model.register_context(context) bt.init_from_config(props) bt.run_alpha() bt.save_results(folder_path=backtest_result_dir_path) def test_backtest_analyze(): ta = ana.AlphaAnalyzer() dv = DataView() dv.load_dataview(folder_path=dataview_dir_path) ta.initialize(dataview=dv, file_folder=backtest_result_dir_path) ta.do_analyze(result_dir=backtest_result_dir_path, selected_sec=ta.universe, brinson_group=None) # 运行这里跑回测 test_save_dataview() test_alpha_strategy_dataview() test_backtest_analyze() ###Output Begin: DataApi login 18612562791@tcp://data.tushare.org:8910 login success Initialize config success. Query data... Query data - query... NOTE: price adjust method is [post adjust] Query data - daily fields prepared. Query instrument info... Query adj_factor... Query groups (industry)... Data has been successfully prepared. Store data... Dataview has been successfully saved to: C:\Users\jfang\ipython\demoStrategy\dataview You can load it with load_dataview('C:\Users\jfang\ipython\demoStrategy\dataview') Dataview loaded successfully. AlphaStrategy Initialized. 
=======new day 20170203 Before 20170203 re-balance: available cash all (exclude suspensions) = 1.0000e+07 =======new day 20170206 Before 20170206 re-balance: available cash all (exclude suspensions) = 9.9970e+06 =======new day 20170213 Before 20170213 re-balance: available cash all (exclude suspensions) = 1.0272e+07 =======new day 20170220 Before 20170220 re-balance: available cash all (exclude suspensions) = 1.0588e+07 =======new day 20170227 Before 20170227 re-balance: available cash all (exclude suspensions) = 1.0392e+07 =======new day 20170306 Before 20170306 re-balance: available cash all (exclude suspensions) = 1.0421e+07 =======new day 20170313 Before 20170313 re-balance: available cash all (exclude suspensions) = 1.0586e+07 =======new day 20170320 Before 20170320 re-balance: available cash all (exclude suspensions) = 1.0679e+07 =======new day 20170327 Before 20170327 re-balance: available cash all (exclude suspensions) = 1.0875e+07 =======new day 20170405 Before 20170405 re-balance: available cash all (exclude suspensions) = 1.1042e+07 =======new day 20170410 Before 20170410 re-balance: available cash all (exclude suspensions) = 1.0972e+07 =======new day 20170417 Before 20170417 re-balance: available cash all (exclude suspensions) = 1.1001e+07 =======new day 20170424 Before 20170424 re-balance: available cash all (exclude suspensions) = 1.1102e+07 =======new day 20170502 Before 20170502 re-balance: available cash all (exclude suspensions) = 1.1384e+07 =======new day 20170508 Before 20170508 re-balance: available cash all (exclude suspensions) = 1.1185e+07 =======new day 20170515 Before 20170515 re-balance: available cash all (exclude suspensions) = 1.1912e+07 =======new day 20170522 Before 20170522 re-balance: available cash all (exclude suspensions) = 1.2314e+07 =======new day 20170531 Before 20170531 re-balance: available cash all (exclude suspensions) = 1.2731e+07 =======new day 20170605 Before 20170605 re-balance: available cash all (exclude suspensions) = 1.2575e+07 =======new day 20170612 Before 20170612 re-balance: available cash all (exclude suspensions) = 1.3607e+07 =======new day 20170619 Before 20170619 re-balance: available cash all (exclude suspensions) = 1.3395e+07 =======new day 20170626 Before 20170626 re-balance: available cash all (exclude suspensions) = 1.4174e+07 =======new day 20170703 Before 20170703 re-balance: available cash all (exclude suspensions) = 1.4002e+07 =======new day 20170710 Before 20170710 re-balance: available cash all (exclude suspensions) = 1.4163e+07 =======new day 20170717 Before 20170717 re-balance: available cash all (exclude suspensions) = 1.4825e+07 =======new day 20170724 Before 20170724 re-balance: available cash all (exclude suspensions) = 1.5073e+07 =======new day 20170731 Before 20170731 re-balance: available cash all (exclude suspensions) = 1.4850e+07 =======new day 20170807 Before 20170807 re-balance: available cash all (exclude suspensions) = 1.4856e+07 =======new day 20170814 Before 20170814 re-balance: available cash all (exclude suspensions) = 1.4737e+07 =======new day 20170821 Before 20170821 re-balance: available cash all (exclude suspensions) = 1.4929e+07 =======new day 20170828 Before 20170828 re-balance: available cash all (exclude suspensions) = 1.5544e+07 =======new day 20170904 Before 20170904 re-balance: available cash all (exclude suspensions) = 1.5298e+07 =======new day 20170911 Before 20170911 re-balance: available cash all (exclude suspensions) = 1.4900e+07 =======new day 20170918 Before 20170918 re-balance: 
available cash all (exclude suspensions) = 1.5037e+07 =======new day 20170925 Before 20170925 re-balance: available cash all (exclude suspensions) = 1.5265e+07 Backtest done. 202 days, 1.34e+02 trades in total. Backtest results has been successfully saved to: C:\Users\jfang\ipython\demoStrategy Dataview loaded successfully. process trades... get daily stats... calc strategy return... Plot single securities PnL Plot strategy PnL... generate report... HTML report: C:\Users\jfang\ipython\demoStrategy\report.html ###Markdown 回测显示运行完成后,报告可从上面对话框最后一行的地址中找到 `HTML report: ...\demoStrategy\report.html` 4. 仿真交易 ###Code from jaqs.trade.tradeapi import TradeApi tapi = TradeApi(trade_config['remote.trade.address']) def on_orderstatus(order): print("on_orderstatus:") #, order for key in order: print("%20s : %s" % (key, str(order[key]))) print("") # 成交回报推送 def on_trade(trade): print("on_trade:") for key in trade: print("%20s : %s" % (key, str(trade[key]))) print("") # 委托任务执行状态推送 # 通常可以忽略该回调函数 def on_taskstatus(task): print("on_taskstatus:") for key in task: print("%20s : %s" % (key, str(task[key]))) print("") tapi.set_ordstatus_callback(on_orderstatus) tapi.set_trade_callback(on_trade) tapi.set_task_callback(on_taskstatus) user_info, msg = tapi.login(trade_config['remote.trade.username'], trade_config['remote.trade.password']) print(user_info) tapi.use_strategy(StrategyNo) #改成用户自己的 strategy号 res, msg = tapi.query_account() print(res) money = res.loc[0, 'enable_balance'] print("Balance we have: {}".format(money)) dic_price = {'600519.SH': 637.70, '600036.SH': 28.72, '601318.SH': 63.29, '000651.SZ': 41.96} # 每只股票等金额投资,各买入30万元 dic_shares = {k: 100 * np.floor(money * 0.012 * symbol_weights[k] / dic_price[k]/100) for k, _ in symbol_weights.items()} dic_shares orders = [] for symbol in symbol_weights.keys(): o = {'security': symbol, 'price': dic_price[symbol], 'size': dic_shares[symbol], 'action': 'Buy'} orders.append(o) orders # 买入股票 task_id, msg = tapi.place_batch_order(orders) print(task_id) print(msg) # 做空一手沪深300股指期货做对冲 # task_id, msg = tapi.place_order("IF1712.CFE", "Short", 4003.6, 1) # print(task_id) # print(msg) ###Output 0 0,IF1712.CFE not in UNIVERSE
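###Markdown The sizing rule used above (invest `money * 0.012 * weight` per symbol, rounded down to 100-share board lots) can be factored into a small helper so the same logic is reusable across strategies. This is a sketch based only on the cells above; it touches no broker API, and the 0.012 investment ratio is simply the value already used there. ###Code
# Sketch: reusable weighted board-lot sizing, mirroring the dict comprehension above.
def lot_sizes(cash, weights, prices, invest_ratio=0.012, lot=100):
    """Return {symbol: shares}, rounded down to whole lots of `lot` shares."""
    sizes = {}
    for sym, w in weights.items():
        budget = cash * invest_ratio * w
        sizes[sym] = int(lot * np.floor(budget / prices[sym] / lot))
    return sizes

# Should reproduce dic_shares computed above from the same inputs.
print(lot_sizes(money, symbol_weights, dic_price))
###Output _____no_output_____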
bioinformatics/fastq.ipynb
###Markdown A FASTQ record consists of four lines. The first line is the name of the read. Then we get the read sequence from the genome. Then the '+' line, which is a placeholder. Then the quality string, where each symbol can be translated into a number that designates the base-call quality: you look at the quality table and then transform the symbol into its Unicode code point (ASCII code) and subtract 33.
###Code def read_fastq(filename): sequences = [] qualities = [] with open(filename) as fh: while True: fh.readline() seq = fh.readline().rstrip() fh.readline() qual = fh.readline().rstrip() if len(seq) == 0: break sequences.append(seq) qualities.append(qual) return sequences, qualities seqs, quals = read_fastq("data/bioinformatics/SRR835775_1.first1000.fastq") seqs[:5] quals[:5] def phred33ToQ(qual): return ord(qual) - 33 phred33ToQ("#") phred33ToQ("G") def createHist(qualities): hist = [0] * 50 for qual in qualities: for phred in qual: q = phred33ToQ(phred) hist[q] += 1 return hist h = createHist(quals) print(h) import pandas as pd %matplotlib inline df = pd.DataFrame(h) df.head() df.plot.bar() def findGCByPOs(reads): gc = [0] * 100 # make the lists 100 numbers long totals = [0] * 100 for read in reads: for i in range(len(read)): if read[i] == "C" or read[i] == "G": gc[i] += 1 totals[i] += 1 for i in range(len(gc)): if totals[i] > 0: gc[i] /= float(totals[i]) return gc gc = findGCByPOs(seqs) gc_df = pd.DataFrame(gc) gc_df.head() # plt.plot(range(len(gc)), gc) # plt.show() gc_df.plot.line() import collections count = collections.Counter() for seq in seqs: count.update(seq) print(count)
###Output Counter({'G': 28742, 'C': 28272, 'T': 21836, 'A': 21132, 'N': 18})
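###Markdown A complementary view (a sketch, not in the original notebook): instead of pooling all quality symbols into one histogram, we can average the Phred score at each position along the reads, which is how per-cycle quality drop-off is usually spotted. It reuses `quals` and `phred33ToQ` from above and the pandas plotting style already used here. ###Code
# Sketch: mean Phred quality per read position (reads may differ in length, hence the guard).
read_len = max(len(q) for q in quals)
mean_q = []
for pos in range(read_len):
    scores = [phred33ToQ(q[pos]) for q in quals if len(q) > pos]
    mean_q.append(sum(scores) / len(scores))
pd.DataFrame(mean_q, columns=['mean quality']).plot.line()
###Output _____no_output_____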
CH4/Chapter 4 - Exercise 2 and Activity.ipynb
###Markdown Exercise 2 In this exercise we will apply the PCA algorithm to the faces of some famous people. As discussed earlier, this is a real-world example of how PCA is used to reduce the dimensionality of images: the data size is significantly reduced, making it easier to train machine learning algorithms, while the most important features of the image data are retained. Let's download, load and visualize a sample of the images:
###Code from sklearn.datasets import fetch_lfw_people from sklearn.decomposition import PCA import matplotlib.pyplot as plt import numpy as np lfw_people = fetch_lfw_people(min_faces_per_person=60) # introspect the images arrays to find the shapes (for plotting) n_samples, h, w = lfw_people.images.shape # for machine learning we use the 2D data directly (as relative pixel positions info is ignored by this model) faces = lfw_people.data n_features = faces.shape[1] # the label to predict is the id of the person labels = lfw_people.target target_names = lfw_people.target_names n_classes = target_names.shape[0] print("Total dataset size:") print("n_samples: %d" % n_samples) print("n_features: %d" % n_features) print("n_classes: %d" % n_classes) # Plot the results fig, ax = plt.subplots(1, 10, figsize=(20, 2.5), subplot_kw={'xticks':[], 'yticks':[]}) for i in range(10): ax[i].imshow(faces[i].reshape(62, 47), cmap='binary_r') ax[0].set_ylabel('Face Data')
###Output _____no_output_____
###Markdown We can see that the images are 62 by 47 pixels, and when we flatten each image into a single dimension we end up with 2914 features - each pixel becomes a feature. We have 1348 images covering 8 different people. Now let's fit our PCA model to reduce the dimensionality of the images to 200.
###Code # ############################################################################# # Compute a PCA (eigenfaces) on the face dataset (treated as unlabeled # dataset): unsupervised feature extraction / dimensionality reduction n_components = 200 print("Extracting the top %d eigenfaces from %d faces" % (n_components, faces.shape[0])) pca = PCA(n_components=n_components, svd_solver='randomized', whiten=True).fit(faces)
###Output Extracting the top 200 eigenfaces from 1348 faces Projecting the input data on the eigenfaces orthonormal basis
###Markdown We can now plot the explained variance ratio by the number of components. This is an important indication of how many dimensions we should reduce our data to. A good rule of thumb is to stop adding components once we reach the plateau of diminishing returns. In our case, we could also have fit our model with around 100 components, which would have captured 90% of the variance in the data.
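###Markdown Before plotting the full curve below, note that scikit-learn can also pick the component count for a target variance fraction directly, by passing a float to `n_components` (a sketch; the 0.90 target is just the figure quoted above). ###Code
# Sketch: let PCA choose the smallest number of components explaining >= 90% of the variance.
pca_90 = PCA(n_components=0.90, svd_solver='full', whiten=True).fit(faces)
print(pca_90.n_components_)   # should come out close to the ~100 components quoted above
###Output _____no_output_____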
###Code plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); print("Projecting the input data on the eigenfaces orthonormal basis") faces_pca = pca.transform(faces) projected = pca.inverse_transform(faces_pca) # Plot the results fig, ax = plt.subplots(2, 8, figsize=(20, 5), subplot_kw={'xticks':[], 'yticks':[]}) for i in range(8): ax[0, i].imshow(faces[i].reshape(62, 47), cmap='binary_r') ax[1, i].imshow(projected[i].reshape(62, 47), cmap='binary_r') ax[0, 0].set_ylabel('full-dim\ninput') ax[1, 0].set_ylabel('200-dim\nreconstruction');
###Output _____no_output_____
###Markdown We can see that the transformed images have less detail and look more blurry, but the most important features are still apparent and we can certainly identify who the person is, while reducing the data by more than a factor of 10. The next step would be to train a classifier on the reduced data to predict the identity labels. Activity In this exercise we will load the Iris dataset, which consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. The aim will then be to: 1a) Fit the PCA algorithm with 3 components and visualize the explained variance ratio by number of components. Therefore we should have the explained variance ratio on one axis, and the number of components on the second axis. 1b) Fit the PCA algorithm with 2 components and visualize the data in a two-dimensional space.
###Code from sklearn import datasets iris = datasets.load_iris() X = iris.data y = iris.target target_names = iris.target_names
###Output _____no_output_____
###Markdown Inspection of data:
###Code X.shape X[0:10] y
###Output _____no_output_____
###Markdown 1a. Fit the PCA algorithm with 3 components and visualize the explained variance ratio by number of components.
###Code pca = PCA(n_components=3) X_pca = pca.fit(X).transform(X) plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance');
###Output _____no_output_____
###Markdown 1b) Fit the PCA algorithm with 2 components and visualize the data in a two-dimensional space.
###Code pca = PCA(n_components=2) X_pca = pca.fit(X).transform(X) plt.figure(figsize=(15, 10)) colors = ['navy', 'turquoise', 'darkorange'] lw = 2 for color, i, target_name in zip(colors, [0, 1, 2], target_names): plt.scatter(X_pca[y == i, 0], X_pca[y == i, 1], color=color, alpha=.8, lw=lw, label=target_name) plt.legend(loc='best', shadow=False, scatterpoints=1) plt.title('PCA of IRIS dataset') plt.show()
###Output _____no_output_____
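###Markdown As a quick check on the 2-D view (a sketch, not part of the activity): printing the explained variance ratio of the 2-component fit shows how much information the scatter plot above retains; for Iris the first two components typically account for roughly 97-98% of the variance. ###Code
# Sketch: variance retained by the two plotted components.
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())
###Output _____no_output_____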
legacy_tutorials/terra/6_qiskit_jupyter_tools.ipynb
###Markdown ![qiskit_header.png](../../images/qiskit_header.png) Important: This notebook uses ipywidgets that take advantage of the javascript interface in a web browser. The downside is the functionality does not render well on saved notebooks. Run this notebook locally to see the widgets in action. Qiskit Jupyter ToolsQiskit was designed to be used inside of the Jupyter notebook interface. As such it includes many useful routines that take advantage of this platform, and make performing tasks like exploring devices and tracking job progress effortless.Loading all the qiskit Jupyter goodness is done via: ###Code from qiskit import * import qiskit.tools.jupyter # This is the where the magic happens (literally). ###Output _____no_output_____ ###Markdown Table of contents1) [Automatic Job Tracking](tracking)2) [Backend Details](details)3) [Overview of Backends](overview) ###Code IBMQ.load_account(); provider = IBMQ.get_provider(group='open') ###Output _____no_output_____ ###Markdown Automatic Job Tracking Perhaps the most useful Jupyter tool is the `job_watcher`. Once loaded, this widget automatically tracks the jobs submitted by the user, and displays this information in a window floating in the upper left corner of the notebook. To start the monitor you run the Jupyter magic: ###Code %qiskit_job_watcher ###Output _____no_output_____ ###Markdown You should now see a small window titled "IBMQ Jobs" in the upper left corner of the notebook.Now, let's submit a job to a device: ###Code backend = provider.get_backend('ibmq_essex') qc = QuantumCircuit(2, 2) qc.h(0) qc.cx(0, 1) qc.measure([0, 1], [0, 1]) job = execute(qc, backend) ###Output _____no_output_____ ###Markdown Opening the job watcher you will see that the job has been added to the list of jobs, and its status and queue position (if any) are being automatically tracked and updated. If you want to kill the job watcher you can do so by calling: ###Code %qiskit_disable_job_watcher ###Output _____no_output_____ ###Markdown Although the watcher itself is killed, the underlying framework is still tracking jobs for you and will show this information if loaded once again. Viewing Backend Details The IBM Q devices contain a large amount of configuration data and properties. This information can be retrieved by calling: ###Code config = backend.configuration() params = backend.properties() ###Output _____no_output_____ ###Markdown However, parsing through this information quickly becomes tedious. Instead, all the information for a single backend can be displayed graphically by just calling the backend instance itself: ###Code backend ###Output _____no_output_____ ###Markdown This widget displays all the information about a backend in a single tabbed-window. Getting an Overview of Backends Instead of a single backend, you may be interested in seeing all the backends at once to compare, for example, average CNOT error rates. This is done using the `backend_overview` widget: ###Code %qiskit_backend_overview ###Output _____no_output_____ ###Markdown In addition to showing all the backends that the user has access to, the number of pending jobs on the devices is continuously being updated along with the operational status and least busy selection. ###Code %qiskit_version_table %qiskit_copyright ###Output _____no_output_____ ###Markdown ![qiskit_header.png](../../images/qiskit_header.png) Important: This notebook uses ipywidgets that take advantage of the javascript interface in a web browser. 
The downside is the functionality does not render well on saved notebooks. Run this notebook locally to see the widgets in action. Qiskit Jupyter ToolsQiskit was designed to be used inside of the Jupyter notebook interface. As such it includes many useful routines that take advantage of this platform, and make performing tasks like exploring devices and tracking job progress effortless.Loading all the qiskit Jupyter goodness is done via: ###Code from qiskit import * import qiskit.tools.jupyter # This is the where the magic happens (literally). ###Output _____no_output_____ ###Markdown Table of contents1) [Automatic Job Tracking](tracking)2) [Backend Details](details)3) [Overview of Backends](overview) ###Code IBMQ.load_account(); provider = IBMQ.get_provider(group='open') ###Output _____no_output_____ ###Markdown Automatic Job Tracking Perhaps the most useful Jupyter tool is the `job_watcher`. Once loaded, this widget automatically tracks the jobs submitted by the user, and displays this information in a window floating in the upper left corner of the notebook. To start the monitor you run the Jupyter magic: ###Code %qiskit_job_watcher ###Output _____no_output_____ ###Markdown You should now see a small window titled "IBMQ Jobs" in the upper left corner of the notebook.Now, let's submit a job to a device: ###Code backend = provider.get_backend('ibmq_essex') qc = QuantumCircuit(2, 2) qc.h(0) qc.cx(0, 1) qc.measure([0,1], [0,1]) job = execute(qc, backend) ###Output _____no_output_____ ###Markdown Opening the job watcher you will see that the job has been added to the list of jobs, and its status and queue position (if any) are being automatically tracked and updated. If you want to kill the job watcher you can do so by calling: ###Code %qiskit_disable_job_watcher ###Output _____no_output_____ ###Markdown Although the watcher itself is killed, the underlying framework is still tracking jobs for you and will show this information if loaded once again. Viewing Backend Details The IBM Q devices contain a large amount of configuration data and properties. This information can be retrieved by calling: ###Code config = backend.configuration() params = backend.properties() ###Output _____no_output_____ ###Markdown However, parsing through this information quickly becomes tedious. Instead, all the information for a single backend can be displayed graphically by just calling the backend instance itself: ###Code backend ###Output _____no_output_____ ###Markdown This widget displays all the information about a backend in a single tabbed-window. Getting an Overview of Backends Instead of a single backend, you may be interested in seeing all the backends at once to compare, for example, average CNOT error rates. This is done using the `backend_overview` widget: ###Code %qiskit_backend_overview ###Output _____no_output_____ ###Markdown In addition to showing all the backends that the user has access to, the number of pending jobs on the devices is continuously being updated along with the operational status and least busy selection. ###Code %qiskit_version_table %qiskit_copyright ###Output _____no_output_____
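###Markdown For completeness (a sketch, not part of the tutorial above): if you only care about a single job rather than the floating watcher window, `job_monitor` from `qiskit.tools.monitor` tracks one job inline in the cell output. ###Code
# Sketch: inline, single-job tracking as an alternative to %qiskit_job_watcher.
from qiskit.tools.monitor import job_monitor

job = execute(qc, backend)
job_monitor(job)   # prints and updates the job status until it finishes
###Output _____no_output_____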
_notebooks/2022-01-10-intro.ipynb
###Markdown 2022/01/10/MON > `회귀` 회귀 분석은 데이터 값이 평균과 같은 일정한 값으로 돌아가려는 경향으르 이용한 통계학 기법이다. 회귀는 여러 개의 독립 변수와 한 개의 종속 변수 간의 상관관계를 모델링하는 기법을 통칭한다. 예를 들면, 아파트의 방 개수, 방 크기, 주변 학군, 등 여러 개의 독립변수에 따라 아파트 가격이라는 종속변수가 어떤 관계를 나타내는지를 모델링하고 예측하는 것이다. $Y = W_1*X_1 + W_2*X_2 + W_3*X_3 + \dots + W_n*X_n$ 이라는 선형 회귀식을 예로 들면 $Y$는 종속변수, 즉 아파트 가격을 의미하며, 나머지 $X_1,X_2,X_3,\dots,X_n$은 방 개수, 방 크기, 주변 학군 등의 독립 변수를 의미한다. 그리고 $W_1,W_2,W_3,\dots,W_n$은 이 독립변수의 값에 영향을 미치는 회귀 계수이다. 머신 러닝의 관점에서 보면 독립변수는 feature에 해당하며 종속변수는 결정 값, 즉 레이블을 의미한다. 머신러닝 회귀 예측의 핵심은 주어진 feature와 결정 값 데이터 기반에서 학습을 통해 최적의 회귀계수를 찾아내는 것이다. > `회귀 종류` 독립 변수의 개수가 1개이면 단일 회귀, 여러 개이면 다중 회귀이다. 또한 회귀계수의 결합이 선형이면 선형 회귀, 비선형이면 비선형 회귀이다. - `지도학습`은 두 가지 유형으로 나뉘는데 바로 `분류`와 `회귀`이다. 이 두가지 기법의 가장 큰 차이는 분류는 예측값이 카테고리와 같은 이산형 클래스 값이고 회귀는 연속형 숫자값이라는 것이다.- 여러 가지 회귀 중에서 선형 회귀가 가장 많이 사용된다. 선형 회귀는 실제 값과 예측값의 차이(오류의 제곱 값)를 최소화하는 직선형 회귀선을 최적화하는 방식이다. 선형 회귀 모델은 규제 방법에 따라 또 나눌 수 있다. 여기서 규제란 일반적인 선형 회귀의 과적합 문제를 해결하기 위해서 회귀 계수에 페널티 값을 적용하는 것을 의미한다. --- - 단순 선형 회귀에 대해 알아보자 : 독립 변수도 하나 종속 변수도 하나인 선형 회귀를 의미한. 예를 들면 주택 가격이 주택의 크기로만 결정되는 것. - 예측값 $\hat{Y}$는 $w_0 + w_1*X$로 계산할 수 있다. 독립변수가 1개인 단순 선형 회귀에서는 이 기울기 $w_1$과 절편 $w_0$을 회귀 계수로 지칭한다. 그리고 회귀 모델을 $\hat{Y} = w_0 + w_1*X$와 같은 1차 함수로 모델링했다면 실제 주택 가격은 이러한 1차 함수 값에서 실제 값만큼의 오류 값을 뺀 또는 더한 값이 된다. ($w_0 + w_1*X +$ 오류값)- 이렇게 실제 값과 회귀 모델의 차이에 따른 오류 값을 남은 오류, 즉 잔차라고 부른다. 최적의 회귀 모델을 만든다는 것이 바로 전체 데이터의 잔차 합이 최소가 되는 모델을 만든다는 것이며 상쇄될 것을 고려해 대개 절댓값을 취하거나 제곱을 한 뒤 오류 합을 구한다. $Error^2 = RSS $- RSS는 비용이며 $w$변수(회귀 계수)로 구성되는 $RSS$를 비용 함수라고 한다. 머신 러닝 회귀 알고리즘은 데이터를 계속 학습하면서 이 비용 함수가 반환하는 값(즉, 오류값)을 지속해서 감소시키고 최종적으로는 더 이상 감소하지 않는 최소의 오류값을 구하는 것이다. 비용함수를 손실함수라고도 한다. --- > `경사하강법` 점진적으로 반복적인 계산을 통해 $W$ 파라미터 값을 업데이트하면서 오류 값이 최소가 되는 $W$ 파라미터를 구하는 방식이다. 경사 하강법은 반복적으로 비용 함수의 반환 값 즉, 예측값과 실제 값의 차이가 작아지는 방향성을 가지고 $W$ 파라미터를 지속해서 보정해 나간다. 오류를 감소시키는 방향으로 $W$값을 계속 업데이트해 나가면서 더 이상 그 오류 값이 작아지지 않으면 그 오류 값을 최소 비용으로 판단하고 그때의 $W$ 값을 최적 파라미터로 반환한다. - 예를 들어 비용 함수가 포물선 형태의 2차 함수라면 경사 하강법은 최초 $w$에서부터 미분을 적용한 뒤 이 미분 값이 계속 감소하는 방향으로 순차적으로 $w$를 업데이트한다. 마침내 더 이상 미분된 1차 함수의 기울기가 감소하지 않는 지점을 비용 함수가 최소인 지점으로 간주하고 그때의 $w$를 반환한다. - 경사 하강법을 파이썬 코드로 구현해보자 ###Code import numpy as np import matplotlib.pyplot as plt np.random.seed(0) # y = 4X + 6 식을 근사(w1=4, w0=6). random 값은 Noise를 위해 만듬 X = 2 * np.random.rand(100,1) # X의 범위 설정 y = 6 + 4 * X + np.random.randn(100,1) # X, y 데이터 셋 scatter plot으로 시각화 plt.scatter(X, y) # w1 과 w0 를 업데이트 할 w1_update, w0_update를 반환. def get_weight_updates(w1, w0, X, y, learning_rate=0.01): N = len(y) # 먼저 w1_update, w0_update를 각각 w1, w0의 shape와 동일한 크기를 가진 0 값으로 초기화 w1_update = np.zeros_like(w1) w0_update = np.zeros_like(w0) # 예측 배열 계산하고 예측과 실제 값의 차이 계산 y_pred = np.dot(X, w1.T) + w0 diff = y-y_pred # w0_update를 dot 행렬 연산으로 구하기 위해 모두 1값을 가진 행렬 생성 w0_factors = np.ones((N,1)) # w1과 w0을 업데이트할 w1_update와 w0_update 계산 w1_update = -(2/N)*learning_rate*(np.dot(X.T, diff)) w0_update = -(2/N)*learning_rate*(np.dot(w0_factors.T, diff)) return w1_update, w0_update # 입력 인자 iters로 주어진 횟수만큼 반복적으로 w1과 w0를 업데이트 적용함. def gradient_descent_steps(X, y, iters=10000): # w0와 w1을 모두 0으로 초기화. w0 = np.zeros((1,1)) w1 = np.zeros((1,1)) # 인자로 주어진 iters 만큼 반복적으로 get_weight_updates() 호출하여 w1, w0 업데이트 수행. 
for ind in range(iters): w1_update, w0_update = get_weight_updates(w1, w0, X, y, learning_rate=0.01) w1 = w1 - w1_update w0 = w0 - w0_update return w1, w0 def get_cost(y, y_pred): N = len(y) cost = np.sum(np.square(y - y_pred))/N return cost w1, w0 = gradient_descent_steps(X, y, iters=1000) print("w1:{0:.3f} w0:{1:.3f}".format(w1[0,0], w0[0,0])) y_pred = w1[0,0] * X + w0 print('Gradient Descent Total Cost:{0:.4f}'.format(get_cost(y, y_pred))) ###Output w1:4.022 w0:6.162 Gradient Descent Total Cost:0.9935 ###Markdown y_pred에 기반해 회귀선을 그려보자 ###Code plt.scatter(X, y) plt.plot(X,y_pred) ###Output _____no_output_____ ###Markdown --- - 경사 하강법을 이용해 회귀선이 잘 만들어졌음을 알 수 있다. 일반적으로 경사 하강법은 모든 학습 데이터에 대해 반복적으로 비용함수 최소화를 위한 값을 업데이트하기 때문에 수행 시간이 매우 오래 걸린다. 이 때문에 실전에서는 확률적 경사 하강법(Stochastic Gradient Descent)을 이용한다. 확률적 경사 하강법을 일부 데이터만 이용해 w가 업데이트되는 값을 계산하므로 경사하강법에 비해 빠른 속도를 보장한다. 해보자. ###Code def stochastic_gradient_descent_steps(X, y, batch_size=10, iters=1000): w0 = np.zeros((1,1)) w1 = np.zeros((1,1)) prev_cost = 100000 iter_index =0 for ind in range(iters): np.random.seed(ind) # 전체 X, y 데이터에서 랜덤하게 batch_size만큼 데이터 추출하여 sample_X, sample_y로 저장 stochastic_random_index = np.random.permutation(X.shape[0]) sample_X = X[stochastic_random_index[0:batch_size]] sample_y = y[stochastic_random_index[0:batch_size]] # 랜덤하게 batch_size만큼 추출된 데이터 기반으로 w1_update, w0_update 계산 후 업데이트 w1_update, w0_update = get_weight_updates(w1, w0, sample_X, sample_y, learning_rate=0.01) w1 = w1 - w1_update w0 = w0 - w0_update return w1, w0 ###Output _____no_output_____ ###Markdown 만들어진 함수를 이용해 w1,w0 및 예측 오류 비용을 계산해 보자 ###Code w1, w0 = stochastic_gradient_descent_steps(X, y, iters=1000) print("w1:",round(w1[0,0],3),"w0:",round(w0[0,0],3)) y_pred = w1[0,0] * X + w0 print('Stochastic Gradient Descent Total Cost:{0:.4f}'.format(get_cost(y, y_pred))) ###Output w1: 4.028 w0: 6.156 Stochastic Gradient Descent Total Cost:0.9937 ###Markdown - 확률적 경사 하강법으로 구한 w0,w1 결과는 경사 하강법으로 구한 w1,w0과 큰 차이가 없으며 예측 오류 비용 또한 경사하강법으로 구한 것보다 아주 조금 높다. 즉 성능 차이가 미미하다. 데이터가 크면 확률적 경사 하강법을 이용하라. --- > `사이킷런 LinearRegression을 이용한 보스턴 주택 가격 예측` LinearRegression 클래스는 예측값과 실제 값의 RSS를 최소화 해 OLS 추정 방식으로 구현한 클래스이다. - OLS 기반의 회귀 계수 계산은 입력 feature의 독립성에 많은 영향을 받는다. feature간의 상관관계가 매우 높은 경우 분산이 매우 커져서 오류에 매우 민감해진다. 이러한 현상을 다중공선성(multi-collinearity)문제라고 한다. 일반적으로 상관관계가 높은 feature가 많은 경우 독립적인 중요한 feature만 남기고 제거하거나 규제를 적용한다. 또한 매우 많은 feature가 다중 공선성 문제를 가지고 있다면 PCA를 통해 차원 축소를 수행하는 것도 고려해 볼 수 있다. - 회귀를 위한 평가 지표 : 실제 값과 회귀 예측값의 차이 값을 기반으로 한 지표가 중심이다. - 305p 참고! ###Code import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from scipy import stats from sklearn.datasets import load_boston %matplotlib inline # boston 데이타셋 로드 boston = load_boston() # boston 데이타셋 DataFrame 변환 bostonDF = pd.DataFrame(boston.data , columns = boston.feature_names) # boston dataset의 target array는 주택 가격임. 이를 PRICE 컬럼으로 DataFrame에 추가함. bostonDF['PRICE'] = boston.target print('Boston 데이타셋 크기 :',bostonDF.shape) bostonDF.head() ###Output Boston 데이타셋 크기 : (506, 14) ###Markdown 다음으로 각 column이 회귀 결과에 미치는 영향이 어느 정도인지 시각화해서 알아보자. 총 8개의 칼럼에 대해 값이 증가할수록 PRICE값이 어떻게 변화하는지 확인하자. ###Code # 2개의 행과 4개의 열을 가진 subplots를 이용. axs는 4x2개의 ax를 가짐. 
fig, axs = plt.subplots(figsize=(16,8) , ncols=4 , nrows=2) lm_features = ['RM','ZN','INDUS','NOX','AGE','PTRATIO','LSTAT','RAD'] for i , feature in enumerate(lm_features): row = int(i/4) col = i%4 # 시본의 regplot을 이용해 산점도와 선형 회귀 직선을 함께 표현 sns.regplot(x=feature , y='PRICE',data=bostonDF , ax=axs[row][col]) ###Output _____no_output_____ ###Markdown - 다른 칼럼보다 RM과 LSTAT의 PRICE 영향도가 가장 두드러지게 나타난다. RM은 양 방향의 선형성이 가장 크다. 즉 방의 크기가 클수록 방의 가격이 증가하는 모습을 확연히 보여준다. ###Code from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error , r2_score y_target = bostonDF['PRICE'] X_data = bostonDF.drop(['PRICE'],axis=1,inplace=False) X_train , X_test , y_train , y_test = train_test_split(X_data , y_target ,test_size=0.3, random_state=156) # Linear Regression OLS로 학습/예측/평가 수행. lr = LinearRegression() lr.fit(X_train ,y_train ) y_preds = lr.predict(X_test) mse = mean_squared_error(y_test, y_preds) rmse = np.sqrt(mse) print('MSE : {0:.3f} , RMSE : {1:.3F}'.format(mse , rmse)) print('Variance score : {0:.3f}'.format(r2_score(y_test, y_preds))) print('절편 값:',lr.intercept_) print('회귀 계수값:', np.round(lr.coef_, 1)) ###Output 절편 값: 40.99559517216445 회귀 계수값: [ -0.1 0.1 0. 3. -19.8 3.4 0. -1.7 0.4 -0. -0.9 0. -0.6] ###Markdown coef_ 속성은 회귀 계수 값만 가지고 있으므로 이를 feature별 회귀 계수 값으로 다시 mapping하고 높은 값 순으로 출력해보자. 이를 위해 pandas Series의 sort_values() 함수를 이용한다. ###Code # 회귀 계수를 큰 값 순으로 정렬하기 위해 Series로 생성. index가 컬럼명에 유의 coeff = pd.Series(data=np.round(lr.coef_, 1), index=X_data.columns ) coeff.sort_values(ascending=False) from sklearn.model_selection import cross_val_score y_target = bostonDF['PRICE'] X_data = bostonDF.drop(['PRICE'],axis=1,inplace=False) lr = LinearRegression() # cross_val_score( )로 5 Fold 셋으로 MSE 를 구한 뒤 이를 기반으로 다시 RMSE 구함. neg_mse_scores = cross_val_score(lr, X_data, y_target, scoring="neg_mean_squared_error", cv = 5) rmse_scores = np.sqrt(-1 * neg_mse_scores) avg_rmse = np.mean(rmse_scores) # cross_val_score(scoring="neg_mean_squared_error")로 반환된 값은 모두 음수 print(' 5 folds 의 개별 Negative MSE scores: ', np.round(neg_mse_scores, 2)) print(' 5 folds 의 개별 RMSE scores : ', np.round(rmse_scores, 2)) print(' 5 folds 의 평균 RMSE : {0:.3f} '.format(avg_rmse)) ###Output 5 folds 의 개별 Negative MSE scores: [-12.46 -26.05 -33.07 -80.76 -33.31] 5 folds 의 개별 RMSE scores : [3.53 5.1 5.75 8.99 5.77] 5 folds 의 평균 RMSE : 5.829
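###Markdown The notes above mention regularized linear models (penalizing the coefficients) as the usual remedy for overfitting and multicollinearity. A minimal sketch, reusing `X_data` and `y_target` from the cells above; the alpha values are arbitrary illustrations, not tuned choices: ###Code
from sklearn.linear_model import Ridge, Lasso

for name, reg in [('Ridge', Ridge(alpha=10)), ('Lasso', Lasso(alpha=0.1))]:
    neg_mse = cross_val_score(reg, X_data, y_target, scoring="neg_mean_squared_error", cv=5)
    # Convert the negative MSE scores back to an average RMSE, as above.
    print('{0} 5-fold average RMSE: {1:.3f}'.format(name, np.mean(np.sqrt(-1 * neg_mse))))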
ASL.ipynb
###Markdown ###Code !pip install kaggle !mkdir ~/.kaggle !cp /content/kaggle.json ~/.kaggle/kaggle.json !kaggle datasets download -d grassknoted/asl-alphabet !unzip asl-alphabet.zip %cd /content/ import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Activation, Dense, Flatten, BatchNormalization, Conv2D, MaxPool2D,Dropout from tensorflow.keras.optimizers import Adam from tensorflow.keras.metrics import categorical_crossentropy from tensorflow.keras.preprocessing.image import ImageDataGenerator from sklearn.metrics import confusion_matrix import itertools import os import shutil import random import glob import matplotlib.pyplot as plt import warnings warnings.simplefilter(action='ignore', category=FutureWarning) %matplotlib inline all = os.listdir("/content/asl_alphabet") os.chdir("/content/asl_alphabet") if os.path.isdir('train') is False: os.mkdir('train') os.mkdir('valid') os.mkdir('test') for i in all: shutil.move(f'{i}', 'train') os.mkdir(f'valid/{i}') os.mkdir(f'test/{i}') valid_samples = random.sample(os.listdir(f'train/{i}'), 100) for j in valid_samples: shutil.move(f'train/{i}/{j}', f'valid/{i}') test_samples = random.sample(os.listdir(f'train/{i}'), 50) for k in test_samples: shutil.move(f'train/{i}/{k}', f'test/{i}') os.chdir('../..') target_size = (200, 200) target_dims = (200, 200, 3) # add channel for RGB n_classes = 29 val_frac = 0.1 batch_size = 64 data_augmentor = ImageDataGenerator(samplewise_center=True, samplewise_std_normalization=True, validation_split=val_frac) train_path = "/content/asl_alphabet/train" valid_path = "/content/asl_alphabet/valid" test_path = "/content/asl_alphabet/test" train_batches = data_augmentor.flow_from_directory( directory=train_path, target_size=(224,224), batch_size=128) valid_batches = data_augmentor.flow_from_directory( directory=valid_path, target_size=(224,224), batch_size=64) test_batches = ImageDataGenerator(samplewise_center=True, samplewise_std_normalization=True).flow_from_directory( directory=test_path, target_size=(224,224), batch_size=64, shuffle=False) my_model = Sequential() my_model.add(Conv2D(64, kernel_size=4, strides=1, activation='relu', input_shape=(224,224,3))) my_model.add(Conv2D(64, kernel_size=4, strides=2, activation='relu')) my_model.add(Dropout(0.5)) my_model.add(Conv2D(128, kernel_size=4, strides=1, activation='relu')) my_model.add(Conv2D(128, kernel_size=4, strides=2, activation='relu')) my_model.add(Dropout(0.5)) my_model.add(Conv2D(256, kernel_size=4, strides=1, activation='relu')) my_model.add(Conv2D(256, kernel_size=4, strides=2, activation='relu')) my_model.add(Flatten()) my_model.add(Dropout(0.5)) my_model.add(Dense(512, activation='relu')) my_model.add(Dense(n_classes, activation='softmax')) my_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=["accuracy"]) my_model.summary() history = my_model.fit_generator(train_batches, epochs=5, validation_data=valid_batches) print(history.history.keys()) # summarize history for accuracy plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # summarize history for loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() my_model.evaluate(test_batches) 
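# evaluate() returns [loss, accuracy] for the held-out test generator; because
# test_batches was built with shuffle=False, its .classes stay aligned with the
# prediction order, which is what the confusion matrix further down relies on.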
my_model.save('saved_model/my_model') new_model = tf.keras.models.load_model('saved_model/my_model') new_model.summary() aa = test_batches.class_indices.keys() test_labels = test_batches.classes predictions = my_model.predict(x=test_batches, steps=len(test_batches), verbose=0) cm = confusion_matrix(y_true=test_labels, y_pred=predictions.argmax(axis=1)) aa import seaborn as sn import pandas as pd df_cm = pd.DataFrame(cm, index = [i for i in aa],columns = [i for i in aa]) plt.figure(figsize = (10,7)) sn.heatmap(df_cm, annot=True) import tensorflow as tf from tensorflow import keras from tensorflow.keras.models import Sequential,Model from tensorflow.keras.layers import Activation, Dense from tensorflow.keras.optimizers import Adam from tensorflow.keras.metrics import categorical_crossentropy all = os.listdir("/content/asl_alphabet") os.chdir("/content/asl_alphabet") if os.path.isdir('train') is False: os.mkdir('train') os.mkdir('valid') os.mkdir('test') for i in all: shutil.move(f'{i}', 'train') os.mkdir(f'valid/{i}') os.mkdir(f'test/{i}') valid_samples = random.sample(os.listdir(f'train/{i}'), 300) for j in valid_samples: shutil.move(f'train/{i}/{j}', f'valid/{i}') test_samples = random.sample(os.listdir(f'train/{i}'), 100) for k in test_samples: shutil.move(f'train/{i}/{k}', f'test/{i}') os.chdir('../..') train_path = "/content/asl_alphabet/train" valid_path = "/content/asl_alphabet/valid" test_path = "/content/asl_alphabet/test" train_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.mobilenet.preprocess_input).flow_from_directory( directory=train_path, target_size=(224,224), batch_size=256) valid_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.mobilenet.preprocess_input).flow_from_directory( directory=valid_path, target_size=(224,224), batch_size=64) test_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.mobilenet.preprocess_input).flow_from_directory( directory=test_path, target_size=(224,224), batch_size=64, shuffle=False) mobile = tf.keras.applications.mobilenet.MobileNet() mobile.summary() x = mobile.layers[-6].output output = Dense(units=29, activation='softmax')(x) model = Model(inputs=mobile.input, outputs=output) for layer in model.layers[:-23]: layer.trainable = False model.compile(optimizer=Adam(learning_rate=0.0001), loss='categorical_crossentropy', metrics=['accuracy']) history = model.fit(x=train_batches, steps_per_epoch=len(train_batches), validation_data=valid_batches, validation_steps=len(valid_batches), epochs=10, verbose=1 ) print(history.history.keys()) # summarize history for accuracy plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # summarize history for loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() print(history.history.keys()) # summarize history for accuracy plt.plot(history.history['accuracy'][1:]) plt.plot(history.history['val_accuracy'][1:]) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # summarize history for loss plt.plot(history.history['loss'][1:]) plt.plot(history.history['val_loss'][1:]) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper 
left') test_labels = test_batches.classes predictions = model.predict(x=test_batches, steps=len(test_batches), verbose=0) cm = confusion_matrix(y_true=test_labels, y_pred=predictions.argmax(axis=1)) import seaborn as sn import pandas as pd df_cm = pd.DataFrame(cm, index = [i for i in aa],columns = [i for i in aa]) plt.figure(figsize = (10,7)) sn.heatmap(df_cm, annot=True) def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ plt.figure(figsize=(10,10)) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, cm[i, j], horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') test_batches.class_indices plot_confusion_matrix(cm=cm, classes=cm_plot_labels, title='Confusion Matrix') img_file = cv2.imread("/content/2021-05-29-224054.jpg") img_file = skimage.transform.resize(img_file, (224, 224, 3)) img_arr = np.asarray(img_file) #img_arr = np.fliplr(img_arr) plt.imshow(img_arr) img_arr = img_arr.reshape(-1,224,224,3) ans = model.predict(img_arr) np.argmax(ans) import skimage from skimage.transform import resize %cd /content/ model.save('final_model') %cd .. from google.colab import drive drive.mount('/content/gdrive') # this creates a symbolic link so that now the path /content/gdrive/My\ Drive/ is equal to /mydrive !ln -s /content/gdrive/My\ Drive/ /mydrive !ls /mydrive !cp -r /content/final_model/ /mydrive/ # img = cv2.imread("./asl/asl_alphabet_test/asl_alphabet_test/A_test.jpg") # print(img.shape) imgs,labels = next(val_generator) print([np.argmax(i) for i in new_model.predict(imgs)]) print([np.argmax(i) for i in labels]) getSizedFrame() import cv2 import numpy as np img = cv2.imread("./test1.jpg") cv2.imshow("img",img) cv2.waitKey(0) cv2.destroyAllWindows() x=469 y=37 h=400 w=400 crop_img = img[y:y+h, x:x+w] crop_img = np.fliplr(crop_img) cv2.imshow("cropped", crop_img) cv2.waitKey(0) cv2.destroyAllWindows() dim = (64, 64) resized = cv2.resize(crop_img, dim, interpolation = cv2.INTER_AREA) resized = resized.reshape(-1,64,64,3) np.argmax(new_model.predict(resized)) ###Output _____no_output_____ ###Markdown ###Code import keras import numpy as np import pandas as pd import cv2 from keras.models import Sequential from keras.layers import Conv2D,MaxPooling2D, Dense,Flatten from keras.datasets import mnist import matplotlib.pyplot as plt from keras.utils import np_utils from keras.optimizers import SGD !pip install PyDrive import os from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) download = drive.CreateFile({'id': '1wG0gS-bqjV6yz1YveuxkvHT5_2DOuT05'}) download.GetContentFile('train.csv') train = pd.read_csv('train.csv') download = drive.CreateFile({'id': 
'1q_Zwlu3RncjKq1YpiVtkiMPxIIueGRYB'}) download.GetContentFile('test.csv') test = pd.read_csv('test.csv') display(train.info()) display(test.info()) display(train.head(n = 2)) display(test.head(n = 2)) train_Y = train['label'] test_Y = test['label'] train_X = train.drop(['label'],axis = 1) test_X = test.drop(['label'],axis = 1) train_X = train_X.astype('float32') / 255 test_X = test_X.astype('float32')/255 display(train_Y) train_X = train_X.values.reshape(27455,784) test_X = test_X.values.reshape(7172,784) train_Y = keras.utils.to_categorical(train_Y,26) test_Y = keras.utils.to_categorical(test_Y,26) model = Sequential() model.add(Dense(units=128,activation="relu",input_shape=(784,))) model.add(Dense(units=128,activation="relu")) model.add(Dense(units=128,activation="relu")) model.add(Dense(units=26,activation="softmax")) model.compile(optimizer=SGD(0.001),loss="categorical_crossentropy",metrics=["accuracy"]) model.fit(train_X,train_Y,batch_size=32,epochs=100,verbose=1) accuracy = model.evaluate(x=test_X,y=test_Y,batch_size=32) print("Accuracy: ",accuracy[1]) img = test_X[1] test_img = img.reshape((1,784)) img_class = model.predict_classes(test_img) prediction = img_class[0] classname = img_class[0] print("Class: ",classname) img = img.reshape((28,28)) plt.imshow(img) plt.title(classname) plt.show() ###Output _____no_output_____
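###Markdown The ad-hoc prediction cells above resize an image with cv2/skimage and call predict directly, without the preprocessing the generators applied. A sketch of routing a single image through the same MobileNet preprocessing -- assuming the fine-tuned `model` and `train_batches` from the transfer-learning section are still in scope, and with a placeholder image path: ###Code
import numpy as np
import tensorflow as tf

def predict_sign(img_path, model, class_indices):
    # Load and resize to the generators' input size (224x224).
    img = tf.keras.preprocessing.image.load_img(img_path, target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)
    # Apply the same preprocessing used at training time.
    x = tf.keras.applications.mobilenet.preprocess_input(x)
    probs = model.predict(x[np.newaxis, ...])[0]
    # Invert class_indices to map the argmax index back to a letter label.
    idx_to_label = {v: k for k, v in class_indices.items()}
    return idx_to_label[int(np.argmax(probs))]

# e.g. predict_sign('hand_sign.jpg', model, train_batches.class_indices)  # path is illustrative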
examples/upsampling3D/1_datagen.ipynb
###Markdown Demo: Training data generation for combined denoising and upsamling of synthetic 3D dataThis notebook demonstrates training data generation for a combined denoising and upsampling task of synthetic 3D data, where corresponding pairs of isotropic low and high quality stacks can be acquired.Anisotropic distortions along the Z axis will be simulated for the low quality stack, such that a CARE model trained on this data can be applied to images with anisotropic resolution along Z.We will use only a few synthetically generated stacks for training data generation, whereas in your application you should aim to use stacks from different developmental timepoints to ensure a well trained model. More documentation is available at http://csbdeep.bioimagecomputing.com/doc/. ###Code from __future__ import print_function, unicode_literals, absolute_import, division import numpy as np import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_format = 'retina' from tifffile import imread from csbdeep.utils import download_and_extract_zip_file, plot_some, axes_dict from csbdeep.io import save_training_data from csbdeep.data import RawData, create_patches from csbdeep.data.transform import anisotropic_distortions ###Output _____no_output_____ ###Markdown Download example dataFirst we download some example data, consisting of a synthetic 3D stacks with membrane-like structures. ###Code download_and_extract_zip_file ( url = 'http://csbdeep.bioimagecomputing.com/example_data/synthetic_upsampling.zip', targetdir = 'data', ) ###Output _____no_output_____ ###Markdown We plot XY and XZ slices of a training stack pair: ###Code y = imread('data/synthetic_upsampling/training_stacks/high/stack_00.tif') x = imread('data/synthetic_upsampling/training_stacks/low/stack_00.tif') print('image size =', x.shape) plt.figure(figsize=(16,15)) plot_some(np.stack([x[5],y[5]]), title_list=[['XY slice (low)','XY slice (high)']], pmin=2,pmax=99.8); plt.figure(figsize=(16,15)) plot_some(np.stack([np.moveaxis(x,1,0)[50],np.moveaxis(y,1,0)[50]]), title_list=[['XZ slice (low)','XZ slice (high)']], pmin=2,pmax=99.8); ###Output _____no_output_____ ###Markdown Generate training data for upsampling CAREWe first need to create a `RawData` object, which defines how to get the pairs of low/high SNR stacks and the semantics of each axis (e.g. which one is considered a color channel, etc.).Here we have two folders "low" and "high", where corresponding low and high-SNR stacks are TIFF images with identical filenames. For this case, we can simply use `RawData.from_folder` and set `axes = 'ZYX'` to indicate the semantic order of the image axes. ###Code raw_data = RawData.from_folder ( basepath = 'data/synthetic_upsampling/training_stacks', source_dirs = ['low'], target_dir = 'high', axes = 'ZYX', ) ###Output _____no_output_____ ###Markdown Furthermore, we must define how to modify the Z axis to mimic a real microscope as closely as possible if data along this axis is acquired with reduced resolution. To that end, we define a `Transform` object that will take our `RawData` as input and return the modified image. Here, we use `anisotropic_distortions` to accomplish this.The most important parameter is the subsampling factor along Z, which should for example be chosen as 4 if it is planned to later acquire (low-SNR) images with 4 times reduced axial resolution. 
###Code anisotropic_transform = anisotropic_distortions ( subsample = 4, psf = None, subsample_axis = 'Z', yield_target = 'target', ) ###Output _____no_output_____ ###Markdown From the synthetically undersampled low quality input stack and its corresponding high quality stack, we now generate some 3D patches. As a general rule, use a patch size that is a power of two along XYZT, or at least divisible by 8. Typically, you should use more patches the more trainings stacks you have. By default, patches are sampled from non-background regions (i.e. that are above a relative threshold), see the documentation of `create_patches` for details.Note that returned values `(X, Y, XY_axes)` by `create_patches` are not to be confused with the image axes X and Y. By convention, the variable name `X` (or `x`) refers to an input variable for a machine learning model, whereas `Y` (or `y`) indicates an output variable. ###Code X, Y, XY_axes = create_patches ( raw_data = raw_data, patch_size = (32,64,64), n_patches_per_image = 512, transforms = [anisotropic_transform], save_file = 'data/my_training_data.npz', ) assert X.shape == Y.shape print("shape of X,Y =", X.shape) print("axes of X,Y =", XY_axes) ###Output _____no_output_____ ###Markdown ShowThis shows a ZY slice of some of the generated patch pairs (even rows: *source*, odd rows: *target*) ###Code for i in range(2): plt.figure(figsize=(16,2)) sl = slice(8*i, 8*(i+1)), slice(None), slice(None), 0 plot_some(X[sl],Y[sl],title_list=[np.arange(sl[0].start,sl[0].stop)]) plt.show() None; ###Output _____no_output_____ ###Markdown Demo: Training data generation for combined denoising and upsamling of synthetic 3D dataThis notebook demonstrates training data generation for a combined denoising and upsampling task of synthetic 3D data, where corresponding pairs of isotropic low and high quality stacks can be acquired.Anisotropic distortions along the Z axis will be simulated for the low quality stack, such that a CARE model trained on this data can be applied to images with anisotropic resolution along Z.We will use only a few synthetically generated stacks for training data generation, whereas in your application you should aim to use stacks from different developmental timepoints to ensure a well trained model. More documentation is available at http://csbdeep.bioimagecomputing.com/doc/. ###Code from __future__ import print_function, unicode_literals, absolute_import, division import numpy as np import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_format = 'retina' from tifffile import imread from csbdeep.utils import download_and_extract_zip_file, plot_some, axes_dict from csbdeep.io import save_training_data from csbdeep.data import RawData, create_patches from csbdeep.data.transform import anisotropic_distortions ###Output _____no_output_____ ###Markdown Download example dataFirst we download some example data, consisting of a synthetic 3D stacks with membrane-like structures. 
###Code download_and_extract_zip_file ( url = 'http://csbdeep.bioimagecomputing.com/example_data/synthetic_upsampling.zip', targetdir = 'data', ) ###Output _____no_output_____ ###Markdown We plot XY and XZ slices of a training stack pair: ###Code y = imread('data/synthetic_upsampling/training_stacks/high/stack_00.tif') x = imread('data/synthetic_upsampling/training_stacks/low/stack_00.tif') print('image size =', x.shape) plt.figure(figsize=(16,15)) plot_some(np.stack([x[5],y[5]]), title_list=[['XY slice (low)','XY slice (high)']], pmin=2,pmax=99.8); plt.figure(figsize=(16,15)) plot_some(np.stack([np.moveaxis(x,1,0)[50],np.moveaxis(y,1,0)[50]]), title_list=[['XZ slice (low)','XZ slice (high)']], pmin=2,pmax=99.8); ###Output _____no_output_____ ###Markdown Generate training data for upsampling CAREWe first need to create a `RawData` object, which defines how to get the pairs of low/high SNR stacks and the semantics of each axis (e.g. which one is considered a color channel, etc.).Here we have two folders "low" and "high", where corresponding low and high-SNR stacks are TIFF images with identical filenames. For this case, we can simply use `RawData.from_folder` and set `axes = 'ZYX'` to indicate the semantic order of the image axes. ###Code raw_data = RawData.from_folder ( basepath = 'data/synthetic_upsampling/training_stacks', source_dirs = ['low'], target_dir = 'high', axes = 'ZYX', ) ###Output _____no_output_____ ###Markdown Furthermore, we must define how to modify the Z axis to mimic a real microscope as closely as possible if data along this axis is acquired with reduced resolution. To that end, we define a `Transform` object that will take our `RawData` as input and return the modified image. Here, we use `anisotropic_distortions` to accomplish this.The most important parameter is the subsampling factor along Z, which should for example be chosen as 4 if it is planned to later acquire (low-SNR) images with 4 times reduced axial resolution. ###Code anisotropic_transform = anisotropic_distortions ( subsample = 4, psf = None, subsample_axis = 'Z', yield_target = 'target', ) ###Output _____no_output_____ ###Markdown From the synthetically undersampled low quality input stack and its corresponding high quality stack, we now generate some 3D patches. As a general rule, use a patch size that is a power of two along XYZT, or at least divisible by 8. Typically, you should use more patches the more trainings stacks you have. By default, patches are sampled from non-background regions (i.e. that are above a relative threshold), see the documentation of `create_patches` for details.Note that returned values `(X, Y, XY_axes)` by `create_patches` are not to be confused with the image axes X and Y. By convention, the variable name `X` (or `x`) refers to an input variable for a machine learning model, whereas `Y` (or `y`) indicates an output variable. ###Code X, Y, XY_axes = create_patches ( raw_data = raw_data, patch_size = (32,64,64), n_patches_per_image = 512, transforms = [anisotropic_transform], save_file = 'data/my_training_data.npz', ) assert X.shape == Y.shape print("shape of X,Y =", X.shape) print("axes of X,Y =", XY_axes) ###Output _____no_output_____ ###Markdown ShowThis shows a ZY slice of some of the generated patch pairs (odd rows: *source*, even rows: *target*) ###Code for i in range(2): plt.figure(figsize=(16,2)) sl = slice(8*i, 8*(i+1)), slice(None), slice(None), 0 plot_some(X[sl],Y[sl],title_list=[np.arange(sl[0].start,sl[0].stop)]) plt.show() None; ###Output _____no_output_____
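###Markdown The `.npz` file written by `create_patches` is what a subsequent training notebook consumes. A short sketch of reloading it with a validation split -- assuming the installed csbdeep exposes `load_training_data` as in the accompanying training examples: ###Code
from csbdeep.io import load_training_data

# Reload the saved patches and hold back 10% for validation.
(X, Y), (X_val, Y_val), axes = load_training_data('data/my_training_data.npz',
                                                  validation_split=0.1, verbose=True)
print(X.shape, X_val.shape, axes)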
Notebooks/Facebook_impression_classification.ipynb
###Markdown Classifying Facebook post impressions Nicholas Lines This notebook is my work for Module 8 of EN.605.633.81.SP21 Social Media Analytics. The problem prompt is > Design a supervised machine learning classifier that will predict whether the impression of Facebook posts will be greater than 1,000 based on data such as the type of post, weekday or weekend, and weather. Implement the classifier and test it using Python. The data corpus contains data from Facebook on recent posts for a bicycle shop. The columns are Post Message, Message Type, Date and Time of Post, Number of Impression, Weather indicator for snow or rain, and weekday or weekend indicator for the post. > Test your classifier and calculate precision and recall. Turn in Python code for the classifier The data used was provided by the instructor. Environment setup I'll try to use `scikit-learn` built-in functions and structures wherever possible for consistency and ease of review. ###Code %pylab inline import pandas as pd import os from sklearn.model_selection import train_test_split from sklearn.ensemble import GradientBoostingClassifier from sklearn.model_selection import cross_val_score from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import VotingClassifier from sklearn.neural_network import MLPClassifier from sklearn.svm import SVC from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import average_precision_score from sklearn.metrics import precision_recall_curve from sklearn.metrics import plot_precision_recall_curve from sklearn.metrics import plot_confusion_matrix from sklearn.metrics import precision_score, recall_score if 'COLAB_GPU' in os.environ: # a hacky way of determining if you are in colab. print("Notebook is running in colab") from google.colab import drive drive.mount("/content/drive") DATA_DIR = "./drive/My Drive/Data/" else: # Get the system information from the OS PLATFORM_SYSTEM = platform.system() # Darwin is macOS if PLATFORM_SYSTEM == "Darwin": EXECUTABLE_PATH = Path("../dependencies/chromedriver") elif PLATFORM_SYSTEM == "Windows": EXECUTABLE_PATH = Path("../dependencies/chromedriver.exe") else: logging.critical("Chromedriver not found or Chromedriver is outdated...") exit() DATA_DIR = "../Data/" ###Output Notebook is running in colab Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True). ###Markdown Data read-in and feature review ###Code df = pd.read_excel(DATA_DIR + "raw/SocialMediaInsightsforMachineLearning.xlsm", parse_dates=True) df = df.drop(101) df print(f"There are {df.shape[0]} data points and {df.shape[1]} columns") df.Impressions.hist(label="impressions"); legend(); df.Posted.hist(label="posts by date") legend(); ###Output _____no_output_____ ###Markdown Let's do some gentle feature engineering, using one-hot or dummy encoding to represent categorical data, and extracting a few features from the post body, extract time of day from the post time, etc. 
###Code df['num_hashtags'] = df["Post Message"].str.count("#") # check if the hash symbol is used df['post_length'] = df["Post Message"].str.len() # get the length of the string df['contains_link'] = (df["Post Message"].str.count("http") + df["Post Message"].str.count("www")).astype(bool).astype(int) # check if http or www are used in the string df['Time'] = df.Posted.dt.hour * 60 + df.Posted.dt.minute Types_df = pd.get_dummies(df.Type, prefix='Type') Weather_df = pd.get_dummies(df.Weather, prefix='Weather') Weekend_df = pd.get_dummies(df.Weekend, prefix='Weekend', drop_first=True) df = pd.concat([df, Types_df, Weather_df, Weekend_df], axis=1) df.head() ###Output _____no_output_____ ###Markdown Now that we have some numeric features to work with, let's just check which are most strongly corrolated with Impressions: ###Code df.corr() matshow(df.corr()); pd.plotting.scatter_matrix(df, figsize=(20,20)); ###Output _____no_output_____ ###Markdown It is clear that we want to include post type data in our decision, as well as post length and time of posting, as a minimum. Setting up training/test data ###Code X = df[["num_hashtags", "post_length", "Time", "Type_Link", "Type_Photo", "Type_SharedVideo", "Type_Status", "Type_Video", "Weather_Rain", "Weather_Snow", "Weather_Thunder", "Weekend_Y"]].to_numpy() y = (df[["Impressions"]] > 1000).astype(int).to_numpy().flatten() X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42) ###Output _____no_output_____ ###Markdown Train and test classifiers I'll throw a bunch of non-optimized standard classifiers at the data. In particular, I'll try Gradient boosting, Random forests, a Multi-layered Perceptron, a Support Vector Machine, K-nearest neighbors, and an ensemble. ###Code clf0 = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0).fit(X_train, y_train) clf1 = LogisticRegression(random_state=1) clf2 = RandomForestClassifier(n_estimators=100, random_state=1) clf3 = GaussianNB() clf4 = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1) clf5 = SVC() clf6 = DecisionTreeClassifier() clf7 = KNeighborsClassifier() eclf = VotingClassifier( estimators=[ #('gb', clf0), #('lr', clf1), ('rf', clf2), ('gnb', clf3), ('mlp', clf4), ('svm', clf5), ('dt', clf6), ], voting='hard') for clf, label in zip([ clf0, #clf1, clf2, clf3, clf4, clf5, clf6, clf7, eclf, ], [ 'Gradient Boosting', #'Logistic Regression', 'Random Forest', 'naive Bayes', 'Multi Layered Perceptron', 'Support Vector Machine', 'Decision Tree Classifier', 'K Nearest Neighbors Classifier', 'Ensemble']): clf.fit(X_train, y_train) plot_confusion_matrix(clf, X_test, y_test) plt.title(label) scores = cross_val_score(clf, X_test, y_test, scoring='accuracy', cv=5) print(label + ": ") print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label)) if not label=="Ensemble": # the decision function isn't defined in that case y_pred = clf.predict(X_test) print(f"Precision score: {precision_score(y_test, y_pred)}") print(f"Recall score: {recall_score(y_test, y_pred)}") average_precision = average_precision_score(y_test, y_pred) print('Average precision-recall score: {0:0.2f}'.format(average_precision)) disp = plot_precision_recall_curve(clf, X_test, y_test) disp.ax_.set_title(label + ' 2-class Precision-Recall curve: ' 'AP={0:0.2f}'.format(average_precision)) print("\n") ###Output Gradient Boosting: Accuracy: 0.52 (+/- 0.19) [Gradient Boosting] Precision score: 0.4 Recall score: 
0.13333333333333333 Average precision-recall score: 0.37 Random Forest: Accuracy: 0.66 (+/- 0.12) [Random Forest] Precision score: 0.6666666666666666 Recall score: 0.13333333333333333 Average precision-recall score: 0.41 naive Bayes: Accuracy: 0.59 (+/- 0.11) [naive Bayes] Precision score: 0.4782608695652174 Recall score: 0.7333333333333333 Average precision-recall score: 0.45 Multi Layered Perceptron: Accuracy: 0.63 (+/- 0.02) [Multi Layered Perceptron] Precision score: 0.0 Recall score: 0.0 Average precision-recall score: 0.37
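###Markdown The report above shows the Multi Layered Perceptron collapsing to 0.0 precision and recall, which is common when scale-sensitive models see unscaled dollar-valued features next to 0/1 dummies. A hedged sketch (reusing `X_train`/`X_test`/`y_train`/`y_test` from above; the layer sizes and `max_iter` are illustrative, not tuned) that standardizes the features inside a pipeline before refitting: ###Code
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# Scaling happens inside the pipeline, so the test fold never leaks into the scaler fit.
scaled_mlp = make_pipeline(StandardScaler(),
                           MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=1))
scaled_mlp.fit(X_train, y_train)
print('Scaled MLP accuracy: {:.2%}'.format(scaled_mlp.score(X_test, y_test)))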
EDA/EDA_ON_Dataset1_final.ipynb
###Markdown EDA on dataset - 1 ###Code #importing the required libraries import matplotlib.pyplot as plt import pandas as pd import numpy as np from scipy.fft import fft,fftfreq from sklearn.preprocessing import StandardScaler #loading the dataset using pandas data1 = pd.read_csv("../Dataset/Train/Voltage_L1_DataSet1.csv") out1 = pd.read_csv("../Dataset/Train/OutputFor_DataSet1.csv") data2 = pd.read_csv("../Dataset/Test/Voltage_L1_DataSet2.csv") out2 = pd.read_csv("../Dataset/Test/OutputFor_DataSet2.csv") print("data1",data1.shape) print("out1",out1.shape) print("data2",data2.shape) print("out2",out2.shape) ###Output data1 (11899, 128) out1 (5999, 1) data2 (5999, 128) out2 (5999, 1) ###Markdown Data Preprocessing This segment of notebook contains all the preprocessing steps which are performed on the data. Data cleaning ###Code #dropna() function is used to remove all those rows which contains NA values data1.dropna(axis=0,inplace=True) #shape of the data frame after dropping the rows containing NA values data1.shape #here we are constructing the array which will finally contain the column names header =[] for i in range(1,data1.shape[1]+1): header.append("Col"+str(i)) #assigning the column name array to the respectinve dataframes data1.columns = header data2.columns = header data1.head() data2.head() #now we are combining the two dataframes to make a final dataframe data = data1.append(data2, ignore_index = True) data.head() data.shape #here we are giving a name to the output column header_out = ["output"] out1.columns = header_out out2.columns = header_out out2.head() #now we are combining the output columns output = out1.append(out2, ignore_index = True) output.head() output.shape #now we are appending the output column to the original dataframe which contains the power signals data['output'] = output data.head() data_arr = data.to_numpy() transform = StandardScaler() data_norm = transform.fit_transform(data_arr) data_norm_fft = data_arr.copy() n = data_norm_fft.shape[0] for i in range(0,n): data_norm_fft[i] = np.append(np.abs(fft(data_norm_fft[i][0:128])),data_norm_fft[i][128]) transform = StandardScaler() data_norm_fft = transform.fit_transform(data_norm_fft) data_arr.shape print("class", "Normal wave") fig, axes = plt.subplots(2, 2,figsize=(16,8)) #raw data axes[0][0].plot([i for i in range(1,129)], data_arr[0][0:128]) axes[0][0].title.set_text('Raw_data') #with fft yf = fft(data_arr[0][0:128]) xf = fftfreq(128,1/128) axes[0][1].plot(xf, np.abs(yf)) axes[0][1].title.set_text('With_FFT') #after normalization axes[1][0].plot([i for i in range(1,129)], data_norm[0][0:128]) axes[1][0].title.set_text('With_Norm') #with normalization and fft axes[1][1].plot([i for i in range(1,129)], data_norm_fft[0][0:128]) axes[1][1].title.set_text('With_NormAndFFT') print("class", "3rd harmonic wave") fig, axes = plt.subplots(2, 2,figsize=(16,8)) #raw data axes[0][0].plot([i for i in range(1,129)], data_arr[1][0:128]) axes[0][0].title.set_text('Raw_data') #with fft yf = fft(data_arr[1][0:128]) xf = fftfreq(128,1/128) axes[0][1].plot(xf, np.abs(yf)) axes[0][1].title.set_text('With_FFT') #after normalization axes[1][0].plot([i for i in range(1,129)], data_norm[1][0:128]) axes[1][0].title.set_text('With_Norm') #with normalization and fft axes[1][1].plot([i for i in range(1,129)], data_norm_fft[1][0:128]) axes[1][1].title.set_text('With_NormAndFFT') print("class", "5th harmonic wave") fig, axes = plt.subplots(2, 2,figsize=(16,8)) #raw data axes[0][0].plot([i for i in range(1,129)], data_arr[3][0:128]) 
axes[0][0].title.set_text('Raw_data') #with fft yf = fft(data_arr[3][0:128]) xf = fftfreq(128,1/128) axes[0][1].plot(xf, np.abs(yf)) axes[0][1].title.set_text('With_FFT') #after normalization axes[1][0].plot([i for i in range(1,129)], data_norm[3][0:128]) axes[1][0].title.set_text('With_Norm') #with normalization and fft axes[1][1].plot([i for i in range(1,129)], data_norm_fft[3][0:128]) axes[1][1].title.set_text('With_NormAndFFT') print("class", "Voltage dip") fig, axes = plt.subplots(2, 2,figsize=(16,8)) #raw data axes[0][0].plot([i for i in range(1,129)], data_arr[6][0:128]) axes[0][0].title.set_text('Raw_data') #with fft yf = fft(data_arr[6][0:128]) xf = fftfreq(128,1/128) axes[0][1].plot(xf, np.abs(yf)) axes[0][1].title.set_text('With_FFT') #after normalization axes[1][0].plot([i for i in range(1,129)], data_norm[6][0:128]) axes[1][0].title.set_text('With_Norm') #with normalization and fft axes[1][1].plot([i for i in range(1,129)], data_norm_fft[6][0:128]) axes[1][1].title.set_text('With_NormAndFFT') print("class", "Transient wave") fig, axes = plt.subplots(2, 2,figsize=(16,8)) #raw data axes[0][0].plot([i for i in range(1,129)], data_arr[8][0:128]) axes[0][0].title.set_text('Raw_data') #with fft yf = fft(data_arr[8][0:128]) xf = fftfreq(128,1/128) axes[0][1].plot(xf, np.abs(yf)) axes[0][1].title.set_text('With_FFT') #after normalization axes[1][0].plot([i for i in range(1,129)], data_norm[8][0:128]) axes[1][0].title.set_text('With_Norm') #with normalization and fft axes[1][1].plot([i for i in range(1,129)], data_norm_fft[8][0:128]) axes[1][1].title.set_text('With_NormAndFFT') ###Output class Transient wave
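###Markdown With the raw, FFT and normalized views compared above, a natural next step is a baseline classifier on the FFT magnitudes. A sketch reusing `data_arr` from above; it standardizes only the 128 signal columns and keeps the label column unscaled (the class codes are assumed to be integer-valued): ###Code
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# FFT magnitudes of the 128 samples per row, standardized; labels left untouched.
X = StandardScaler().fit_transform(np.abs(fft(data_arr[:, :128], axis=1)))
y = data_arr[:, 128].astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))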
Random Forest/.ipynb_checkpoints/DOPE_Random_Forest_Classification_Model_-Final-checkpoint.ipynb
###Markdown Initialize data set ###Code # Select data elements sql_cols = ('federal_action_obligation, ' #'total_dollars_obligated, ' 'base_and_exercised_options_value, ' 'base_and_all_options_value, ' #'awarding_sub_agency_name, ' 'awarding_sub_agency_code, ' #'awarding_office_name, ' 'awarding_office_code, ' #'funding_sub_agency_name, ' 'funding_sub_agency_code, ' #'funding_office_name, ' too many NaN 'primary_place_of_performance_state_code, ' 'award_or_idv_flag, ' #'award_type, ' 'award_type_code, ' #'type_of_contract_pricing, ' 'type_of_contract_pricing_code, ' #'dod_claimant_program_description, ' 'dod_claimant_program_code, ' 'type_of_set_aside_code, ' #'multi_year_contract, ' too many NaN #'dod_acquisition_program_description, ' too many NaN #'subcontracting_plan, ' too many NaN #'contract_bundling, ' 'contract_bundling_code, ' #'evaluated_preference, ' too many NaN #'national_interest_action, ' 'national_interest_action_code, ' #'cost_or_pricing_data, ' too many NaN #'gfe_gfp, ' 'gfe_gfp_code, ' #'contract_financing, ' 'contract_financing_code, ' 'portfolio_group, ' #'product_or_service_code_description, ' 'product_or_service_code, ' #'naics_bucket_title, ' too many NaN #'naics_description' 'naics_code' ) # Create dataframe sql_tbl_name = 'consolidated_data2' df = pd.read_sql_query('SELECT ' + sql_cols + ' FROM ' + sql_tbl_name, con=conn) print('Shape of initial df:', df.shape) # Drop rows with NaN values df = df.dropna() print('Shape with no NaN values:', df.shape) # Create two columns for set-aside (0/1) and contract value def set_aside(c): if c['type_of_set_aside_code'] == 'NONE': return 0 else: return 1 def contract_value(c): if c['base_and_exercised_options_value'] > 0: return c['base_and_exercised_options_value'] elif c['base_and_all_options_value'] > 0: return c['base_and_all_options_value'] elif c['federal_action_obligation'] > 0: return c['federal_action_obligation'] else: return 0 df['set_aside'] = df.apply(set_aside, axis=1) df['contract_value'] = df.apply(contract_value, axis=1) # Drop columns that are no longer needed df = df.drop(['type_of_set_aside_code','base_and_exercised_options_value','base_and_all_options_value', 'federal_action_obligation'], axis=1) ###Output _____no_output_____ ###Markdown Feature Selection Initialize Model ###Code # Create feature and target dataframes X_int = df.drop(['set_aside'], axis = 1) y = df['set_aside'] # One hot encoding for features X_int = pd.get_dummies(X_int) print('Shape of OHE feature df:', X_int.shape) # Import Random Forest Classifier modules from sklearn.model_selection import train_test_split, cross_val_score from yellowbrick.classifier import ClassificationReport from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import confusion_matrix, recall_score, precision_score, classification_report # Fit initial model X_train, X_test, y_train, y_test = train_test_split(X_int, y, test_size=0.20, random_state=42) model = RandomForestClassifier(n_estimators=17, n_jobs=-1, random_state=0) model.fit(X_train, y_train) print('Model Accuracy: {:.2%}'.format(model.score(X_test, y_test))) ###Output Model Accuracy: 91.40% ###Markdown Find Important Features ###Code # Calcultae feature importance feature_importances = pd.DataFrame(model.feature_importances_, index = X_train.columns, columns=['importance']).sort_values('importance', ascending=False) # Sort descending important features... # Calculate cumalative percentage of total importance... 
# Only keep features accounting for top 80% of feature importance feature_importances['cumpercent'] = feature_importances['importance'].cumsum()/feature_importances['importance'].sum()*100 relevant_features = feature_importances[feature_importances.cumpercent < 80] print('Shape of relevant features:', relevant_features.shape) # Create list of relevant features to create new dataframe with only relevant features list_relevant_features = list(relevant_features.index) X = X_int[list_relevant_features] print('Shape of initialized feature dataframe X with only relevant features:', X.shape) # Test accuracy of initialized dataframe X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42) model = RandomForestClassifier(n_estimators=17, n_jobs=-1, random_state=0) model.fit(X_train, y_train) print('Model Accuracy: {:.2%}'.format(model.score(X_test, y_test))) predictions = model.predict(X_test) ###Output _____no_output_____ ###Markdown Perform Random Forest Classification Using only relevant features dataframe ###Code classes = ['None', 'Set Aside'] visualizer = ClassificationReport(model, classes=classes, support=True) visualizer.score(X_test, y_test) visualizer.show() model_score_f1 = cross_val_score(estimator=model, X=X, y=y, scoring='f1', cv=12) model_score_precision = cross_val_score(estimator=model, X=X, y=y, scoring='precision', cv=12) model_score_recall = cross_val_score(estimator=model, X=X, y=y, scoring='recall', cv=12) print("Accuracy : ", round(model_score_f1.mean(),3)) print('Standard Deviation : ',round(model_score_f1.std(),3)) print('Precision : ', round(model_score_precision.mean(),3)) print('Recall : ', round(model_score_recall.mean(),3)) print('Confusion Matrix') print(confusion_matrix(y_test, predictions)) print(classification_report(y_test, predictions)) import pickle #Save Trained Model filename = 'RandomForest_SetAside_None_Model.save' pickle.dump(model, open(filename, 'wb')) ###Output _____no_output_____ ###Markdown Perform Second Model - Predict Type of Set-Aside ###Code df1 = pd.read_sql_query('SELECT ' + sql_cols + ' FROM ' + sql_tbl_name, con=conn) print('Shape of initial df:', df1.shape) # Drop all instances where type_of_set_aside_code = NONE none_set_asides = df1[df1['type_of_set_aside_code'] == 'NONE'].index df1 = df1.drop(none_set_asides, axis=0) print('Shape of dataframe WITH set-asides:', df1.shape) # Create column for contract value def contract_value(c): if c['base_and_exercised_options_value'] > 0: return c['base_and_exercised_options_value'] elif c['base_and_all_options_value'] > 0: return c['base_and_all_options_value'] elif c['federal_action_obligation'] > 0: return c['federal_action_obligation'] else: return 0 df1['contract_value'] = df1.apply(contract_value, axis=1) # Assign numerics to set-aside codes df1['set_aside_number'] = df1['type_of_set_aside_code'].map({'SBA':1, '8AN':2, '8A':3, 'SDVOSBC':4,'HZC':5, 'WOSB':6, 'SBP':7, 'EDWOSB':7, 'SDVOSBS':7, 'HZS':7, 'WOSBSS':7, 'EDWOSBSS':7, 'ISBEE':7, 'HS3':7, 'IEE':7}) # Drop columns that are no longer needed df1 = df1.drop(['type_of_set_aside_code','base_and_exercised_options_value','base_and_all_options_value', 'federal_action_obligation'], axis=1) df1 = df1.dropna() print('Shape of dataframe WITH set-asides with no NaN values:', df1.shape) X1 = df1.drop(['set_aside_number'], axis=1).copy() print('Shape of originial X1 dataframe:', X1.shape) # One hot encoding X1 = pd.get_dummies(X1) # Create a list of relevant features in X1 based on the list of previous relevant features from 
feature selection # Note numpy is taking only relevant features from the first feature selection that are also in X1 cols = list(X1.columns) updated_list_relevant_features = np.asarray(list_relevant_features)[np.in1d(list_relevant_features, cols)].tolist() # Updated dummy table with only relevant features X1 = X1[updated_list_relevant_features] print('Shape of X1 dummy dataframe:', X1.shape) y1 = df1['set_aside_number'].copy() df1.head() y1.value_counts() X1_train, X1_test, y1_train, y1_test = train_test_split(X1, y1, test_size=0.20, random_state=42) model1 = RandomForestClassifier(n_estimators=17) model1.fit(X1_train, y1_train) X1_train.shape classes1 = ['SBA', '8AN', '8A', 'SDVOSBC','HZC', 'WOSB', 'OTHER SET ASIDE'] visualizer = ClassificationReport(model1, classes=classes1, support=True) visualizer.score(X1_test, y1_test) visualizer.show() predictions_all_set_aside = model1.predict(X1_test) print(classification_report(y1_test, predictions_all_set_aside)) X2 = pd.DataFrame(data=X1, columns=X1.columns) X2.head() X2['set_aside_number'] = y1 X2.head() # Next I am testing the accuracy of the model on each specific set aside. Because we have an unbalanced data set # it seems that the model is great for predicting set asides in general, however it is also skewed to better # predict certain categories compared to others. # Create a dictionary object to capture set aside code and it's score class scores(dict): # __init__ function def __init__(self): self = dict() # Function to add key:value def add(self, key, value): self[key] = value scores = scores() percent = '' set_aside_codes = X2['set_aside_number'].unique() print(set_aside_codes) # Loop through each set aside, test it, and append to the dictionary for set_aside in set_aside_codes: dataPoint = X2.loc[X2['set_aside_number'] == set_aside] XPoint = dataPoint.drop(['set_aside_number'],axis=1) yPoint = dataPoint['set_aside_number'] percent = model1.score(XPoint, yPoint) percent = round(percent, 4) scores.add(set_aside, percent) # Sort the dictionary by score import operator sortedScores = sorted(scores.items(), key=operator.itemgetter(1)) # Print scores for score in reversed(sortedScores): print("{:<8} {:.2%}".format(score[0], score[1])) model_score_all_set_aside_f1 = cross_val_score(estimator=model1, X=X1, y=y1, scoring='f1_weighted', cv=12) model_score_all_set_aside_precision = cross_val_score(estimator=model1, X=X1, y=y1, scoring='precision_weighted', cv=12) model_score_all_set_aside_recall = cross_val_score(estimator=model1, X=X1, y=y1, scoring='recall_weighted', cv=12) print("Accuracy : ", round(model_score_all_set_aside_f1.mean(),2)) print('Standard Deviation : ',round(model_score_all_set_aside_f1.std(),3)) print('Precision : ', round(model_score_all_set_aside_precision.mean(),3)) print('Recall : ', round(model_score_all_set_aside_recall.mean(),3)) print("") print('Confusion Matrix') print(confusion_matrix(y1_test, predictions_all_set_aside)) #Save Trained Model filename = 'RandomForest_All_Set_Aside_Model.save' pickle.dump(model, open(filename, 'wb')) ###Output _____no_output_____
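###Markdown Both saves above persist a fitted estimator with pickle; note that the second save dumps `model`, so if the intent is to persist the set-aside-type classifier, `model1` is presumably the object to write out. Reloading follows the same pattern either way -- a sketch assuming the `.save` file exists and that new rows are one-hot encoded with the same relevant-feature columns as `X1`: ###Code
import pickle

with open('RandomForest_All_Set_Aside_Model.save', 'rb') as f:
    loaded_clf = pickle.load(f)

# Score a few held-out rows; columns must match the training frame exactly.
print(loaded_clf.predict(X1_test[:5]))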
data/.ipynb_checkpoints/ETL Pipeline Preparation-checkpoint.ipynb
###Markdown ETL Pipeline PreparationFollow the instructions below to help you create your ETL pipeline. 1. Import libraries and load datasets.- Import Python libraries- Load `messages.csv` into a dataframe and inspect the first few lines.- Load `categories.csv` into a dataframe and inspect the first few lines. ###Code # import libraries import pandas as pd from sqlalchemy import create_engine # load messages dataset messages = pd.read_csv('messages.csv') messages.head() # load categories dataset categories = pd.read_csv('categories.csv') categories.head() ###Output _____no_output_____ ###Markdown 2. Merge datasets.- Merge the messages and categories datasets using the common id- Assign this combined dataset to `df`, which will be cleaned in the following steps ###Code # merge datasets df = messages.merge(categories) df.head() ###Output _____no_output_____ ###Markdown 3. Split `categories` into separate category columns.- Split the values in the `categories` column on the `;` character so that each value becomes a separate column. You'll find [this method](https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.Series.str.split.html) very helpful! Make sure to set `expand=True`.- Use the first row of categories dataframe to create column names for the categories data.- Rename columns of `categories` with new column names. ###Code # create a dataframe of the 36 individual category columns categories = pd.DataFrame(list(df.categories.apply(lambda x: x.split(';')))) categories.columns = [n.split('-')[0] for n in categories.iloc[0:1].values[0]] for column in categories: categories[column] = categories[column].apply(lambda x: x.split('-')[1]) categories = categories.apply(pd.to_numeric) categories.head() ###Output _____no_output_____ ###Markdown 5. Replace `categories` column in `df` with new category columns.- Drop the categories column from the df dataframe since it is no longer needed.- Concatenate df and categories data frames. ###Code # drop the original categories column from `df` df.drop('categories' ,inplace=True, axis =1) df.head() # concatenate the original dataframe with the new `categories` dataframe df = pd.concat( (df, categories), axis = 1) df.head() ###Output _____no_output_____ ###Markdown 6. Remove duplicates.- Check how many duplicates are in this dataset.- Drop the duplicates.- Confirm duplicates were removed. ###Code # check number of duplicates print(f'Number of duplicates: {len(df[df.duplicated()])}') print(f'Size of the original dataframe: {len(df)}') # drop duplicates df = df.drop_duplicates() # check number of duplicates print(f'Number of duplicates: {len(df[df.duplicated()])}') ###Output Number of duplicates: 0 ###Markdown 7. Save the clean dataset into an sqlite database.You can do this with pandas [`to_sql` method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html) combined with the SQLAlchemy library. Remember to import SQLAlchemy's `create_engine` in the first cell of this notebook to use it below. ###Code engine = create_engine('sqlite:///InsertDatabaseName.db') df.to_sql('InsertTableName', engine, index=False) ###Output _____no_output_____ ###Markdown ETL Pipeline PreparationFollow the instructions below to help you create your ETL pipeline. 1. Import libraries and load datasets.- Import Python libraries- Load `messages.csv` into a dataframe and inspect the first few lines.- Load `categories.csv` into a dataframe and inspect the first few lines. 
###Code # import libraries import pandas as pd from sqlalchemy import create_engine # load messages dataset messages = pd.read_csv("disaster_messages.csv") messages.shape # load categories dataset categories = pd.read_csv("disaster_categories.csv") categories.shape ###Output _____no_output_____ ###Markdown 2. Merge datasets.- Merge the messages and categories datasets using the common id- Assign this combined dataset to `df`, which will be cleaned in the following steps ###Code # merge datasets df = messages.merge(categories,on = 'id') df.head() ###Output _____no_output_____ ###Markdown 3. Split `categories` into separate category columns.- Split the values in the `categories` column on the `;` character so that each value becomes a separate column. You'll find [this method](https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.Series.str.split.html) very helpful! Make sure to set `expand=True`.- Use the first row of categories dataframe to create column names for the categories data.- Rename columns of `categories` with new column names. ###Code # create a dataframe of the 36 individual category columns categories = df["categories"].str.split(";",expand=True) categories.head() # select the first row of the categories dataframe row = categories.iloc[0] # use this row to extract a list of new column names for categories. # one way is to apply a lambda function that takes everything # up to the second to last character of each string with slicing category_colnames = row.apply(lambda x:x[:-2]) print(category_colnames) # rename the columns of `categories` categories.columns = category_colnames categories.head() ###Output _____no_output_____ ###Markdown 4. Convert category values to just numbers 0 or 1.- Iterate through the category columns in df to keep only the last character of each string (the 1 or 0). For example, `related-0` becomes `0`, `related-1` becomes `1`. Convert the string to a numeric value.- You can perform [normal string actions on Pandas Series](https://pandas.pydata.org/pandas-docs/stable/text.htmlindexing-with-str), like indexing, by including `.str` after the Series. You may need to first convert the Series to be of type string, which you can do with `astype(str)`. ###Code for column in categories: # set each value to be the last character of the string categories[column] = categories[column].astype(str).str[-1:] # convert column from string to numeric categories[column] = categories[column].astype(int) categories.head() ###Output _____no_output_____ ###Markdown 5. Replace `categories` column in `df` with new category columns.- Drop the categories column from the df dataframe since it is no longer needed.- Concatenate df and categories data frames. ###Code # drop the original categories column from `df` df = df.drop(['categories'],axis=1) df.head() # concatenate the original dataframe with the new `categories` dataframe df = pd.concat([df,categories],join = 'inner',axis=1) df.head() ###Output _____no_output_____ ###Markdown 6. Remove duplicates.- Check how many duplicates are in this dataset.- Drop the duplicates.- Confirm duplicates were removed. ###Code # check number of duplicates print (f'number of duplicates {df.duplicated().sum()}') # drop duplicates df = df.drop_duplicates() # check number of duplicates print (f'number of duplicates {df.duplicated().sum()}') ###Output number of duplicates 0 ###Markdown 7. 
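###Markdown The next cell performs this conversion with an explicit per-column loop. An equivalent vectorized sketch using the pandas string accessor (same result, assuming every cell ends in a single 0/1 digit): ###Code
# Keep only the trailing character of each 'category-0/1' string and cast to int.
categories = categories.apply(lambda col: col.str[-1]).astype(int)
categories.head()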
Save the clean dataset into an sqlite database.You can do this with pandas [`to_sql` method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html) combined with the SQLAlchemy library. Remember to import SQLAlchemy's `create_engine` in the first cell of this notebook to use it below. ###Code engine = create_engine('sqlite:///DisasterResponseDatabase.db') df.to_sql('DisasterResponseDatabaseTable', engine,if_exists='replace', index=False) ###Output _____no_output_____ ###Markdown ETL Pipeline PreparationFollow the instructions below to help you create your ETL pipeline. 1. Import libraries and load datasets.- Import Python libraries- Load `messages.csv` into a dataframe and inspect the first few lines.- Load `categories.csv` into a dataframe and inspect the first few lines. ###Code import pandas as pd import numpy as np # load dataset messages = pd.read_csv('disaster_messages.csv') categories = pd.read_csv('disaster_categories.csv') messages.head() categories.head() ###Output _____no_output_____ ###Markdown 1. Cleaning data > Number of Messages and Categories in this Experiment ###Code print(messages.shape) print(categories.shape) ###Output (26248, 4) (26248, 2) ###Markdown > Checking the null values ###Code messages.isnull().sum() categories.isnull().sum() messages[messages.original.isnull()==True] messages.isnull().sum() ###Output _____no_output_____ ###Markdown > Dropping duplicatesEliminating duplicates will not be identical to two movie at all ###Code sum(messages.duplicated()) sum(categories.duplicated()) messages[messages.id.duplicated()==True] # dropping ALL duplicte values messages.drop_duplicates(subset ="id", keep = False, inplace = True) messages.id.duplicated().sum() categories.drop_duplicates(subset ="id", keep = False, inplace = True) categories.duplicated().sum() ###Output _____no_output_____ ###Markdown 2. Dummy the `genre` categorical variables ###Code messages.genre.value_counts() # Dummy the categorical variables messages = pd.concat([messages.drop('genre', axis=1), pd.get_dummies(messages['genre'], prefix='genre', prefix_sep='_')], axis=1) messages.head() ###Output _____no_output_____ ###Markdown 3. Split `categories` into separate category columns.- Split the values in the `categories` column on the `;` character so that each value becomes a separate column. You'll find [this method](https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.Series.str.split.html) very helpful! Make sure to set `expand=True`.- Use the first row of categories dataframe to create column names for the categories data.- Rename columns of `categories` with new column names. ###Code ids=categories.id.values categories_ = categories['categories'].str.split(pat=';',expand=True) categories_.head() ###Output _____no_output_____ ###Markdown 4. Convert category values to just numbers 0 or 1.- Iterate through the category columns in df to keep only the last character of each string (the 1 or 0). For example, `related-0` becomes `0`, `related-1` becomes `1`. Convert the string to a numeric value.- You can perform [normal string actions on Pandas Series](https://pandas.pydata.org/pandas-docs/stable/text.htmlindexing-with-str), like indexing, by including `.str` after the Series. You may need to first convert the Series to be of type string, which you can do with `astype(str)`. 
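###Markdown For reference, the conversion described above can also be done directly with the pandas `.str` accessor. The next cell takes a different, dictionary-based route, so treat this as a minimal alternative sketch on a tiny made-up frame (`cats` is illustrative only, not a variable defined in this notebook): ###Code
import pandas as pd

# Tiny illustrative frame; in this notebook the real data is the split
# `categories_` dataframe, with values like 'related-1' or 'offer-0'.
cats = pd.DataFrame({'related': ['related-1', 'related-0'],
                     'offer': ['offer-0', 'offer-1']})

for column in cats.columns:
    # Keep only the trailing character (the 0 or 1) and cast it to an integer.
    cats[column] = cats[column].astype(str).str[-1].astype(int)

cats.dtypes  # every column is now an integer dtype
###Output _____no_output_____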
###Code data = list() # select the index row of the categories dataframe for i in range(categories_.shape[0]): # convert each cell in row to dict row = dict((j[:-2], j[-1:]) for j in categories_.iloc[i,:]) data.append(row) categories_ = pd.DataFrame.from_dict(data) categories_ categories_['id'] = ids categories_.set_index('id') ###Output _____no_output_____ ###Markdown 5. Merge datasets.- Merge the messages and categories datasets using the common id- Assign this combined dataset to `df`, which will be cleaned in the following steps ###Code df = pd.merge(left=messages, right=categories_, left_index=True, right_index=True, on = ['id'], how='left') df.head() df = df.dropna()# Drop any row with a missing value df.iloc[:,3:].columns # convert column from string to int for column in df.iloc[:,3:].columns: df[column] = df[column].astype(np.int) ###Output _____no_output_____ ###Markdown 6. Save the clean dataset into an sqlite database.You can do this with pandas [`to_sql` method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html) combined with the SQLAlchemy library. Remember to import SQLAlchemy's `create_engine` in the first cell of this notebook to use it below. ###Code import sqlite3 # connect to the data base conn = sqlite3.connect('DisasterResponse.db') # get a cursor cur = conn.cursor() # drop the test table in case it already exists cur.execute("DROP TABLE IF EXISTS merged") # commit changes made to the database conn.commit() # commit any changes and close the data base conn.close() from sqlalchemy import create_engine engine = create_engine('sqlite:///DisasterResponse.db') df.to_sql('merged', engine, index=False) ###Output _____no_output_____ ###Markdown ETL Pipeline PreparationFollow the instructions below to help you create your ETL pipeline. 1. Import libraries and load datasets.- Import Python libraries- Load `messages.csv` into a dataframe and inspect the first few lines.- Load `categories.csv` into a dataframe and inspect the first few lines. ###Code # import libraries import numpy as np import pandas as pd from sqlalchemy import create_engine # load messages dataset messages = pd.read_csv('disaster_messages.csv') messages.head() # load categories dataset categories = pd.read_csv('disaster_categories.csv') categories.head() ###Output _____no_output_____ ###Markdown 2. Merge datasets.- Merge the messages and categories datasets using the common id- Assign this combined dataset to `df`, which will be cleaned in the following steps ###Code # merge datasets df = pd.merge(messages,categories,on='id') df.head() ###Output _____no_output_____ ###Markdown 3. Split `categories` into separate category columns.- Split the values in the `categories` column on the `;` character so that each value becomes a separate column. You'll find [this method](https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.Series.str.split.html) very helpful! Make sure to set `expand=True`.- Use the first row of categories dataframe to create column names for the categories data.- Rename columns of `categories` with new column names. ###Code # create a dataframe of the 36 individual category columns categories = df.categories.str.split(pat=';',expand=True) categories.head() # select the first row of the categories dataframe row = categories.head(1) # use this row to extract a list of new column names for categories. 
# one way is to apply a lambda function that takes everything # up to the second to last character of each string with slicing category_colnames = row.applymap(lambda x: x[:-2]).iloc[0, :].tolist() print(category_colnames) # rename the columns of `categories` categories.columns = category_colnames categories.head() ###Output _____no_output_____ ###Markdown 4. Convert category values to just numbers 0 or 1.- Iterate through the category columns in df to keep only the last character of each string (the 1 or 0). For example, `related-0` becomes `0`, `related-1` becomes `1`. Convert the string to a numeric value.- You can perform [normal string actions on Pandas Series](https://pandas.pydata.org/pandas-docs/stable/text.htmlindexing-with-str), like indexing, by including `.str` after the Series. You may need to first convert the Series to be of type string, which you can do with `astype(str)`. ###Code for column in categories: # set each value to be the last character of the string categories[column] = categories[column].str[-1] # convert column from string to numeric categories[column] = categories[column].astype(int) categories.head() ###Output _____no_output_____ ###Markdown 5. Replace `categories` column in `df` with new category columns.- Drop the categories column from the df dataframe since it is no longer needed.- Concatenate df and categories data frames. ###Code # drop the original categories column from `df` df.drop('categories', axis = 1, inplace = True) df.head() # concatenate the original dataframe with the new `categories` dataframe df = pd.concat([df, categories], axis = 1) df.head() ###Output _____no_output_____ ###Markdown 6. Remove duplicates.- Check how many duplicates are in this dataset.- Drop the duplicates.- Confirm duplicates were removed. ###Code # check number of duplicates df.duplicated().sum() # drop duplicates df.drop_duplicates(inplace = True) # check number of duplicates df.duplicated().sum() ###Output _____no_output_____ ###Markdown 7. Save the clean dataset into an sqlite database.You can do this with pandas [`to_sql` method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html) combined with the SQLAlchemy library. Remember to import SQLAlchemy's `create_engine` in the first cell of this notebook to use it below. ###Code engine = create_engine('sqlite:///DisasterResponse.db') df.to_sql('messages_disaster', engine, index = False) ###Output _____no_output_____ ###Markdown ETL Pipeline Preparation 1. Import libraries and load datasets. ###Code # import libraries import pandas as pd import numpy as np from sqlalchemy import create_engine # load messages dataset messages = pd.read_csv('messages.csv') messages.head() # load categories dataset categories = pd.read_csv('categories.csv') categories.head() ###Output _____no_output_____ ###Markdown 2. Merge datasets. ###Code # merge datasets df = pd.merge(messages, categories, on = 'id') df.head() ###Output _____no_output_____ ###Markdown 3. Split `categories` into separate category columns. ###Code # create a dataframe of the 36 individual category columns categories = df['categories'].str.split(';', expand = True) categories.head() # select the first row of the categories dataframe row = categories.iloc[0].values # use this row to extract a list of new column names for categories. 
category_colnames = [x.split('-')[0] for x in row] print(category_colnames) # rename the columns of `categories` categories.columns = category_colnames categories.head() ###Output _____no_output_____ ###Markdown 4. Convert category values to just numbers 0 or 1. ###Code for column in categories: # set each value to be the last character of the string categories[column] = categories[column].str[-1] # convert column from string to numeric categories[column] = pd.to_numeric(categories[column]) categories.head() ###Output _____no_output_____ ###Markdown 5. Replace `categories` column in `df` with new category columns. ###Code # drop the original categories column from `df` df = df.drop('categories', axis = 1) # concatenate the original dataframe with the new `categories` dataframe df = pd.concat([df, categories], axis = 1) df.head() ###Output _____no_output_____ ###Markdown 6. Remove duplicates. ###Code # check number of duplicates df.duplicated().sum() # drop duplicates df = df.drop_duplicates() # check number of duplicates df.duplicated().sum() ###Output _____no_output_____ ###Markdown 7. Save the clean dataset into an sqlite database. ###Code engine = create_engine('sqlite:///DisasterResponse.db') df.to_sql('df', engine, index=False) ###Output _____no_output_____ ###Markdown ETL Pipeline PreparationFollow the instructions below to help you create your ETL pipeline. 1. Import libraries and load datasets.- Import Python libraries- Load `disaster_messages.csv` into a dataframe and inspect the first few lines.- Load `disaster_categories.csv` into a dataframe and inspect the first few lines. ###Code # Import libraries import pandas as pd from sqlalchemy import create_engine # Load messages dataset messages = pd.read_csv('disaster_messages.csv') messages.head() # Load categories dataset categories = pd.read_csv('disaster_categories.csv') categories.head() ###Output _____no_output_____ ###Markdown 2. Merge datasets.- Merge the messages and categories datasets using the common id- Assign this combined dataset to `df`, which will be cleaned in the following steps ###Code # Merge datasets on common 'id' field df = pd.merge(messages, categories, left_on='id', right_on='id', how='outer') df.head() # Quick check of merged data print(df.shape) print(df.info()) print(df.isnull().sum()) ###Output (26386, 5) <class 'pandas.core.frame.DataFrame'> Int64Index: 26386 entries, 0 to 26385 Data columns (total 5 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 id 26386 non-null int64 1 message 26386 non-null object 2 original 10246 non-null object 3 genre 26386 non-null object 4 categories 26386 non-null object dtypes: int64(1), object(4) memory usage: 1.2+ MB None id 0 message 0 original 16140 genre 0 categories 0 dtype: int64 ###Markdown 3. Split `categories` into separate category columns.- Split the values in the `categories` column on the `;` character so that each value becomes a separate column. You'll find [this method](https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.Series.str.split.html) very helpful! Make sure to set `expand=True`.- Use the first row of categories dataframe to create column names for the categories data.- Rename columns of `categories` with new column names. ###Code # Create a dataframe of the 36 individual category columns categories = df.categories.str.split(';', expand=True) categories.head() # Select the first row of the categories dataframe row = categories.iloc[0,:] # Use this row to extract a list of new column names for categories. 
# one way is to apply a lambda function that takes everything # up to the second to last character of each string with slicing category_colnames = row.map(lambda x: x[:-2]) print(category_colnames) # Rename the columns of `categories` categories.columns = category_colnames categories.head() ###Output _____no_output_____ ###Markdown 4. Convert category values to just numbers 0 or 1.- Iterate through the category columns in df to keep only the last character of each string (the 1 or 0). For example, `related-0` becomes `0`, `related-1` becomes `1`. Convert the string to a numeric value.- You can perform [normal string actions on Pandas Series](https://pandas.pydata.org/pandas-docs/stable/text.htmlindexing-with-str), like indexing, by including `.str` after the Series. You may need to first convert the Series to be of type string, which you can do with `astype(str)`. ###Code # Iterate through columns in categories DataFrame for column in categories.columns: # Set each value to be the last character of the string categories[column] = categories[column].str[-1] # Convert column from string to numeric categories[column] = pd.to_numeric(categories[column]) categories.head() # Check value counts in each column for col in categories.columns: print(f'{col} \n {categories[col].value_counts(dropna=False)}') # Replace 2s in the 'related' column with 1s and check value counts categories.loc[categories.related==2, 'related'] = 1 categories.related.value_counts() ###Output _____no_output_____ ###Markdown 5. Replace `categories` column in `df` with new category columns.- Drop the categories column from the df dataframe since it is no longer needed.- Concatenate df and categories data frames. ###Code # Drop the original categories column from `df` df.drop(columns=['categories'], axis=1, inplace=True) df.head() # concatenate the original dataframe with the new `categories` dataframe df = pd.concat([df, categories], axis=1) df.head() ###Output _____no_output_____ ###Markdown 6. Remove duplicates.- Check how many duplicates are in this dataset.- Drop the duplicates.- Confirm duplicates were removed. ###Code # Check number of duplicates df.duplicated().sum() # Drop duplicates print(df.shape) df.drop_duplicates(inplace=True) print(df.shape) # Check number of duplicates df.duplicated().sum() ###Output _____no_output_____ ###Markdown 7. Save the clean dataset into an sqlite database.You can do this with pandas [`to_sql` method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html) combined with the SQLAlchemy library. Remember to import SQLAlchemy's `create_engine` in the first cell of this notebook to use it below. ###Code engine = create_engine('sqlite:///DisasterResponse.db') df.to_sql('messages', engine, index=False, if_exists='replace') ###Output _____no_output_____ ###Markdown ETL Pipeline PreparationFollow the instructions below to help you create your ETL pipeline. 1. Import libraries and load datasets.- Import Python libraries- Load `messages.csv` into a dataframe and inspect the first few lines.- Load `categories.csv` into a dataframe and inspect the first few lines. ###Code # import libraries import numpy as np import pandas as pd from sqlalchemy import create_engine ###Output _____no_output_____ ###Markdown ETL Pipeline PreparationFollow the instructions below to help you create your ETL pipeline. 1. 
Import libraries and load datasets.- Import Python libraries- Load `messages.csv` into a dataframe and inspect the first few lines.- Load `categories.csv` into a dataframe and inspect the first few lines. ###Code # import libraries import pandas as pd #import sqlite3 from sqlalchemy import create_engine # load messages dataset messages = pd.read_csv('disaster_messages.csv') messages.head() # load categories dataset categories = pd.read_csv('disaster_categories.csv') categories.head() ###Output _____no_output_____ ###Markdown 2. Merge datasets.- Merge the messages and categories datasets using the common id- Assign this combined dataset to `df`, which will be cleaned in the following steps ###Code # merge datasets df = messages.merge(categories, how='outer',\ on=['id']) df.head() ###Output _____no_output_____ ###Markdown 3. Split `categories` into separate category columns.- Split the values in the `categories` column on the `;` character so that each value becomes a separate column. You'll find [this method](https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.Series.str.split.html) very helpful! Make sure to set `expand=True`.- Use the first row of categories dataframe to create column names for the categories data.- Rename columns of `categories` with new column names. ###Code # create a dataframe of the 36 individual category columns categories = df["categories"].str.split(';', -1, True) categories.head() # select the first row of the categories dataframe row = categories.iloc[0] # use this row to extract a list of new column names for categories. # one way is to apply a lambda function that takes everything # up to the second to last character of each string with slicing category_colnames = row.apply(lambda x : str(x)[:-2]) print(category_colnames) # rename the columns of `categories` categories.columns = category_colnames categories.head() ###Output _____no_output_____ ###Markdown 4. Convert category values to just numbers 0 or 1.- Iterate through the category columns in df to keep only the last character of each string (the 1 or 0). For example, `related-0` becomes `0`, `related-1` becomes `1`. Convert the string to a numeric value.- You can perform [normal string actions on Pandas Series](https://pandas.pydata.org/pandas-docs/stable/text.htmlindexing-with-str), like indexing, by including `.str` after the Series. You may need to first convert the Series to be of type string, which you can do with `astype(str)`. ###Code for column in categories: # set each value to be the last character of the string categories[column] = categories[column].str[-1:] # convert column from string to numeric categories[column] = pd.to_numeric(categories[column]) categories.head() ###Output _____no_output_____ ###Markdown 5. Replace `categories` column in `df` with new category columns.- Drop the categories column from the df dataframe since it is no longer needed.- Concatenate df and categories data frames. ###Code # drop the original categories column from `df` df = df.drop(['categories'], axis=1) df.head() # concatenate the original dataframe with the new `categories` dataframe df = pd.concat([df, categories], axis=1) df.head() ###Output _____no_output_____ ###Markdown 6. Remove duplicates.- Check how many duplicates are in this dataset.- Drop the duplicates.- Confirm duplicates were removed. ###Code # check number of duplicates print(df.duplicated().sum()) # drop duplicates df = df.drop_duplicates() # check number of duplicates print(df.duplicated().sum()) ###Output 0 ###Markdown 7. 
Save the clean dataset into an SQLite database. You can do this with pandas [`to_sql` method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html) combined with the SQLAlchemy library. Remember to import SQLAlchemy's `create_engine` in the first cell of this notebook to use it below. ###Code engine = create_engine('sqlite:///FigureEightProject.db') # if_exists='replace' lets the cell be re-run without raising a "table exists" error df.to_sql('messages', engine, index=False, if_exists='replace') ###Output _____no_output_____
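###Markdown As a quick sanity check on the save step above, the table can be read straight back with pandas. This is a small sketch added here, not a cell from the original notebook; it reuses the `FigureEightProject.db` database and `messages` table names from the previous cell. ###Code
import pandas as pd
from sqlalchemy import create_engine

# Re-open the SQLite database and read the table back to verify the write.
engine = create_engine('sqlite:///FigureEightProject.db')
check = pd.read_sql_table('messages', engine)
check.head()
###Output _____no_output_____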
ryan-1994/Ormsby_wavelet_reproducibility.ipynb
###Markdown What is an Ormsby wavelet? Let's try to reproduce an Ormsby wavelet (aka trapezoidal spectrum wavelet). This is a low-cut / low-pass / high-pass / high-cut filter.The paper most people seem to reference is [Ryan 1994](https://csegrecorder.com/articles/view/ricker-ormsby-klander-butterworth-a-choice-of-wavelets), Ricker, Ormsby, Klander, Butterworth – A Choice of Wavelets, CSEG Recorder, vol 19, no 07.It contains this figure:I take this to be the correct wavelet. `bruges`If you don't already have `bruges`, you may need to `pip install bruges`installing it into your Python environment, and then re-open this notebook.https://github.com/agile-geoscience/bruges ###Code import numpy as np %matplotlib inline import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Make a set of points corresponding to an Ormsby bandpass filter ###Code from bruges.filters import ormsby ###Output _____no_output_____ ###Markdown Bruges has a function for creating an Ormsby wavelet. It was coded from the equation in [Ryan 1994](https://csegrecorder.com/articles/view/ricker-ormsby-klander-butterworth-a-choice-of-wavelets), although that equation contains an error in my opinion (it should be $ft$ inside the sinc function, not $\pi ft$). The error is repeated in [this article](file:///home/matt/Downloads/IJET-10980.pdf) (which has a lot of other problems besides this one!) and on SEG Wiki (see below).We need to pass in the the duration, the sample rate dt and the four bandpass frequencies, freqs as a list. ###Code freqs = [5, 10, 40, 45] orms, tw = ormsby(0.4, dt=0.004, f=freqs, return_t=True) ###Output _____no_output_____ ###Markdown Let's plot the Ormsby wavelet in the time domain. ###Code fig, ax = plt.subplots(figsize=(8,3)) ax.plot(tw, orms, 'bo-', lw=2, alpha=0.75, label='Ormsby ({}-{}-{}-{} Hz)'.format(*freqs)) ax.legend(loc=1) ax.set_xlim(-0.2,0.2) ax.set_xlabel('time (s)', fontsize=14) ax.set_ylabel('amplitude', fontsize=14) ax.grid() plt.show() ###Output _____no_output_____ ###Markdown This matches the figure in Ryan 1994: ###Code from scipy.interpolate import interp1d def smoothify(t, w): func = interp1d(t, w, kind='cubic') tnew = np.linspace(t.min(), t.max(), 1000) return tnew, func(tnew) fig, ax = plt.subplots(figsize=(15, 6)) # Show the background plot. img = plt.imread('Ormsby_wavelet_data-area.png') ax.imshow(img, extent=[-0.2, 0.2, -0.6, 1.0], aspect='auto') # Plot the smoothed waveforms. ax.plot(*smoothify(tw, orms), 'b', lw=1, alpha=0.75) # Plot the points we computed. ax.plot(tw, orms, 'bo', label='Bruges') # Trimmings. ax.legend(loc=1) ax.set_xlim(-0.2,0.2) ax.set_ylim(-0.6, 1.0) ax.set_xlabel('time (s)', fontsize=14) ax.set_ylabel('amplitude', fontsize=14) ax.set_title("Ormsby {}-{}-{}-{}".format(*freqs)) ax.grid(c='k', alpha=0.2) plt.show() ###Output _____no_output_____ ###Markdown Ryan 1994, and SEG WikiThis is the one given in Ryan 1994, which seems to contain an error (there's an extra `np.pi` in the `sinc` function). [The SEG Wiki version](https://wiki.seg.org/wiki/Ormsby_wavelet) also has a factor $A$ on the first term, but it is not defined so I ignored it:The resulting frequency is too high. ###Code from collections import namedtuple import numpy as np def ormsby2(duration, dt, f, return_t=False): """ Implementation on SEG Wiki (with mysterious multipler A = 1). Apart from the multiplier A, that implementation is similar to the one in Ryan 1994, including the 'extra' pi inside the sinc function. 
""" t = np.arange(-duration/2, duration/2, dt) f1, f2, f3, f4 = f def numerator(f, t): """I canceled the pi's.""" return (np.sinc(np.pi* f * t)**2) * np.pi * f**2 w = ((numerator(f4, t)/(f4 - f3)) - (numerator(f3, t)/(f4 - f3)) - (numerator(f2, t)/(f2 - f1)) + (numerator(f1, t)/(f2 - f1))) w /= np.amax(w) return w, t orms2, tw2 = ormsby2(0.4, dt=0.004, f=freqs, return_t=True) plt.plot(tw2, orms) plt.plot(tw2, orms2) ###Output _____no_output_____ ###Markdown The result certainly does not match the figure: ###Code fig, ax = plt.subplots(figsize=(15, 6)) # Show the background plot. img = plt.imread('Ormsby_wavelet_data-area.png') ax.imshow(img, extent=[-0.2, 0.2, -0.6, 1.0], aspect='auto') # Plot the smoothed waveforms. ax.plot(*smoothify(tw, orms), 'b', lw=1, alpha=0.75) ax.plot(*smoothify(tw2, orms2), 'r', lw=1, alpha=0.75) # Plot the points we computed. ax.plot(tw, orms, 'bo', label='Bruges') ax.plot(tw2, orms2, 'ro', label='Ryan 94') # Trimmings. ax.legend(loc=1) ax.set_xlim(-0.2,0.2) ax.set_ylim(-0.6, 1.0) ax.set_xlabel('time (s)', fontsize=14) ax.set_ylabel('amplitude', fontsize=14) ax.set_title("Ormsby {}-{}-{}-{}".format(*freqs)) ax.grid(c='k', alpha=0.2) plt.show() ###Output _____no_output_____ ###Markdown Montclair State UniversitywhereThe result is roughly the right shape and frequency, but does not match the figure in Ryan. ###Code def ormsby3(duration, dt, f, return_t=True): """ Implementation from Montclair State University https://sites.google.com/site/cwilshusenreu2012/research-updates """ t = np.arange(-duration/2, duration/2, dt) f1, f2, f3, f4 = f c4 = np.pi * f4**2 / (f4 - f3) c3 = np.pi * f3**2 / (f4 - f3) c2 = np.pi * f2**2 / (f2 - f1) c1 = np.pi * f1**2 / (f2 - f1) mul = 1 / (c4 - c3 - c2 + c1) def part(c, f, t): return c * (np.sin(np.pi * f * t) / (np.pi * f * t))**2 w = mul * (part(c4, f4, t) - part(c3, f3, t) - part(c2, f2, t) - part(c1, f1, t)) w /= np.amax(w) return w, t orms3, tw3 = ormsby3(0.4, dt=0.004, f=freqs, return_t=True) plt.plot(tw3, orms) plt.plot(tw3, orms3) ###Output _____no_output_____ ###Markdown `seismic.jl` ###Code def ormsby4(duration, dt, f, return_t=True): """ Implementation from Montclair State University https://sites.google.com/site/cwilshusenreu2012/research-updates """ t = np.arange(-duration/2, duration/2, dt) f1, f2, f3, f4 = f fc = (f2+f3)/2.0 nw = 2.2/(fc*dt) nc = np.floor(nw/2) nw = 2*nc + 1 a4 = (np.pi*f4)**2/(np.pi*(f4-f3)) a3 = (np.pi*f3)**2/(np.pi*(f4-f3)) a2 = (np.pi*f2)**2/(np.pi*(f2-f1)) a1 = (np.pi*f1)**2/(np.pi*(f2-f1)) u = a4*(np.sinc(f4*t))**2 - a3*(np.sinc(f3*t))**2 v = a2*(np.sinc(f2*t))**2 - a1*(np.sinc(f1*t))**2 w = u - v w = w*np.hamming(w.size)/np.max(w) return w, t orms4, tw4 = ormsby4(0.4, dt=0.004, f=freqs, return_t=True) ###Output _____no_output_____ ###Markdown The result matches the Ryan figure almost perfectly... apart from the application of the Hamming window. (Without that, it's the same wavelet.) ###Code plt.plot(tw3, orms) plt.plot(tw4, orms4) ###Output _____no_output_____ ###Markdown KIT / GPIAG / SOFI2DGPIAG / SOFI2D software from geophysical institut of the KIT (Karlsruhe Institute of Technology)https://git.scc.kit.edu/GPIAG-Software/SOFI2D/blob/Release/mfiles/wavelet_gen.m ###Code def ormsby5(duration, dt, f, return_t=True): """ Implementation from KIT. 
""" fo1, fo2, fo3, fo4 = f t = np.arange(0, duration, dt) - duration / 2 tau1 = ((np.pi*fo4)**2)/((np.pi*fo4)-np.pi*fo3) tau2 = ((np.pi*fo3)**2)/((np.pi*fo4)-np.pi*fo3) tau3 = ((np.pi*fo2)**2)/((np.pi*fo2)-np.pi*fo1) tau4 = ((np.pi*fo1)**2)/((np.pi*fo2)-np.pi*fo1) # find sample point at which t-tshift is a minimum minshift, ishift = 0, len(t)//2 # calculate sinc function x = fo4*t sinc1 = np.sin(np.pi*x)/(np.pi*x + 1e-9) sinc1[ishift] = 1.0 x = fo3*t sinc2 = np.sin(np.pi*x)/(np.pi*x + 1e-9) sinc2[ishift] = 1.0 x = fo2*t sinc3 = np.sin(np.pi*x)/(np.pi*x + 1e-9) sinc3[ishift] = 1.0 x = fo1*t sinc4 = np.sin(np.pi*x)/(np.pi*x + 1e-9) sinc4[ishift] = 1.0 # calculate Ormsby signal ft = ((tau1 * sinc1**2) - (tau2 * sinc2**2)) - ((tau3 * sinc3**2) - (tau4 * sinc4**2)) ft = ft / max(ft) return ft, t orms5, tw5 = ormsby5(0.4, dt=0.004, f=freqs, return_t=True) ###Output _____no_output_____ ###Markdown The result exactly matches the Ryan wavelet. ###Code np.allclose(orms, orms5) plt.plot(tw3, orms) plt.plot(tw5, orms5) ###Output _____no_output_____ ###Markdown Comparison ###Code fig, ax = plt.subplots(figsize=(15, 6)) # Show the background plot. img = plt.imread('Ormsby_wavelet_data-area.png') ax.imshow(img, extent=[-0.2, 0.2, -0.6, 1.0], aspect='auto') # Plot the smoothed waveforms. ax.plot(*smoothify(tw, orms), 'b', lw=1, alpha=0.75) ax.plot(*smoothify(tw2, orms2), 'r', lw=1, alpha=0.75) ax.plot(*smoothify(tw3, orms3), 'g', lw=1, alpha=0.75) ax.plot(*smoothify(tw4, orms4), 'c', lw=1, alpha=0.75) ax.plot(*smoothify(tw5, orms5), 'y', lw=1, alpha=0.75) # Plot the points. ax.plot(tw, orms, 'bo', label='Bruges') ax.plot(tw2, orms2, 'ro', label='Ryan 94') ax.plot(tw3, orms3, 'go', label='Montclair') ax.plot(tw4, orms4, 'co', label='seismic.jl') ax.plot(tw5, orms5, 'yo', label='KIT') # Trimmings. ax.legend(loc=1) ax.set_xlim(-0.2,0.2) ax.set_ylim(-0.6, 1.0) ax.set_xlabel('time (s)', fontsize=14) ax.set_ylabel('amplitude', fontsize=14) ax.set_title("Ormsby {}-{}-{}-{}".format(*freqs)) ax.grid(c='k', alpha=0.2) plt.show() ###Output _____no_output_____ ###Markdown Miong & Stewart 2007Soo-Kyung Miong, Robert R. 
Stewart and Joe Wong, Characterizing the near surface with VSP and well logs, CREWES Research Report — Volume 19 (2007).The authors do not give an implementation, but I digitized the Ormsby wavelet in their figure 7 using https://automeris.io/WebPlotDigitizer/ ###Code miong = np.array([[-0.0385086,-0.0024259], [-0.0317604,-0.0040431], [-0.0267726,-0.0040431], [-0.0215501,-0.0040431], [-0.0193203,-0.0072776], [-0.0180880,-0.0040431], [-0.0162103,-0.0072776], [-0.0148020,-0.0032345], [-0.0135110,-0.0097035], [-0.0123961,-0.0040431], [-0.0116919,-0.0040431], [-0.0103716,-0.0129380], [-0.0089340,0.0000000], [-0.0074670,-0.0177898], [-0.0068215,-0.0056604], [-0.0061174,0.0080863], [-0.0049438,-0.0307278], [-0.0042983,-0.0137466], [-0.0034768,0.0291105], [-0.0019511,-0.0800539], [-0.0013056,-0.0129380], [-0.0003667,0.3000000], [0.0003961,0.3000000], [0.0014523,-0.0226415], [0.0019804,-0.0808625], [0.0028606,0.0048518], [0.0034474,0.0299191], [0.0045037,-0.0266846], [0.0052078,-0.0185984], [0.0061467,0.0080863], [0.0069095,-0.0048518], [0.0074963,-0.0177898], [0.0089633,0.0000000], [0.0103716,-0.0129380], [0.0114279,-0.0048518], [0.0125428,-0.0048518], [0.0134230,-0.0097035], [0.0147139,-0.0040431], [0.0162396,-0.0072776], [0.0179413,-0.0040431], [0.0192323,-0.0072776], [0.0208753,-0.0040431], [0.0253936,-0.0040431], [0.0306161,-0.0040431], [0.0345477,-0.0024259], [0.0389487,-0.0024259]]) t6, w6 = smoothify(*miong.T) plt.plot(t6, w6) ###Output _____no_output_____ ###Markdown The authors don't give the parameters of this wavelet, so let's try to match it. ###Code orms0, tw0 = ormsby(0.08, dt=0.00008, f=[8, 24, 350, 400], return_t=True) plt.figure(figsize=(12, 8)) plt.plot(t6, 3*w6, lw=3) plt.plot(tw0, orms0, lw=2) ###Output _____no_output_____ ###Markdown So it might be a reasonable implementation, although this particular wavelet is very high bandwidth, but we can't know for sure. Check spectrumsCheck the spectrums of the wavelets: ###Code from scipy.signal import welch plt.figure(figsize=(15, 4)) plt.plot(*welch(orms, fs=250, nperseg=100), c='b', lw=4, label='bruges') plt.plot(*welch(orms2, fs=250, nperseg=100), label='SEG Wiki') plt.plot(*welch(orms3, fs=250, nperseg=100), label='Montclair') plt.plot(*welch(orms4, fs=250, nperseg=100), label='seismic.jl') plt.plot(*welch(orms5, fs=250, nperseg=100), label='KIT') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown What is an Ormsby wavelet? Let's try to reproduce an Ormsby wavelet (aka trapezoidal spectrum wavelet). This is a low-cut / low-pass / high-pass / high-cut filter.The original reference is Ormsby, J (1961). Design of numerical filters with applications to missile data processing. Journal of the ACM 8 (3), p 440–466. https://doi.org/10.1145/321075.321087The paper most people seem to reference is [Ryan 1994](https://csegrecorder.com/articles/view/ricker-ormsby-klander-butterworth-a-choice-of-wavelets), Ricker, Ormsby, Klander, Butterworth – A Choice of Wavelets, CSEG Recorder, vol 19, no 07.It contains this figure:I take this to be the correct wavelet. 
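Since Ryan's equation appears only in a figure (which does not reproduce here), it helps to write the target wavelet out explicitly. The expression below is my reading of the trapezoidal Ormsby wavelet implied by the implementations that end up matching Ryan's figure later in this notebook (bruges and the KIT code), using the signal-processing convention $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$ and with $f_1 < f_2 < f_3 < f_4$ the low-cut, low-pass, high-pass and high-cut frequencies — treat it as a sketch rather than a quotation of Ryan's paper:

$$ w(t) = \left[ \frac{\pi f_4^2}{f_4 - f_3}\,\mathrm{sinc}^2(f_4 t) - \frac{\pi f_3^2}{f_4 - f_3}\,\mathrm{sinc}^2(f_3 t) \right] - \left[ \frac{\pi f_2^2}{f_2 - f_1}\,\mathrm{sinc}^2(f_2 t) - \frac{\pi f_1^2}{f_2 - f_1}\,\mathrm{sinc}^2(f_1 t) \right], $$

usually normalized afterwards so that the peak amplitude at $t = 0$ is 1.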
`bruges` time domainIf you don't already have [`bruges`](https://bruges.readthedocs.io/), you may need to pip install brugesinstalling it into your Python environment, and then re-open this notebook.https://github.com/agile-geoscience/bruges ###Code import numpy as np import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Make a set of points corresponding to an Ormsby bandpass filter ###Code from bruges.filters import ormsby ###Output _____no_output_____ ###Markdown Bruges has an analytic time-domain function for creating an Ormsby wavelet. It was coded from the equation in [Ryan 1994](https://csegrecorder.com/articles/view/ricker-ormsby-klander-butterworth-a-choice-of-wavelets).We need to pass in the the duration, the sample rate dt and the four bandpass frequencies, freqs as a list. ###Code freqs = [5, 10, 40, 45] orms, tw = ormsby(0.4, dt=0.004, f=freqs, return_t=True, sym=True) ###Output _____no_output_____ ###Markdown Let's plot the Ormsby wavelet in the time domain. ###Code fig, ax = plt.subplots(figsize=(8,3)) ax.plot(tw, orms, 'bo-', lw=2, alpha=0.75, label='Ormsby ({}-{}-{}-{} Hz)'.format(*freqs)) ax.legend(loc=1) ax.set_xlim(-0.2,0.2) ax.set_xlabel('time (s)', fontsize=14) ax.set_ylabel('amplitude', fontsize=14) ax.grid() plt.show() ###Output _____no_output_____ ###Markdown This matches the figure in Ryan 1994: ###Code from scipy.interpolate import interp1d def smoothify(t, w): func = interp1d(t, w, kind='cubic') tnew = np.linspace(t.min(), t.max(), 1000) return tnew, func(tnew) fig, ax = plt.subplots(figsize=(15, 6)) # Show the background plot. img = plt.imread('Ormsby_wavelet_data-area.png') ax.imshow(img, extent=[-0.2, 0.2, -0.6, 1.0], aspect='auto') # Plot the smoothed waveforms. ax.plot(*smoothify(tw, orms), 'b', lw=1) # Plot the points we computed. ax.plot(tw, orms, 'bo', label='Bruges') # Trimmings. ax.legend(loc=1) ax.set_xlim(-0.2,0.2) ax.set_ylim(-0.6, 1.0) ax.set_xlabel('time (s)', fontsize=14) ax.set_ylabel('amplitude', fontsize=14) ax.set_title("Ormsby {}-{}-{}-{}".format(*freqs)) ax.grid(c='k', alpha=0.2) plt.show() ###Output _____no_output_____ ###Markdown New `bruges` FFT implementationYou will need `bruges` version 0.4.2 for this. ###Code from bruges.filters import ormsby_fft orms_fft, tw_fft = ormsby_fft(0.4, dt=0.004, f=freqs) plt.figure(figsize=(15, 6)) plt.plot(tw, orms, label='bruges') plt.plot(tw_fft, orms_fft, label='bruges FFT') plt.legend() ###Output _____no_output_____ ###Markdown The advantage of the FFT method is that we can change the amount of energy going into the low vs the high frequencies. For example, this Ormsby is 'pink' — it has more low-frequency energy than high. ###Code orms_pink, _ = ormsby_fft(0.4, dt=0.004, f=freqs, P=(0, -5)) plt.figure(figsize=(15, 6)) plt.plot(tw_fft, orms_fft, label='White') plt.plot(tw_fft, orms_pink, label='Pink') plt.legend() from scipy.signal import welch plt.figure(figsize=(15, 4)) plt.plot(*welch(orms_fft, fs=250, nperseg=100, scaling='spectrum'), c='lightgray', lw=4, label='bruges FFT, white') plt.plot(*welch(orms_pink, fs=250, nperseg=100, scaling='spectrum'), c='pink', lw=4, label='bruges FFT, pink') plt.legend() plt.grid(c='k', alpha=0.1) plt.title('Power spectrum') plt.show() ###Output _____no_output_____ ###Markdown OpenGeo SolutionsThis is from their web app, https://www.opengeosolutions.com/technologies/blockfiltertechUsing parameters: 5-10-40-45, 0% taper, 0-400 ms. 
###Code import pandas as pd url = "https://raw.githubusercontent.com/softwareunderground/repro-zoo/master/ryan-1994/Traces_20210209_193429.csv" df = pd.read_csv(url) df.head() tw1 = df['Time (ms)'] / 1000 - 0.215 # This is weird. orms1 = df['Filter Impulse Response'] plt.figure(figsize=(15, 6)) plt.plot(tw, orms, label='bruges') plt.plot(tw1, orms1, label='ogs') plt.legend() ###Output _____no_output_____ ###Markdown Seems like they are applying some sort of taper. Ryan 1994, and SEG WikiThis is the one given in Ryan 1994. It seems like there's an 'extra' `np.pi` in the `sinc` function — I originally thought this was an error, but it turns out that NumPy's `sinc()` just includes this $\pi$ term.[The SEG Wiki version](https://wiki.seg.org/wiki/Ormsby_wavelet) also had a factor $A$ on the first term, but it is not defined so I ignored it. ###Code from collections import namedtuple import numpy as np def sinc(x): """ Conventional (non-signal-processing) definition of sinc function. """ return np.sin(x) / x def ormsby2(duration, dt, f): """ Implementation on SEG Wiki (with mysterious multipler A = 1). Apart from the multiplier A, that implementation is similar to the one in Ryan 1994, including the 'extra' pi inside the sinc function. """ t = np.arange(-duration/2, dt+duration/2, dt) # Get 'extra' sample. f1, f2, f3, f4 = f def numerator(f, t): """I canceled the pi's.""" return (sinc(np.pi* f * t)**2) * np.pi * f**2 w = ((numerator(f4, t)/(f4 - f3)) - (numerator(f3, t)/(f4 - f3)) - (numerator(f2, t)/(f2 - f1)) + (numerator(f1, t)/(f2 - f1))) w /= np.amax(w) return w, t orms2, tw2 = ormsby2(0.4, dt=0.004, f=freqs) plt.plot(tw, orms) plt.plot(tw2, orms2, '--') ###Output _____no_output_____ ###Markdown This new result matches the figure: ###Code fig, ax = plt.subplots(figsize=(15, 6), facecolor='white') # Show the background plot. img = plt.imread('Ormsby_wavelet_data-area.png') ax.imshow(img, extent=[-0.2, 0.2, -0.6, 1.0], aspect='auto') # Plot the smoothed waveforms. # ax.plot(*smoothify(tw, orms), 'g', lw=3, alpha=0.75, label='Bruges') ax.plot(*smoothify(tw2, orms2), 'r--', lw=2, alpha=0.75, label='Ryan 1994') # Plot the points we computed. # ax.plot(tw, orms, 'bo', label='Bruges') ax.plot(tw2, orms2, 'ro', label='Ryan 94') # Trimmings. ax.legend(loc=1, fontsize=12) ax.set_xlim(-0.2,0.2) ax.set_ylim(-0.6, 1.0) ax.set_xlabel('time (s)', fontsize=14) ax.set_ylabel('amplitude', fontsize=14) ax.set_title("Ormsby {}-{}-{}-{}".format(*freqs)) ax.grid(c='k', alpha=0.2) plt.show() ###Output _____no_output_____ ###Markdown Montclair State UniversitywhereThe result is roughly the right shape and frequency, but does not match the figure in Ryan. ###Code def ormsby3(duration, dt, f, return_t=True): """ Implementation from Montclair State University https://sites.google.com/site/cwilshusenreu2012/research-updates """ t = np.arange(-duration/2, dt+duration/2, dt) # Get 'extra' sample. 
f1, f2, f3, f4 = f c4 = np.pi * f4**2 / (f4 - f3) c3 = np.pi * f3**2 / (f4 - f3) c2 = np.pi * f2**2 / (f2 - f1) c1 = np.pi * f1**2 / (f2 - f1) mul = 1 / (c4 - c3 - c2 + c1) def part(c, f, t): return c * (np.sin(np.pi * f * t) / (np.pi * f * t))**2 w = mul * (part(c4, f4, t) - part(c3, f3, t) - part(c2, f2, t) - part(c1, f1, t)) w /= np.amax(w) return w, t orms3, tw3 = ormsby3(0.4, dt=0.004, f=freqs, return_t=True) plt.plot(tw, orms) plt.plot(tw3, orms3) ###Output _____no_output_____ ###Markdown `seismic.jl`From here: http://seismicjulia.github.io/Seismic.jl/Wavelets/page1/ ###Code def ormsby4(duration, dt, f): """ From seismic.jl """ t = np.arange(-duration/2, dt+duration/2, dt) # Get 'extra' sample. f1, f2, f3, f4 = f fc = (f2+f3)/2.0 nw = 2.2/(fc*dt) nc = np.floor(nw/2) nw = 2*nc + 1 a4 = (np.pi*f4)**2/(np.pi*(f4-f3)) a3 = (np.pi*f3)**2/(np.pi*(f4-f3)) a2 = (np.pi*f2)**2/(np.pi*(f2-f1)) a1 = (np.pi*f1)**2/(np.pi*(f2-f1)) u = a4*(np.sinc(f4*t))**2 - a3*(np.sinc(f3*t))**2 v = a2*(np.sinc(f2*t))**2 - a1*(np.sinc(f1*t))**2 w = u - v w = w*np.hamming(w.size)/np.max(w) return w, t orms4, tw4 = ormsby4(0.4, dt=0.004, f=freqs) ###Output _____no_output_____ ###Markdown The result matches the Ryan figure almost perfectly... apart from the application of the Hamming window. (Without that, it's the same wavelet.) ###Code plt.plot(tw, orms) plt.plot(tw4, orms4) ###Output _____no_output_____ ###Markdown KIT / GPIAG / SOFI2DGPIAG / SOFI2D software from geophysical institut of the KIT (Karlsruhe Institute of Technology)https://git.scc.kit.edu/GPIAG-Software/SOFI2D/blob/Release/mfiles/wavelet_gen.mNB MATLAB also uses the extra pi term in the sinc function, so it should behave like NumPy. ###Code def ormsby5(duration, dt, f): """ Implementation from KIT. """ fo1, fo2, fo3, fo4 = f t = np.arange(0, dt+duration, dt) - duration / 2 tau1 = ((np.pi*fo4)**2)/((np.pi*fo4)-np.pi*fo3) tau2 = ((np.pi*fo3)**2)/((np.pi*fo4)-np.pi*fo3) tau3 = ((np.pi*fo2)**2)/((np.pi*fo2)-np.pi*fo1) tau4 = ((np.pi*fo1)**2)/((np.pi*fo2)-np.pi*fo1) # find sample point at which t-tshift is a minimum minshift, ishift = 0, len(t)//2 # calculate sinc function x = fo4*t sinc1 = np.sin(np.pi*x)/(np.pi*x + 1e-9) sinc1[ishift] = 1.0 x = fo3*t sinc2 = np.sin(np.pi*x)/(np.pi*x + 1e-9) sinc2[ishift] = 1.0 x = fo2*t sinc3 = np.sin(np.pi*x)/(np.pi*x + 1e-9) sinc3[ishift] = 1.0 x = fo1*t sinc4 = np.sin(np.pi*x)/(np.pi*x + 1e-9) sinc4[ishift] = 1.0 # calculate Ormsby signal ft = ((tau1 * sinc1**2) - (tau2 * sinc2**2)) - ((tau3 * sinc3**2) - (tau4 * sinc4**2)) ft = ft / max(ft) return ft, t orms5, tw5 = ormsby5(0.4, dt=0.004, f=freqs) ###Output _____no_output_____ ###Markdown The result exactly matches the Ryan wavelet. ###Code np.allclose(orms, orms5) plt.figure(figsize=(15, 3)) plt.plot(tw, orms) plt.plot(tw5, orms5) ###Output _____no_output_____ ###Markdown Comparison ###Code fig, ax = plt.subplots(figsize=(15, 6)) # Show the background plot. img = plt.imread('Ormsby_wavelet_data-area.png') ax.imshow(img, extent=[-0.2, 0.2, -0.6, 1.0], aspect='auto') # Plot the smoothed waveforms. ax.plot(*smoothify(tw, orms), 'b', lw=1, alpha=0.75) ax.plot(*smoothify(tw2, orms2), 'r', lw=1, alpha=0.75) ax.plot(*smoothify(tw3, orms3), 'g', lw=1, alpha=0.75) ax.plot(*smoothify(tw4, orms4), 'c', lw=1, alpha=0.75) ax.plot(*smoothify(tw5, orms5), 'y', lw=1, alpha=0.75) # Plot the points. 
ax.plot(tw, orms, 'bo', label='Bruges') ax.plot(tw2, orms2, 'ro', label='Ryan 94') ax.plot(tw3, orms3, 'go', label='Montclair') ax.plot(tw4, orms4, 'co', label='seismic.jl') ax.plot(tw5, orms5, 'yo', label='KIT') # Trimmings. ax.legend(loc=1) ax.set_xlim(-0.2,0.2) ax.set_ylim(-0.6, 1.0) ax.set_xlabel('time (s)', fontsize=14) ax.set_ylabel('amplitude', fontsize=14) ax.set_title("Ormsby {}-{}-{}-{}".format(*freqs)) ax.grid(c='k', alpha=0.2) plt.show() ###Output _____no_output_____ ###Markdown Miong & Stewart 2007Soo-Kyung Miong, Robert R. Stewart and Joe Wong, Characterizing the near surface with VSP and well logs, CREWES Research Report — Volume 19 (2007).The authors do not give an implementation, but I digitized the Ormsby wavelet in their figure 7 using https://automeris.io/WebPlotDigitizer/ ###Code miong = np.array([[-0.0385086,-0.0024259], [-0.0317604,-0.0040431], [-0.0267726,-0.0040431], [-0.0215501,-0.0040431], [-0.0193203,-0.0072776], [-0.0180880,-0.0040431], [-0.0162103,-0.0072776], [-0.0148020,-0.0032345], [-0.0135110,-0.0097035], [-0.0123961,-0.0040431], [-0.0116919,-0.0040431], [-0.0103716,-0.0129380], [-0.0089340,0.0000000], [-0.0074670,-0.0177898], [-0.0068215,-0.0056604], [-0.0061174,0.0080863], [-0.0049438,-0.0307278], [-0.0042983,-0.0137466], [-0.0034768,0.0291105], [-0.0019511,-0.0800539], [-0.0013056,-0.0129380], [-0.0003667,0.3000000], [0.0003961,0.3000000], [0.0014523,-0.0226415], [0.0019804,-0.0808625], [0.0028606,0.0048518], [0.0034474,0.0299191], [0.0045037,-0.0266846], [0.0052078,-0.0185984], [0.0061467,0.0080863], [0.0069095,-0.0048518], [0.0074963,-0.0177898], [0.0089633,0.0000000], [0.0103716,-0.0129380], [0.0114279,-0.0048518], [0.0125428,-0.0048518], [0.0134230,-0.0097035], [0.0147139,-0.0040431], [0.0162396,-0.0072776], [0.0179413,-0.0040431], [0.0192323,-0.0072776], [0.0208753,-0.0040431], [0.0253936,-0.0040431], [0.0306161,-0.0040431], [0.0345477,-0.0024259], [0.0389487,-0.0024259]]) t6, w6 = smoothify(*miong.T) plt.plot(t6, w6) ###Output _____no_output_____ ###Markdown The authors don't give the parameters of this wavelet, so let's try to match it. ###Code orms0, tw0 = ormsby(0.078, dt=0.00008, f=[8, 24, 325, 400], return_t=True) plt.figure(figsize=(12, 8)) plt.plot(t6, 3*w6, lw=3) plt.plot(tw0, orms0, lw=2) ###Output /Users/matt/opt/miniconda3/envs/py39/lib/python3.9/site-packages/bruges/filters/wavelets.py:425: FutureWarning: In future releases, the default legacy behaviour will be removed. We recommend setting sym=True. This will be the default in v0.5+. t = _get_time(duration, dt, sym=sym) ###Markdown So it might be a reasonable implementation, although this particular wavelet is very high bandwidth, but we can't know for sure. PetrelSee comments in Software Underground: https://swung.slack.com/archives/C094GV18T/p1612860192157300Long story short: they seem to apply a triangular taper, a bit like what `seismic.jl` does with the Hamming. 
Check spectrumsCheck the spectrums of the wavelets: ###Code from scipy.signal import welch plt.figure(figsize=(15, 4)) plt.semilogy(*welch(orms, fs=250, nperseg=100, scaling='spectrum'), c='b', lw=4, label='bruges') plt.semilogy(*welch(orms2, fs=250, nperseg=100, scaling='spectrum'), label='SEG Wiki') plt.semilogy(*welch(orms3, fs=250, nperseg=100, scaling='spectrum'), label='Montclair') plt.semilogy(*welch(orms4, fs=250, nperseg=100, scaling='spectrum'), label='seismic.jl') plt.semilogy(*welch(orms5, fs=250, nperseg=100, scaling='spectrum'), label='KIT') plt.legend() plt.grid(c='k', alpha=0.1) plt.title('Power spectrum') plt.show() ###Output _____no_output_____ ###Markdown Let's just compare an Ormsby to a Ricker: ###Code from bruges.filters import ricker rick, tw = ricker(0.4, dt=0.004, f=25, return_t=True) plt.figure(figsize=(15, 4)) plt.semilogy(*welch(orms, fs=250, nperseg=100, scaling='spectrum'), c='C0', lw=4, label='Ormsby') plt.semilogy(*welch(rick, fs=250, nperseg=100, scaling='spectrum'), c='C1', lw=4, label='Ricker') plt.legend() plt.grid(c='k', alpha=0.1) plt.title('Power spectrum') plt.show() from bruges.filters import ricker rick, tw = ricker(0.4, dt=0.004, f=25, return_t=True) f, Pxx_orms = welch(orms, fs=250, nperseg=100, scaling='spectrum') _, Pxx_rick = welch(rick, fs=250, nperseg=100, scaling='spectrum') fig, axs = plt.subplots(ncols=2, figsize=(15, 4), facecolor='white') ax = axs[0] ax.plot(f, np.sqrt(Pxx_orms), c='C0', lw=4, label='Ormsby', alpha=0.2) ax.plot(f, np.sqrt(Pxx_rick), c='C1', lw=4, label='Ricker') ax.set_title('Ricker wavelet, 25 Hz') ax.set_xlim(0, 80) ax.set_xlabel('Frequency [Hz]') ax.grid(c='k', alpha=0.1) ax = axs[1] ax.plot(f, np.sqrt(Pxx_rick), c='C1', lw=4, label='Ricker', alpha=0.2) ax.plot(f, np.sqrt(Pxx_orms), c='C0', lw=4, label='Ormsby') ax.set_title('Ormsby wavelet, 5–10–40–45 Hz') ax.set_xlim(0, 80) ax.set_xlabel('Frequency [Hz]') ax.grid(c='k', alpha=0.1) plt.suptitle('Amplitude spectrum', size=20) plt.show() ###Output /Users/matt/opt/miniconda3/envs/py39/lib/python3.9/site-packages/bruges/filters/wavelets.py:272: FutureWarning: In future releases, the default legacy behaviour will be removed. We recommend setting sym=True. This will be the default in v0.5+. t = _get_time(duration, dt, sym=sym)